AvengerMoJo

Based in Taiwan, previously lived in Beijing and Dallas, born in Hong Kong. Open-source engineer, software consultant, serial entrepreneur, loves cooking, 博雅 Mentor.

Introducing SUSE Enterprise Storage 6 (Part 3)

CLI — Don't crash the cluster on day 1:

$ ceph

This is probably the first command you will use to administer a cluster, together with its companions:

$ rbd
$ rados
$ radosgw-admin

First, you run the ceph command as the admin. If your organization has more than one admin, individual users can also be set up and authorized with different capabilities for each user accordingly. As you can imagine, the admin has all the Ceph capabilities and can access the complete cluster. To see which keys have already been set up in your cluster:

Authorization

$ ceph auth ls

Then you will see a list of names, each followed by key: and caps:. The names come in two formats. One represents client.username, which is basically a user. The other represents an application or daemon running inside the cluster, like osd.0, mgr.mon-0, etc.
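
For example, an entry for an OSD daemon and for the admin user looks roughly like this (the keys below are shortened placeholders, not real secrets):

osd.0
    key: AQ...==
    caps: [mgr] allow profile osd
    caps: [mon] allow profile osd
    caps: [osd] allow *
client.admin
    key: AQ...==
    caps: [mds] allow *
    caps: [mgr] allow *
    caps: [mon] allow *
    caps: [osd] allow *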

For the clients, you will need to set up the corresponding keyring file under

/etc/ceph/ceph.<entity>.keyring

/etc/ceph/ceph.client.admin.keyring is the default admin keyring file.

Or you can also get an individual key

$ ceph auth get client.admin
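
If you want the key written to a file, or just the raw secret itself, the same auth commands can do that too (the output path here is only an example):

$ ceph auth get client.admin -o /etc/ceph/ceph.client.admin.keyring
$ ceph auth print-key client.admin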

Create a user following these patterns:

$ ceph auth add client.me mon 'allow r' osd 'allow rw pool=mypool'
$ ceph auth get-or-create \
    client.you mon 'allow r' osd 'allow rw pool=Apool' -o you.keyring
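
Later on you can adjust or revoke a user's capabilities; for example, for the client.me user created above:

$ ceph auth caps client.me mon 'allow r' osd 'allow rw pool=mypool, allow r pool=Apool'
$ ceph auth del client.me

Note that ceph auth caps replaces all existing caps for that user, so list every capability you want to keep.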

The authentication mechanism is called cephx; normally you will see these settings in

/etc/ceph/ceph.conf
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
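
To check that a non-admin user can really authenticate, point the ceph command at that user's name and keyring, for example the you.keyring file generated earlier:

$ ceph -n client.you --keyring=you.keyring -s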

Cluster Configuration

To see all the custom configuration, you can use the following:

$ ceph config ls | less
$ ceph config dump
$ ceph config show osd.0

For an individual daemon you can also show everything, including the default values (beware, it is a long list):

$ ceph config show-with-defaults osd.2 | less
$ ceph config get osd.* osd_objectstore
bluestore
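
Changing a single option works the same way; here is a small sketch using osd_max_backfills, which is just an arbitrary option picked for illustration:

$ ceph config set osd.0 osd_max_backfills 2
$ ceph config get osd.0 osd_max_backfills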

At runtime you can also inject configuration into the cluster without making it permanent:

$ ceph tell osd.0 bench
$ ceph tell mds.* injectargs '--mon-pg-warn-max-per-osd 4096'
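
If you want to double-check what a daemon is actually running with, you can ask it over its admin socket on the node that hosts it (the option name is again just an example):

$ ceph daemon osd.0 config get osd_max_backfills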

The Ceph Manager also keeps a record of the configuration change.

$ ceph config log

Health and Status

The following are basically the commands you will use most often to look at the cluster's current health and status.

$ ceph -s
$ ceph health
$ ceph health detail
$ ceph mon stat
$ ceph osd stat
$ ceph pg stat
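
On a healthy cluster, the ceph -s output has roughly this shape (the numbers and names below are made up for illustration):

  cluster:
    id:     xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum mon-0,mon-1,mon-2
    mgr: mon-0(active), standbys: mon-1
    osd: 6 osds: 6 up, 6 in

  data:
    pools:   2 pools, 128 pgs
    objects: 1.2 k objects, 4.5 GiB
    usage:   15 GiB used, 585 GiB / 600 GiB avail
    pgs:     128 active+clean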

I no longer use this command because the dashboard covers monitoring, but you can still use it if you have no access to the web UI.

$ ceph -w

Erasure Coding Profile

I talked about the erasure coding ruleset before, using getcrushmap and crushtool to edit and update it accordingly. For EC profiles, SES provides a default setup, and you can list all the existing profiles:

$ ceph osd erasure-code-profile ls
default
$ ceph osd erasure-code-profile get default
k=2
m=1
plugin=jerasure
technique=reed_sol_van

When creating an EC profile you can also choose between different plugins; by default SES uses jerasure. The available plugins and example commands are listed below, followed by a short sketch of creating a pool from one of these profiles:

  • jerasure — default
$ ceph osd erasure-code-profile set JERk6m2 \
    k=6 m=2 plugin=jerasure \
    crush-device-class=hdd crush-failure-domain=osd crush-root=default
  • ISA — Intel ISA-L optimized library (Intel CPUs only)
$ ceph osd erasure-code-profile set ISAk6m2 \
    k=6 m=2 plugin=isa \
    crush-device-class=hdd crush-failure-domain=osd crush-root=default
  • LRC — Locally Repairable Code using layers over existing plugins
$ ceph osd erasure-code-profile set LRCk4m2l3 \
    k=4 m=2 l=3 plugin=lrc \
    crush-locality=rack \
    crush-device-class=hdd crush-failure-domain=host crush-root=default
  • SHEC — Shingled EC using extra storage for recovery efficiency
$ ceph osd erasure-code-profile set SHECk6m2c2 \
    k=6 m=2 c=2 plugin=shec \
    crush-device-class=hdd crush-failure-domain=osd crush-root=default
  • CLAY — Coupled Layer code, reduces network traffic during recovery
$ ceph osd erasure-code-profile set CLAYk6m2d7 \
    k=6 m=2 d=7 plugin=clay \
    crush-device-class=hdd crush-failure-domain=osd crush-root=default
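
Once a profile exists, you use it when creating an erasure-coded pool; for example, with the JERk6m2 profile from above (the pool name and PG count are arbitrary):

$ ceph osd pool create ecpool 64 64 erasure JERk6m2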

Finally, if you want to remove a profile:

$ ceph osd erasure-code-profile rm <profile-name>

CRUSH Weight and Reweight

$ ceph osd df tree
$ ceph osd crush reweight <osd-name> <weight>
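
For example, to change the CRUSH weight of one OSD (osd.3 and the value here are placeholders; the CRUSH weight normally matches the device size in TiB):

$ ceph osd crush reweight osd.3 1.0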

Pools

Once pools are created, we need to assign them to an application. The predefined applications are RBD, RGW, and CephFS, and you can apply a subtype like CephFS:data and CephFS:metadata. Then we can start putting data into them and monitor their performance and growth. All the pool attributes work much like the configuration flags in ceph config.

$ ceph osd pool application enable <pool-name> <app-name>
$ ceph osd lspools
$ ceph osd pool stats <pool-name>
$ ceph osd pool get <pool-name> <key>
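
For example, for a hypothetical pool named mypool that will be used by RBD:

$ ceph osd pool application enable mypool rbd
$ ceph osd pool stats mypool
$ ceph osd pool get mypool all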

Data usage

$ ceph df
$ ceph df detail

The default pool quota is whatever your ruleset and profile define, or the hardware limit. However, you can also set a maximum number of objects or a maximum volume size in bytes, and you can remove a quota by setting it back to 0. For RBD you normally predefine the actual block size anyway.

$ ceph osd pool get-quota <pool-name>
$ ceph osd pool set-quota <pool-name> max_objects <count>
$ ceph osd pool set-quota <pool-name> max_bytes <bytes>
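
For example, to cap the hypothetical mypool at 10000 objects or roughly 100 GiB, and then remove the byte quota again:

$ ceph osd pool set-quota mypool max_objects 10000
$ ceph osd pool set-quota mypool max_bytes 107374182400
$ ceph osd pool set-quota mypool max_bytes 0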

With the copy-on-write (COW) feature, you can easily create a snapshot and then clone it cheaply, which can be very useful for RBD when you use an image as the base for VMs, etc. But beware: pool snapshots and RBD snapshots cannot coexist, and you can only pick one, so for an RBD pool you most likely want to use RBD snapshots only.

$ ceph osd pool mksnap <pool-name> <snap-name>
$ ceph osd pool rmsnap <pool-name> <snap-name>
$ rados -p <pool-name> lssnap
$ rados -p <pool-name> rollback <object-name> <snap-name>
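
For an RBD pool you would therefore usually snapshot at the image level instead; for example, for a hypothetical image myimage in mypool:

$ rbd snap create mypool/myimage@base
$ rbd snap protect mypool/myimage@base
$ rbd clone mypool/myimage@base mypool/myclone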

Scrub and Deep-Scrub

It is like fsck for disk and data consistency. A basic scrub runs daily to make sure the data is OK, but it only checks object sizes; a deep scrub runs weekly, reads all the data, and verifies the checksums. Scrubbing is also related to PGs, since all the data in an OSD lives inside them, so you can manually scrub an individual OSD or an individual PG by number.

$ ceph osd scrub osd.0
$ ceph pg scrub 2.2
$ ceph config ls | grep osd_scrub
$ ceph config ls | grep osd_deep_scrub
$ ceph config set osd osd_scrub_begin_week_day 6   # start Saturday
$ ceph config set osd osd_scrub_end_week_day 7     # end Sunday
$ ceph config set osd osd_scrub_begin_hour 22
$ ceph config set osd osd_scrub_end_hour 6

If scrubbing finds errors or inconsistent data and you want to repair it:

$ ceph osd repair <osd-id>
$ ceph pg repair <pg_number>

DeepSea and ceph.conf

Since SES 6 uses DeepSea for cluster deployment and some basic management, /etc/ceph/ceph.conf is also controlled by DeepSea across all the MONs, MGRs, OSDs, and gateways. But you first need to update the following file:

/srv/salt/ceph/configuration/files/ceph.conf.d/global.conf
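
This file just holds plain ceph.conf-style entries that DeepSea should merge into the [global] section of the generated ceph.conf; for example (the option below is only an illustration):

mon_max_pg_per_osd = 300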

Then we can run the following DeepSea (Salt) commands:

$ salt 'admin*' state.apply ceph.configuration.create
$ salt '*' state.apply ceph.configuration

After the configuration change, you can pull it into the cluster configuration database with:

$ ceph config assimilate-conf -i /etc/ceph/ceph.conf

Manager Modules

The Ceph Manager has a list of modules; you can see which ones are enabled or disabled:

$ ceph mgr module ls | less
$ ceph mgr module enable <module-name>
$ ceph mgr module disable <module-name>
$ ceph mgr services
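
For example, enabling the dashboard module and then checking which endpoint it exposes (the URL below is illustrative):

$ ceph mgr module enable dashboard
$ ceph mgr services
{
    "dashboard": "https://mon-0:8443/"
}
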
License: CC BY-NC-ND 2.0
