Neko Docs

Ceph

Troubleshooting

  • Unable to start Ceph:

    Failed to start ceph-osd@0.service - Ceph object storage daemon osd.0.

    ceph-osd@0.service: Start request repeated too quickly.

    Solution: reset the failed unit's start-limit counter, then restart Ceph:

    systemctl reset-failed ceph-osd@0.service
    systemctl restart ceph.target

NVMe tuning parameters

/etc/ceph/ceph.conf:

[global]
        ms_dispatch_throttle_bytes=2097152000
[osd]
        osd_max_write_size=1024
        objecter_inflight_ops=40960
        bluestore_rocksdb_options = compression=kNoCompression,max_write_buffer_number=128,min_write_buffer_number_to_merge=16,compaction_style=kCompactionStyleLevel,write_buffer_size=8388608,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
        osd_enable_op_tracker=false
        osd_memory_target_autotune=false
        osd_memory_target=16G
        osd_map_cache_size=1024
 
[client]
        rbd_cache=true
        rbd_cache_policy=writearound
        rbd_cache_max_dirty_age=5
        rbd_cache_writethrough_until_flush=false

Apply the configuration:

systemctl restart ceph.target

Performance testing:

  • Sequential I/O (4M blocks):
    • Write: rados -p <pool name> bench 60 write -b 4M -t 64 --no-cleanup -f plain --run-name bench_4m
    • Read: rados -p <pool name> bench 60 seq -t 64 --no-cleanup -f plain --run-name bench_4m
  • Random I/O (4k blocks):
    • Write: rados -p <pool name> bench 60 write -b 4k -t 64 --no-cleanup -f plain --run-name bench_4k
    • Read: rados -p <pool name> bench 60 seq -t 64 --no-cleanup -f plain --run-name bench_4k
  • Cleanup: rados -p <pool name> cleanup
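The benchmark cycle above can be scripted. The sketch below is a dry run that only assembles and prints the commands for a hypothetical pool named testpool (an assumed name; substitute your own pool). To actually run it against a live cluster, replace the echo with eval.

```shell
# Dry-run sketch of the rados bench cycle above.
# "testpool" is a hypothetical pool name; change it before running for real.
pool=testpool

for cmd in \
    "rados -p $pool bench 60 write -b 4M -t 64 --no-cleanup -f plain --run-name bench_4m" \
    "rados -p $pool bench 60 seq -t 64 -f plain --run-name bench_4m" \
    "rados -p $pool bench 60 write -b 4k -t 64 --no-cleanup -f plain --run-name bench_4k" \
    "rados -p $pool bench 60 seq -t 64 -f plain --run-name bench_4k" \
    "rados -p $pool cleanup"
do
    # Print each command; swap this echo for: eval "$cmd"
    echo "$cmd"
done
```

Keeping --no-cleanup on the writes is what lets the matching seq reads replay the same objects via --run-name; the final cleanup removes them.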

PG count formula: (number of OSDs × 100–200) / replica count, rounded to the nearest power of two.
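The formula above can be worked through in a few lines of shell. This sketch uses hypothetical values (12 OSDs, 3 replicas, a target of 100 PGs per OSD, all assumptions for illustration) and rounds the result to the nearest power of two:

```shell
# Hypothetical cluster: 12 OSDs, 3 replicas, 100 PGs per OSD as the target.
osds=12
target_per_osd=100
replicas=3

raw=$(( osds * target_per_osd / replicas ))    # 12 * 100 / 3 = 400

# Find the largest power of two <= raw ...
pg_num=1
while [ $(( pg_num * 2 )) -le "$raw" ]; do pg_num=$(( pg_num * 2 )); done

# ... then pick whichever of pg_num and pg_num*2 is closer to raw.
if [ $(( raw - pg_num )) -gt $(( pg_num * 2 - raw )) ]; then
    pg_num=$(( pg_num * 2 ))
fi

echo "$pg_num"    # → 512
```

For the example values the raw result is 400, and 512 is the nearest power of two, so pg_num=512.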

Enabling krbd together with writeback caching can greatly improve speed.

Hybrid disks
