The configuration option osd pool erasure code stripe width has been replaced by osd pool erasure code stripe unit, and given the ability to be overridden by the erasure code profile setting stripe_unit. For more details see /rados/operations/erasure-code#erasure-code-profiles.
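
For example, the stripe unit can be set when an erasure code profile is created; the profile name and k/m values below are only placeholders:

    ceph osd erasure-code-profile set myprofile k=2 m=1 stripe_unit=4K
    ceph osd erasure-code-profile get myprofile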

rbd and cephfs can use erasure coding with bluestore. This may be enabled by setting allow_ec_overwrites to true for a pool. Since this relies on bluestore’s checksumming to do deep scrubbing, enabling this on a pool stored on filestore is not allowed.
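
A minimal sketch of enabling this, assuming an existing erasure coded pool named ecpool backed by bluestore OSDs:

    ceph osd pool set ecpool allow_ec_overwrites true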

The rados df JSON output now prints numeric values as numbers instead of strings.
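
Scripts that parse this output, e.g. from:

    rados df --format json

may need to be adjusted if they assumed the counters were quoted strings.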

The mon_osd_max_op_age option has been renamed to mon_osd_warn_op_age (default: 32 seconds), to indicate we generate a warning at this age. There is also a new mon_osd_err_op_age_ratio that is expressed as a multiple of mon_osd_warn_op_age (default: 128, for roughly 60 minutes) to control when an error is generated.
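
A ceph.conf sketch restating the defaults described above (warn at 32 seconds, error at 128 times that age):

    [mon]
        mon osd warn op age = 32
        mon osd err op age ratio = 128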

The default maximum size for a single RADOS object has been reduced from 100GB to 128MB. The 100GB limit was impractical in practice, while the 128MB limit is a bit high but not unreasonable. If you have an application written directly to librados that is using objects larger than 128MB you may need to adjust osd_max_object_size.
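
If a larger limit is required, it can be raised in ceph.conf; the 1 GB value (in bytes) below is purely illustrative:

    [osd]
        osd max object size = 1073741824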

The semantics of the rados ls and librados object listing operations have always been a bit confusing in that “whiteout” objects (which logically don’t exist and will return ENOENT if you try to access them) are included in the results. Previously whiteouts only occurred in cache tier pools. In luminous, logically deleted but snapshotted objects now result in a whiteout object, and as a result they will appear in rados ls results, even though trying to read such an object will result in ENOENT. The rados listsnaps operation can be used in such a case to enumerate which snapshots are present. This may seem a bit strange, but is less strange than having a deleted-but-snapshotted object not appear at all and be completely hidden from librados’s ability to enumerate objects. Future versions of Ceph will likely include an alternative object enumeration interface that makes it more natural and efficient to enumerate all objects along with their snapshot and clone metadata.
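
As a sketch, assuming a pool named mypool and an object named myobject that appears in the listing but returns ENOENT on read, the remaining snapshots can be enumerated with:

    rados -p mypool ls
    rados -p mypool listsnaps myobject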

The deprecated crush_ruleset property has finally been removed; please use crush_rule instead for the osd pool get ... and osd pool set ... commands.
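
For example (the pool and rule names are placeholders):

    ceph osd pool get mypool crush_rule
    ceph osd pool set mypool crush_rule replicated_rule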

The osd pool default crush replicated ruleset option has been removed and replaced by the osd pool default crush rule option. By default it is -1, which means the mon will pick the first replicated rule it finds in the CRUSH map for replicated pools. Erasure coded pools have rules that are automatically created for them if they are not specified at pool creation time.
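
A ceph.conf sketch pinning the default to a specific rule (the rule id 0 is only an example):

    [global]
        osd pool default crush rule = 0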

We no longer test the FileStore ceph-osd backend in combination with btrfs. We recommend against using btrfs. If you are using btrfs-based OSDs and want to upgrade to luminous you will need to add the following to your ceph.conf:

    enable experimental unrecoverable data corrupting features = btrfs

The code is mature and unlikely to change, but we are only continuing to test the Jewel stable branch against btrfs. We recommend moving these OSDs to FileStore with XFS or BlueStore.

The ruleset-* properties for the erasure code profiles have been renamed to crush-*, both to move away from the obsolete ‘ruleset’ term and to be clearer about their purpose. There is also a new optional crush-device-class property to specify a CRUSH device class to use for the erasure coded pool. Existing erasure code profiles will be converted automatically when the upgrade completes (when the ceph osd require-osd-release luminous command is run) but any provisioning tools that create erasure coded pools may need to be updated.
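
As an illustration, a profile using the renamed properties might be created like this (the profile name, k/m values, failure domain, and device class are placeholders):

    ceph osd erasure-code-profile set myprofile \
        k=4 m=2 \
        crush-failure-domain=host \
        crush-device-class=ssd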

The structure of the XML output for osd crush tree has changed slightly to better match the osd tree output. The top level structure is now nodes instead of crush_map_roots.
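
Scripts that consume this output, for example via:

    ceph osd crush tree -f xml

should now look for a top-level nodes element rather than crush_map_roots.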

When a network is assigned to the public network and not to the cluster network, the public network specification will be used for the cluster network as well. In older versions this would lead to cluster services being bound to 0.0.0.0:<port>, thus making the cluster services even more publicly available than the public services. When only a cluster network is specified, the public services will still bind to 0.0.0.0.
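
A ceph.conf sketch showing both options set explicitly (the subnets are placeholders):

    [global]
        public network  = 192.168.1.0/24
        cluster network = 192.168.2.0/24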

In previous versions, if a client sent an op to the wrong OSD, the OSD would reply with ENXIO. The rationale here is that the client or OSD is clearly buggy and we want to surface the error as clearly as possible. We now only send the ENXIO reply if the osd_enxio_on_misdirected_op option is enabled (it’s off by default). This means that a VM using librbd that previously would have gotten an EIO and gone read-only will now see a blocked/hung IO instead.
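
If the old behavior is preferred, the option can be re-enabled, for example via ceph.conf:

    [osd]
        osd enxio on misdirected op = true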

The “journaler allow split entries” config setting has been removed.