core:

CephFS:

Upgrading an MDS cluster to 12.2.3+ will result in all active MDS daemons
exiting due to feature incompatibilities once an upgraded MDS comes online
(even as standby). Operators may ignore the error messages and continue
upgrading/restarting, or follow this upgrade sequence:

After upgrading the monitors to Mimic, reduce the number of ranks to 1
(ceph fs set <fs_name> max_mds 1), wait for all other MDS daemons to
deactivate, leaving the one active MDS, stop all standbys, upgrade the
single active MDS, then upgrade/start the standbys. Finally, restore the
previous max_mds.

!! NOTE: see the release notes on snapshots in CephFS if you have ever
enabled snapshots on your file system.

See also: https://tracker.ceph.com/issues/23172
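
As a concrete sketch of that sequence (the file system name cephfs and the
systemd unit names are placeholders; adapt them to your deployment):

  ceph fs set cephfs max_mds 1               # reduce to a single rank
  ceph status                                # repeat until only one MDS is active
  systemctl stop ceph-mds@<standby-id>       # stop every standby
  # upgrade the Ceph packages on the active MDS host, then restart it
  systemctl restart ceph-mds@<active-id>
  # upgrade the standby hosts, start their MDS daemons, then restore max_mds
  ceph fs set cephfs max_mds <previous_value>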

Several ceph mds ... commands have been obsoleted and replaced by
equivalent ceph fs ... commands:

  - mds dump             -> fs dump
  - mds getmap           -> fs dump
  - mds stop             -> mds deactivate
  - mds set_max_mds      -> fs set max_mds
  - mds set              -> fs set
  - mds cluster_down     -> fs set cluster_down true
  - mds cluster_up       -> fs set cluster_down false
  - mds add_data_pool    -> fs add_data_pool
  - mds remove_data_pool -> fs rm_data_pool
  - mds rm_data_pool     -> fs rm_data_pool
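
For example, the MDS map that ceph mds dump used to print is now obtained
with:

  ceph fs dump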

New CephFS file system attributes session_timeout and session_autoclose
are configurable via ceph fs set. The MDS config options
mds_session_timeout, mds_session_autoclose, and mds_max_file_size are now
obsolete.
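
For example (illustrative values, in seconds; these happen to match the
former config defaults):

  ceph fs set <fs_name> session_timeout 60
  ceph fs set <fs_name> session_autoclose 300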

As the multiple MDS feature is now standard, it is enabled by default.
The ceph fs set <fs_name> allow_multimds setting is now deprecated and
will be removed in a future release.

As the directory fragmentation feature is now standard, it is enabled by
default. The ceph fs set <fs_name> allow_dirfrags setting is now
deprecated and will be removed in a future release.

MDS daemons now activate and deactivate based on the value of max_mds.
Accordingly, ceph mds deactivate has been deprecated as it is now
redundant.
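
For example, growing and shrinking a hypothetical file system named
cephfs is now a matter of changing max_mds alone:

  ceph fs set cephfs max_mds 2   # a standby is promoted to rank 1
  ceph fs set cephfs max_mds 1   # rank 1 deactivates automatically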

Taking a CephFS cluster down is now done by setting the down flag, which
deactivates all MDS daemons. For example: ceph fs set cephfs down true.

Preventing standbys from joining as new actives on a file system
(formerly the role of the now-deprecated cluster_down flag) is now
accomplished by setting the joinable flag. This is useful mostly for
testing, so that a file system may be quickly brought down and deleted.
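
For example (a sketch, assuming a file system named cephfs):

  ceph fs set cephfs joinable false   # standbys may no longer become actives
  ceph fs set cephfs joinable true    # standbys may join again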

Each MDS rank now maintains a table that tracks open files and their
ancestor directories. A recovering MDS can quickly get open files' paths,
significantly reducing the time needed to load inodes for open files. The
MDS creates the table automatically if it does not exist.

CephFS snapshot is now stable and enabled by default on new file systems.
To enable snapshots on an existing file system, use the command:

  ceph fs set <fs_name> allow_new_snaps true

The on-disk format of snapshot metadata has changed. The old format
metadata cannot be properly handled in a multiple active MDS
configuration. To guarantee that all snapshot metadata on existing file
systems gets updated, strictly follow the MDS cluster upgrade sequence.
See http://docs.ceph.com/docs/mimic/cephfs/upgrading/

For file systems that have ever enabled snapshots, the multiple-active
MDS feature is disabled by the Mimic monitor daemon. This will cause the
"restore previous max_mds" step in the above upgrade sequence to fail. To
re-enable the feature, either delete all old snapshots or scrub the whole
file system:

  ceph daemon <mds of rank 0> scrub_path / force recursive repair
  ceph daemon <mds of rank 0> scrub_path '~mdsdir' force recursive repair

Support for quotas in the CephFS Linux kernel client has been added in
Mimic; it requires kernel v4.17 or later. See
http://docs.ceph.com/docs/mimic/cephfs/quota/
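
Quotas are managed through virtual extended attributes on directories. A
minimal sketch, assuming a CephFS mount at /mnt/cephfs (the path and
values are illustrative):

  setfattr -n ceph.quota.max_bytes -v 100000000 /mnt/cephfs/somedir   # limit to 100 MB
  setfattr -n ceph.quota.max_files -v 10000 /mnt/cephfs/somedir       # limit to 10,000 files
  getfattr -n ceph.quota.max_bytes /mnt/cephfs/somedir                # inspect the quota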

Many fixes have been made to the MDS metadata balancer, which distributes
load across MDS daemons. It is expected that the automatic balancing
should work well for most use-cases. In Luminous, subtree pinning was
advised as a manual workaround for poor balancer behavior. This may no
longer be necessary, so it is recommended to try experimentally disabling
pinning as a form of load balancing to see if the built-in balancer works
adequately for you. Please report any poor behavior post-upgrade.
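
For example, a subtree pin set in Luminous can be handed back to the
balancer by writing -1 to the ceph.dir.pin virtual extended attribute (a
sketch, assuming a CephFS mount at /mnt/cephfs):

  setfattr -n ceph.dir.pin -v -1 /mnt/cephfs/pinned_dir   # unpin; let the balancer manage it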

NFS-Ganesha is an NFS userspace server that can export shares from multiple

file systems, including CephFS. Support for this CephFS client has improved

significantly in Mimic. In particular, delegations are now supported through

the libcephfs library so that Ganesha may issue delegations to its NFS clients

allowing for safe write buffering and coherent read caching. Documentation

is also now available: http://docs.ceph.com/docs/mimic/cephfs/nfs/

MDS uptime is now available in the output of the MDS admin socket status command.
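
For example (the daemon name mds.a is a placeholder):

  ceph daemon mds.a status

The output now carries an uptime field alongside the existing state
information.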