Red Hat Ceph Storage 4.0 Release Notes

Release notes for Red Hat Ceph Storage 4.0

Last Updated: 2020-05-20

Legal Notice

Copyright © 2020 Red Hat, Inc.

The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license (CC-BY-SA). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.

Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.

Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.

Linux® is the registered trademark of Linus Torvalds in the United States and other countries.

Java® is a registered trademark of Oracle and/or its affiliates.

XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.

MySQL® is a registered trademark of MySQL AB in the United States, the European Union, and other countries.

Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.

The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries, and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed, or sponsored by the OpenStack Foundation or the OpenStack community.

All other trademarks are the property of their respective owners

Abstract

The Release Notes document describes the major features and enhancements implemented in Red Hat Ceph Storage in a particular release. The document also includes known issues and bug fixes.

Table of Contents

CHAPTER 1. INTRODUCTION

CHAPTER 2. ACKNOWLEDGMENTS

CHAPTER 3. NEW FEATURES
3.1. THE CEPH-ANSIBLE UTILITY
3.2. CEPH MANAGEMENT DASHBOARD
3.3. CEPH FILE SYSTEM
3.4. CEPH MEDIC
3.5. ISCSI GATEWAY
3.6. OBJECT GATEWAY
3.7. PACKAGES
3.8. RADOS
3.9. BLOCK DEVICES (RBD)
3.10. RBD MIRRORING

CHAPTER 4. BUG FIXES
4.1. THE CEPH-ANSIBLE UTILITY
4.2. OBJECT GATEWAY
4.3. RADOS

CHAPTER 5. TECHNOLOGY PREVIEWS
5.1. CEPH FILE SYSTEM
5.2. BLOCK DEVICES (RBD)
5.3. OBJECT GATEWAY

CHAPTER 6. KNOWN ISSUES
6.1. THE CEPH-ANSIBLE UTILITY
6.2. CEPH MANAGEMENT DASHBOARD
6.3. PACKAGES
6.4. RED HAT ENTERPRISE LINUX

CHAPTER 7. DEPRECATED FUNCTIONALITY
7.1. THE CEPH-ANSIBLE UTILITY
7.2. THE CEPH-DISK UTILITY
7.3. RADOS

CHAPTER 8. SOURCES

CHAPTER 1. INTRODUCTION

Red Hat Ceph Storage is a massively scalable, open, software-defined storage platform that combines the most stable version of the Ceph storage system with a Ceph management platform, deployment utilities, and support services.

The Red Hat Ceph Storage documentation is available at https://access.redhat.com/documentation/en/red-hat-ceph-storage/.

CHAPTER 2. ACKNOWLEDGMENTS

Red Hat Ceph Storage 4.0 contains many contributions from the Red Hat Ceph Storage team. Additionally, the Ceph project is seeing amazing growth in the quality and quantity of contributions from individuals and organizations in the Ceph community. We would like to thank all members of the Red Hat Ceph Storage team, all of the individual contributors in the Ceph community, and additionally (but not limited to) the contributions from organizations such as:

Intel

Fujitsu

UnitedStack

Yahoo

UbuntuKylin

Mellanox

CERN

Deutsche Telekom

Mirantis

SanDisk

SUSE

CHAPTER 3. NEW FEATURES

This section lists all major updates, enhancements, and new features introduced in this release of Red Hat Ceph Storage.

The main features added by this release are:

General availability of the BlueStore OSD back end

Ceph Management Dashboard

Ceph Object Gateway Beast Asio web server

Erasure coding for Ceph Block Device

Auto-scaling placement groups

Web-based installation interface

On-wire encryption

RBD performance monitoring and metrics gathering tools

Red Hat Enterprise Linux in FIPS mode

3.1. THE CEPH-ANSIBLE UTILITY

Ceph OSDs created with ceph-disk are migrated to ceph-volume during upgrade

When upgrading to Red Hat Ceph Storage 4, all running Ceph OSDs previously created by the ceph-disk utility will be migrated to the ceph-volume utility, because ceph-disk has been deprecated in this release.

For bare-metal and container deployments of Red Hat Ceph Storage, the ceph-volume utility does a simple scan and takes over the existing Ceph OSDs deployed by the ceph-disk utility. Also, do not use these migrated devices in configurations for subsequent deployments. Note that you cannot create any new Ceph OSDs during the upgrade process.

After the upgrade, all Ceph OSDs created by ceph-disk will start and operate like any Ceph OSDs created by ceph-volume.

Ansible playbooks for scaling of all Ceph services

Previously, ceph-ansible playbooks offered limited scale-up and scale-down features, and only for core Ceph components such as Monitors and OSDs. With this update, additional Ansible playbooks allow for scaling of all Ceph services.

Ceph iSCSI packages merged into single package

The ceph-iscsi-cli and ceph-iscsi-config packages have been merged into one package named ceph-iscsi.

The nfs-ganesha service is now supported as a standalone deployment

Red Hat OpenStack Platform director (OSPd) requires the ceph-ansible utility to be able to deploy the nfs-ganesha service and configure it so that it points to an external, unmanaged, pre-existing Ceph cluster. As of Red Hat Ceph Storage 4, ceph-ansible allows the deployment of an internal nfs-ganesha service with an external Ceph cluster.

Ceph container can now write logs to a respective daemon file

The previous way of logging for containerized Ceph environments did not allow limiting the journalctl output when looking at log data by using the sosreport collection. With this release, logging can be enabled or disabled for a particular Ceph daemon with the following command:

ceph config set daemon.id log_to_file true

Where daemon is the type of the daemon and id is its ID. For example, to enable logging for the Monitor daemon with ID mon0:

ceph config set mon.mon0 log_to_file true

This new feature makes debugging easier.

Ability to configure Ceph Object Gateway to use TLS encryption

This release of Red Hat Ceph Storage provides the ability to configure the Ceph Object Gateway listener with an SSL certificate for TLS encryption, by using the radosgw_frontend_ssl_certificate variable to secure the Transmission Control Protocol (TCP) traffic.
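
As a hedged sketch only, setting the variable in a ceph-ansible group_vars file might look like the following; the certificate path, port, host group, and playbook name are assumptions, not values from this release note:

# Assumed entries in group_vars/all.yml (the PEM bundle is expected to contain
# the private key and certificate; path is hypothetical):
#   radosgw_frontend_ssl_certificate: /etc/ceph/private/rgw-bundle.pem
#   radosgw_frontend_port: 443
# Then re-run the site playbook against the Ceph Object Gateway hosts:
ansible-playbook site.yml --limit rgws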

Ansible playbook for migrating OSDs from FileStore to BlueStore

A new Ansible playbook has been added to migrate OSDs from FileStore to BlueStore. The object store migration is not done as part of the upgrade process to Red Hat Ceph Storage 4. Do the migration after the upgrade completes. For details, see the How to migrate the object store from FileStore to BlueStore section in the Red Hat Ceph Storage Administration Guide.
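
A hedged sketch of what running the migration might look like; the playbook path is assumed from upstream ceph-ansible and the host name is hypothetical, so confirm both against the Administration Guide before use:

# Run from the ceph-ansible directory after the upgrade has completed,
# limiting the migration to one OSD node at a time.
ansible-playbook infrastructure-playbooks/filestore-to-bluestore.yml --limit osd01.example.com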

3.2. CEPH MANAGEMENT DASHBOARD

Improvements to information for pool usage

With this enhancement, valuable information was added to the pools table. The following columns were added: usage, read bytes, write bytes, read operations, and write operations. Also, the Placement Groups column was renamed to Pg Status.

Red Hat Ceph Storage Dashboard alerts

Red Hat Ceph Storage Dashboard supports alerts based on Ceph metrics and configured thresholds. The Prometheus AlertManager configures, gathers, and triggers the alerts. The alerts are displayed in the Dashboard as pop-up notifications in the upper-right corner. You can view details of recent alerts in Cluster > Alerts. You can configure the alerts only in Prometheus, but you can temporarily mute them from the Dashboard by creating Alert Silences in Cluster > Silences.

Displaying and hiding Ceph components in Dashboard

In the Red Hat Ceph Storage Dashboard, you can display or hide Ceph components such as Ceph iSCSI, RBD mirroring, Ceph Block Devices, Ceph File System, or Ceph Object Gateway. This feature allows you to hide components that are not configured.

Ceph dashboard has been added to the Ceph Ansible playbooks

With this release, the Ceph dashboard installation code was merged into the Ceph Ansible playbooks. Ceph Ansible does a containerized deployment of the Ceph dashboard regardless of the Red Hat Ceph Storage deployment type, bare metal or containers. These four new roles were added: ceph-grafana, ceph-dashboard, ceph-prometheus, and ceph-node-exporter.

Viewing the cluster hierarchy from the Dashboard

Red Hat Ceph Storage Dashboard provides the ability to view the cluster hierarchy. For details, see the Viewing the CRUSH map section in the Dashboard Guide for Red Hat Ceph Storage 4.

3.3. CEPH FILE SYSTEM

ceph -w now shows information about CephFS scrubs

Previously, it was not possible to check the status of ongoing Ceph File System (CephFS) scrubs aside from checking the Metadata Server (MDS) logs. With this update, the ceph -w command shows information about active CephFS scrubs to better understand the status.

ceph-mgr volumes module for managing CephFS exports

This release provides the Ceph Manager (ceph-mgr) volumes module to manage Ceph File System (CephFS) exports. The volumes module implements the following file system export abstractions:

FS volumes, an abstraction for CephFS file systems

FS subvolumes, an abstraction for independent CephFS directory trees

FS subvolume groups, an abstraction for a directory level higher than FS subvolumes, to effect policies, such as file layouts, across a set of subvolumes

In addition, these new commands are now supported (a usage sketch follows this list):

fs subvolume ls for listing subvolumes

fs subvolumegroup ls for listing subvolume groups

fs subvolume snapshot ls for listing subvolume snapshots

fs subvolumegroup snapshot ls for listing subvolume group snapshots

fs subvolume rm for removing subvolumes
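
A brief, hedged example of the volumes module workflow; the volume, group, subvolume, and snapshot names are illustrative, and the create commands are assumed from the upstream ceph-mgr volumes interface:

# Create a subvolume group and a subvolume, then list them and their snapshots.
ceph fs subvolumegroup create cephfs group0
ceph fs subvolume create cephfs subvol0 --group_name group0
ceph fs subvolumegroup ls cephfs
ceph fs subvolume ls cephfs --group_name group0
ceph fs subvolume snapshot create cephfs subvol0 snap0 --group_name group0
ceph fs subvolume snapshot ls cephfs subvol0 --group_name group0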

3.4. CEPH MEDIC

ceph-medic can check the health of Ceph running in containers

With this release, the ceph-medic utility can now check the health of a Red Hat Ceph Storage cluster running within a container.

3.5. ISCSI GATEWAY

Non-administrative users can now be used for the ceph-iscsi service

As of Red Hat Ceph Storage 4, non-administrative Ceph users may be used for the ceph-iscsi service by setting the cluster_client_name in the /etc/ceph/iscsi-gateway.cfg file on all iSCSI Gateways. This allows resources to be restricted based on users.
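
A minimal sketch, assuming a dedicated CephX user; the user name and capabilities shown are illustrative, and the exact capabilities required by ceph-iscsi should be taken from the iSCSI Gateway documentation:

# Create a non-administrative user for the iSCSI gateways (caps are illustrative).
ceph auth get-or-create client.iscsi mon 'profile rbd' osd 'profile rbd pool=rbd'
# Then, in /etc/ceph/iscsi-gateway.cfg on every iSCSI gateway node:
#   cluster_client_name = client.iscsi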

Running Ceph iSCSI Gateways can now be removed

As of Red Hat Ceph Storage 4, running iSCSI Gateways can now be removed from a Ceph iSCSI cluster for maintenance or to reallocate resources. The gateway's iSCSI target and its portals will be stopped, and all iSCSI target objects for that gateway will be removed from the kernel and the gateway configuration. Removing gateways that are down is not yet supported.

3.6. OBJECT GATEWAY

The Beast HTTP front end

In Red Hat Ceph Storage 4, the default HTTP front end for the Ceph Object Gateway is Beast. The Beast front end uses the Boost.Beast library for HTTP parsing and the Boost.Asio library for asynchronous I/O. For details, see the Using the Beast front end section in the Object Gateway Configuration and Administration Guide for Red Hat Ceph Storage 4.

Support for S3 MFA-Delete

With this release, the Ceph Object Gateway supports S3 MFA-Delete, using Time-Based One-Time Password (TOTP) one-time passwords as an authentication factor. This feature adds security against inappropriate data removal. You can configure buckets to require a TOTP one-time token in addition to standard S3 authentication to delete data.
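
As a hedged illustration, associating a TOTP token with a user might look like the following; the user, serial, and seed values are hypothetical, and the bucket itself must still have MFA Delete enabled through the S3 versioning API:

# Create and list a TOTP token for a Ceph Object Gateway user (values are examples).
radosgw-admin mfa create --uid=testuser --totp-serial=MFAtest \
    --totp-seed=23456723122742 --totp-seconds=30 --totp-window=2
radosgw-admin mfa list --uid=testuser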

Addition of storage classes and lifecycle transitions to the Ceph Object Gateway

The Ceph Object Gateway now provides support for storage classes, an S3-compatible representation of the Ceph Object Gateway's underlying placement targets, and for lifecycle transitions, a mechanism to migrate objects between classes. Storage classes and lifecycle transitions provide a higher level of control over data placement for the applications that need it. They have many potential uses, including performance tuning for specific workloads, as well as automatic migration of inactive data to cold storage.

Users can now create new IAM policies and roles using REST APIs

With the release of Red Hat Ceph Storage 4, REST APIs for IAM roles and user policies are now available in the same namespace as the S3 APIs and can be accessed using the same endpoint as the S3 APIs in the Ceph Object Gateway. This allows end users to create new IAM policies and roles using REST APIs.

3.7. PACKAGES

Ability to install a Ceph cluster using a web-based interface

With this release, the Cockpit web-based interface is supported. Cockpit allows you to install a Red Hat Ceph Storage 4 cluster and other components, such as Metadata Servers, Ceph clients, or the Ceph Object Gateway, on bare metal or in containers. For details, see the Installing Red Hat Ceph Storage using the Cockpit Web User Interface chapter in the Red Hat Ceph Storage 4 Installation Guide. Note that minimal experience with Red Hat Ceph Storage is required.

3.8. RADOS

Ceph on-wire encryption

Starting with Red Hat Ceph Storage 4, you can enable encryption for all Ceph traffic over the network with the introduction of the messenger version 2 protocol. For details, see the Ceph on-wire encryption chapter in the Architecture Guide and the Encryption in transit section in the Data Security and Hardening Guide for Red Hat Ceph Storage 4.

OSD BlueStore is now fully supported

BlueStore is a new back end for the OSD daemons that allows for storing objects directly on block devices. Because BlueStore does not need any file system interface, it improves the performance of Ceph storage clusters. To learn more about the BlueStore OSD back end, see the OSD BlueStore chapter in the Administration Guide for Red Hat Ceph Storage 4.

Red Hat Enterprise Linux in FIPS mode

With this release, you can install Red Hat Ceph Storage on Red Hat Enterprise Linux where FIPS mode is enabled.

Changes to the ceph df output and a new ceph osd df command

The output of the ceph df command has been improved. Notably, the RAW USED and %RAW USED values now show the preallocated space for the db and wal BlueStore partitions. The ceph osd df command shows the OSD utilization stats, such as the amount of written data.

Asynchronous recovery for non-acting OSD sets

Previously, recovery with Ceph was a synchronous process that blocked write operations to objects until those objects were recovered. In this release, the recovery process is asynchronous: write operations are not blocked for objects that are only in the non-acting set of OSDs. This new feature requires having more than the minimum number of replicas, so that there are enough OSDs in the non-acting set.

The new configuration option osd_async_recovery_min_cost controls how much asynchronous recovery to do. The default value for this option is 100. A higher value means asynchronous recovery will be less, whereas a lower value means asynchronous recovery will be more.
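
For example, a hedged sketch using the centralized configuration interface described below; the value is illustrative:

# Lower the cost threshold so that more recovery work is done asynchronously.
ceph config set osd osd_async_recovery_min_cost 50
# Verify the value seen by a specific OSD (OSD ID is hypothetical).
ceph config get osd.0 osd_async_recovery_min_cost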

Configuration is now stored in Monitors accessible by using ceph config

In this release, Red Hat Ceph Storage centralizes configuration in the Monitors instead of using the Ceph configuration file (ceph.conf). Previously, changing configuration included manually updating ceph.conf, distributing it to appropriate nodes, and restarting all affected daemons. Now, the Monitors manage a configuration database that has the same semantic structure as ceph.conf. The database is accessible by the ceph config command. Any changes to the configuration are applied to daemons or clients in the system immediately, and restarting them is no longer needed. Use the ceph config -h command for details on the available set of commands. Note that a Ceph configuration file is still required to identify the Monitor nodes.
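
A short, hedged sketch of the new workflow; the option names and values are illustrative:

# Import an existing ceph.conf into the Monitors' configuration database.
ceph config assimilate-conf -i /etc/ceph/ceph.conf
# Set an option for all OSDs; it takes effect without restarting the daemons.
ceph config set osd osd_max_backfills 2
# Inspect the value for one daemon, or dump the whole database.
ceph config get osd.3 osd_max_backfills
ceph config dump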

Placement groups can now be auto-scaled

Red Hat Ceph Storage 4 introduces the ability to auto-scale placement groups (PGs). The number of placement groups (PGs) in a pool plays a significant role in how a cluster peers, distributes data, and rebalances. Auto-scaling the number of PGs can make managing the cluster easier. The new pg-autoscaling command provides recommendations for scaling PGs, or automatically scales PGs based on how the cluster is being used. For more details about auto-scaling PGs, see the Auto-scaling placement groups section in the Storage Strategies Guide for Red Hat Ceph Storage 4.
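
A hedged example of turning on auto-scaling for one pool; the pool name is hypothetical:

# Enable the autoscaler module, switch one pool to automatic mode, and review the status.
ceph mgr module enable pg_autoscaler
ceph osd pool set mypool pg_autoscale_mode on
ceph osd pool autoscale-status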

Introduction of diskprediction module

The Red Hat Ceph Storage diskprediction module gathers metrics to predict disk failures before they happen. The module has two modes, cloud and local. With this release, only the local mode is supported. The local mode does not require any external server for data analysis. It uses an internal predictor module for the disk prediction service and then returns the disk prediction result to the Ceph system.

To enable the diskprediction module:

ceph mgr module enable diskprediction_local

To set the prediction mode:

ceph config set global device_failure_prediction_mode local

To disable the diskprediction module:

ceph config set global device_failure_prediction_mode none

New configurable option mon_memory_target

Red Hat Ceph Storage 4 introduces a new configurable option, mon_memory_target, used to set the target amount of bytes for Monitor memory usage. It specifies the amount of memory to allocate and manage using the priority cache tuner for the associated Monitor daemon caches. The default value of mon_memory_target is set to 2 GiB, and you can change it during runtime with:

ceph config set global mon_memory_target size

Prior to this release, as a cluster scaled, the Monitor-specific RSS usage exceeded the limits that were set using the mon_osd_cache_size option, which led to issues. This enhancement allows for improved management of memory allocated to the Monitor caches and keeps the usage within specified limits.
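
For example, a hedged illustration of raising the target to 4 GiB at runtime (the value is expressed in bytes):

# Set the Monitor memory target to 4 GiB (4294967296 bytes).
ceph config set global mon_memory_target 4294967296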

3.9. BLOCK DEVICES (RBD)

Erasure coding for Ceph Block Device

Erasure coding for Ceph Block Device (RBD) is now fully supported. This feature allows RBD images to store their data in an erasure-coded pool. For details, see the Erasure Coding with Overwrites section in the Storage Strategies guide for Red Hat Ceph Storage 4.

RBD performance monitoring and metrics gathering tools

Red Hat Ceph Storage 4 now incorporates new Ceph Block Device performance monitoring utilities for aggregated RBD image metrics for IOPS, throughput, and latency. Per-image RBD metrics are now available using the Ceph Manager Prometheus module, the Ceph Dashboard, and the rbd CLI using the rbd perf image iostat or rbd perf image iotop commands.
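
Hedged usage examples; the pool name is hypothetical:

# Show per-image I/O statistics for a pool, as an iostat-style report or a top-like view.
rbd perf image iostat mypool
rbd perf image iotop mypool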

Cloned images can be created from non-primary images

Creating cloned child RBD images from mirrored non-primary parent images is now supported. Previously, cloning of mirrored images was only supported for primary images. When cloning golden images for virtual machines, this restriction prevented the creation of new cloned images from the golden non-primary image. This update removes this restriction, and cloned images can be created from non-primary mirrored images.

Segregating RBD images within isolated namespaces within the same pool

RBD images can now be segregated within isolated namespaces within the same pool. When using Ceph Block Devices directly, without a higher-level system such as OpenStack or OpenShift Container Storage, it was not possible to restrict user access to specific RBD images. When combined with CephX capabilities, users can be restricted to specific pool namespaces to restrict access to RBD images.
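
A minimal sketch, assuming a pool named mypool; the namespace, image, and user names are illustrative:

# Create a namespace, create an image inside it, and restrict a CephX user to that namespace.
rbd namespace create --pool mypool --namespace project1
rbd create --pool mypool --namespace project1 --size 10G image1
ceph auth get-or-create client.project1 mon 'profile rbd' \
    osd 'profile rbd pool=mypool namespace=project1'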

Moving RBD images between different pools within the same cluster

This version of Red Hat Ceph Storage adds the ability to move RBD images between different pools within the same cluster. For details, see the Moving images between pools section in the Block Device Guide for Red Hat Ceph Storage 4.
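
A hedged sketch using the rbd migration commands, which is how this capability is exposed upstream; the pool and image names are hypothetical:

# Prepare the migration, copy the data, then commit once clients use the new location.
rbd migration prepare sourcepool/image1 targetpool/image1
rbd migration execute targetpool/image1
rbd migration commit targetpool/image1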

Long-running RBD operations can run in the background

Long-running RBD operations, such as image removal or cloned image flattening, can now be scheduled to run in the background. RBD operations that involve iterating over every backing RADOS object for the image can take a long time, depending on the size of the image. When using the CLI to perform one of these operations, the rbd CLI is blocked until the operation is complete. These operations can now be scheduled to run by the Ceph Manager as a background task by using the ceph rbd task add commands. The progress of these tasks is visible on the Ceph dashboard as well as by using the CLI.
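
Hedged examples of the background task commands; the pool and image names are illustrative:

# Schedule an image removal and a clone flatten as Ceph Manager background tasks,
# then list the pending and in-progress tasks.
ceph rbd task add remove mypool/oldimage
ceph rbd task add flatten mypool/clone1
ceph rbd task list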

3.10. RBD MIRRORING

Support for multiple active instances of RBD mirror daemon in a single storage cluster

Red Hat Ceph Storage 4 now supports deploying multiple active instances of the RBD mirror daemon in a single storage cluster. This enables multiple RBD mirror daemons to perform replication for the RBD images or pools, using an algorithm for chunking the images across the number of active mirroring daemons.

CHAPTER 4. BUG FIXES

This section describes bugs with significant impact on users that were fixed in this release of Red Hat Ceph Storage. In addition, the section includes descriptions of fixed known issues found in previous versions.

4.1. THE CEPH-ANSIBLE UTILITY

The Ansible playbook no longer takes hours to complete the fuser command

Previously, on a system running thousands of processes, the fuser command in the Ansible playbook could take several minutes or hours to complete, because it iterates over all PIDs present in the /proc directory. Because of this, the handler tasks took a long time to complete checking if a Ceph process is already running. With this update, instead of using the fuser command, the Ansible playbook now checks the socket files in the /proc/net/unix directory, and the handler tasks that check the Ceph socket complete almost instantly.

(BZ1717011)

The purge-docker-cluster.yml Ansible playbook no longer fails

Previously, the purge-docker-cluster.yml Ansible playbook could fail when trying to unmap RADOS Block Devices (RBDs), because either the binary was absent or because the Atomic host version provided was too old. With this update, Ansible now uses the sysfs method to unmap devices if there are any, and the purge-docker-cluster.yml playbook no longer fails.

(BZ1766064)

4.2. OBJECT GATEWAY

The Expiration Days S3 Lifecycle parameter can now be set to 0

The Ceph Object Gateway did not accept the value of 0 for the Expiration Days Lifecycle configuration parameter. Consequently, setting the expiration to 0 could not be used to trigger the background delete operation of objects. With this update, Expiration Days can be set to 0 as expected.

(BZ1493476)

Archive zone fetches the current version of the source objects

An object is synced from multiple source zones to the archive zone. This behavior can lead to different versions of the same object on the archive zone. This update eliminates duplicate versions from getting created by ensuring that the archive zone fetches the current version of the source object.

(BZ1760862)

4.3. RADOS

A message to set expected_num_objects is no longer shown when using BlueStore

With this update, a message that recommends setting the expected_num_objects parameter during the creation of BlueStore pools has been removed, because this message does not apply when using the BlueStore OSD back end.

(BZ1650922)

Deprecated JSON fields have been removed

This update removes deprecated fields from the JSON output of the ceph status command.

(BZ1739233)

CHAPTER 5. TECHNOLOGY PREVIEWS

This section provides an overview of Technology Preview features introduced or updated in this release of Red Hat Ceph Storage.

IMPORTANT

Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information on Red Hat Technology Preview features support scope, see https://access.redhat.com/support/offerings/techpreview/.

5.1. CEPH FILE SYSTEM

CephFS snapshots

Ceph File System (CephFS) supports taking snapshots as a Technology Preview. Snapshots create an immutable view of the file system at the point in time they are taken.

5.2. BLOCK DEVICES (RBD)

Mapping RBD images to NBD images

The rbd-nbd utility maps RADOS Block Device (RBD) images to Network Block Devices (NBD) and enables Ceph clients to access volumes and images in Kubernetes environments. To use rbd-nbd, install the rbd-nbd package. For details, see the rbd-nbd(7) manual page.

5.3. OBJECT GATEWAY

Object Gateway archive site

With this release, an archive site is supported as a Technology Preview. The archive site allows you to have a history of versions of S3 objects that can only be eliminated through the gateways associated with the archive zone. Including an archive zone in a multizone configuration allows you to have the flexibility of an S3 object history in only one zone, while saving the space that the replicas of the versioned S3 objects would consume in the rest of the zones.

Tiering within a cluster by disk type

This release adds the ability to tier within a cluster by disk type as a Technology Preview, by using the capabilities for mapping, for example, pools to placement targets and storage classes. Using lifecycle transition rules, it is possible to cause objects to migrate between storage pools based on policy.

S3 bucket notifications

S3 bucket notifications are now supported as a Technology Preview. When certain events are triggered on an S3 bucket, the notifications can be sent from the Ceph Object Gateway to HTTP, Advanced Message Queuing Protocol (AMQP) 0.9.1, and Kafka endpoints. Additionally, the notifications can be stored in a "PubSub" zone instead of, or in addition to, sending them to the endpoints. "PubSub" is a publish-subscribe model that enables recipients to pull notifications from Ceph.

To use the S3 notifications, install the librabbitmq and librdkafka packages.

CHAPTER 6. KNOWN ISSUES

This section documents known issues found in this release of Red Hat Ceph Storage.

6.1. THE CEPH-ANSIBLE UTILITY

Red Hat Ceph Storage installation on Red Hat OpenStack Platform fails

The ceph-ansible utility becomes unresponsive when attempting to install Red Hat Ceph Storage together with Red Hat OpenStack Platform 16, and it returns an error similar to the following one:

Error: unable to exec into ceph-mon-dcn1-computehci1-2: no container with name or ID ceph-mon-dcn1-computehci1-2 found: no such container

To work around this issue, update the following part of the handler_osds.yml file located in the ceph-ansible/roles/ceph-handler/tasks/ directory:

- name: unset noup flag
  command: "{{ container_exec_cmd | default('') }} ceph --cluster {{ cluster }} osd unset noup"
  delegate_to: "{{ groups[mon_group_name][0] }}"
  changed_when: False

To

- name: unset noup flag
  command: "{{ hostvars[groups[mon_group_name][0]]['container_exec_cmd'] | default('') }} ceph --cluster {{ cluster }} osd unset noup"
  delegate_to: "{{ groups[mon_group_name][0] }}"
  changed_when: False

Then start the installation process again.

(BZ1792320)

Ansible does not unset the norebalance flag after it completes

The rolling-update.yml Ansible playbook does not unset the norebalance flag after it completes. To work around this issue, unset the flag manually.
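
For example, a one-line sketch of the manual workaround:

ceph osd unset norebalance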

(BZ1793564)

Ansible fails to upgrade a multisite Ceph Object Gateway when the Dashboard is enabled

When the Red Hat Ceph Storage Dashboard is enabled, an attempt to use Ansible to upgrade to a further version of Red Hat Ceph Storage fails when trying to upgrade the secondary Ceph Object Gateway site in a multisite setup. This bug does not occur on the primary site or if the Dashboard is not enabled.

(BZ1794351)

6.2. CEPH MANAGEMENT DASHBOARD

The Dashboard does not show correct values for certain configuration options

Both the Red Hat Ceph Storage Dashboard and the underlying ceph config show command do not return the current values for certain configuration options, such as fsid. This is probably because certain core options that are not meant for further modification after deploying a cluster are not updated, and a default value is used. As a result, the Dashboard does not show correct values for certain configuration options.

(BZ1765493 BZ1772310)

NFS Ganesha in Dashboard

The Red Hat Ceph Storage Dashboard currently does not support managing NFS Ganesha.

(BZ1772316)

Dashboard does not support email verification

The Red Hat Ceph Storage Dashboard does not support any email verification when changing a user's password. This behavior is intentional, because the Dashboard supports Single Sign-On (SSO), and this feature can be delegated to the SSO provider.

(BZ1778608)

The OSD histogram graph for read and write operations is not clear

The Red Hat Ceph Storage Dashboard does not display any numbers or description in the OSD histogram graph for read and write operations, and therefore the graph is not clear.

(BZ1779170)

Dashboard returns an error when a ceph-mgr module is enabled from the ceph CLI

When enabling a Ceph Manager (ceph-mgr) module, such as telemetry, from the ceph CLI, the Red Hat Ceph Storage Dashboard displays the following error message:

0 - Unknown Error

In addition, the Dashboard does not mark the module as enabled until the Refresh button is clicked.

(BZ1785077)

The Dashboard allows modifying the LUN ID and WWN which can lead to data corruption

The Red Hat Ceph Storage Dashboard allows you to modify the LUN ID and its World Wide Name (WWN), which is not required after creating the LUN. Moreover, editing these parameters can be dangerous for certain initiators that do not fully support this feature by default. Consequently, editing these parameters after creating them can lead to data corruption. To avoid this, do not modify the LUN ID and WWN in the Dashboard.

(BZ1786455)

The Dashboard does not provide correct Ceph iSCSI error messages

If Ceph iSCSI returns an error, for example the HTTP 400 code, when trying to delete an iSCSI target while a user is logged in, the Red Hat Ceph Storage Dashboard does not forward that error code and message to the Dashboard user using the pop-up notifications, but displays a generic 500 Internal Server Error. Consequently, the message that the Dashboard provides is not informative and is even misleading: an expected behavior (users cannot delete a busy resource) is perceived as an operational failure (internal server error). To work around this issue, see the Dashboard logs.

(BZ1786457)

CHAPTER 6 KNOWN ISSUES

17

The Dashboard requires iptables rules to be disabled

The Red Hat Ceph Storage Dashboard is unable to perform any iSCSI operations, such as creating a gateway, unless all iptables rules are manually disabled on the Ceph iSCSI node. To do so, use the following command as root or the sudo user:

iptables -F

Note that after a reboot, the rules are enabled again. Either disable them again, or delete them permanently.

(BZ1792818)

6.3. PACKAGES

Current version of Grafana causes certain bugs in the Dashboard

Red Hat Ceph Storage 4 uses Grafana version 5.2.4. This version causes the following bugs in the Red Hat Ceph Storage Dashboard:

When navigating to Pools > Overall Performance, Grafana returns the following error:

TypeError: l.c[t.type] is undefined true

When viewing a pool's performance details (Pools > select a pool from the list > Performance Details), the Grafana bar is displayed along with other graphs and values, but it should not be there.

These bugs will be fixed after rebasing to a newer Grafana version in a future release of Red Hat Ceph Storage.

(BZ1786107 BZ1762197 BZ1765536)

6.4. RED HAT ENTERPRISE LINUX

Ansible cannot start NFS Ganesha if SELinux is in enforcing mode

When using SELinux in enforcing mode on Red Hat Enterprise Linux 8.1, the ceph-ansible utility fails to start the NFS Ganesha service, because the SELinux policy currently does not allow creating a directory required for NFS Ganesha.

(BZ1794027 BZ1796160)

CHAPTER 7. DEPRECATED FUNCTIONALITY

This section provides an overview of functionality that has been deprecated in all minor releases up to this release of Red Hat Ceph Storage.

Ubuntu is no longer supported

Installing a Red Hat Ceph Storage 4 cluster on Ubuntu is no longer supported. Use Red Hat Enterprise Linux as the underlying operating system.

7.1. THE CEPH-ANSIBLE UTILITY

Configuring iSCSI gateway using ceph-ansible is no longer supported

Configuring the Ceph iSCSI gateway by using the ceph-ansible utility is no longer supported. Use ceph-ansible to install the gateway, and then use either the gwcli utility or the Red Hat Ceph Storage Dashboard to configure the gateway. For details, see the Using the Ceph iSCSI Gateway chapter in the Block Device Guide for Red Hat Ceph Storage 4.

7.2. THE CEPH-DISK UTILITY

ceph-disk is deprecated

With this release, the ceph-disk utility is no longer supported. The ceph-volume utility is used instead. For details, see the Why does ceph-volume replace `ceph-disk` section in the Administration Guide for Red Hat Ceph Storage 4.

7.3. RADOS

FileStore is no longer supported in production

The FileStore OSD back end is now deprecated, because the new BlueStore back end is now fully supported in production. For details, see the How to migrate the object store from FileStore to BlueStore section in the Red Hat Ceph Storage 4 Installation Guide.

The Ceph configuration file is now deprecated

The Ceph configuration file (ceph.conf) is now deprecated in favor of a new centralized configuration stored in the Monitors. For details, see the Configuration is now stored in Monitors accessible by using `ceph config` release note.

CHAPTER 7 DEPRECATED FUNCTIONALITY

19

CHAPTER 8. SOURCES

The updated Red Hat Ceph Storage source code packages are available at http://ftp.redhat.com/redhat/linux/enterprise/7Server/en/RHCEPH/SRPMS/.

Red Hat Ceph Storage 40 Release Notes

20

  • Table of Contents
  • CHAPTER 1 INTRODUCTION
  • CHAPTER 2 ACKNOWLEDGMENTS
  • CHAPTER 3 NEW FEATURES
    • 31 THE CEPH-ANSIBLE UTILITY
    • 32 CEPH MANAGEMENT DASHBOARD
    • 33 CEPH FILE SYSTEM
    • 34 CEPH MEDIC
    • 35 ISCSI GATEWAY
    • 36 OBJECT GATEWAY
    • 37 PACKAGES
    • 38 RADOS
    • 39 BLOCK DEVICES (RBD)
    • 310 RBD MIRRORING
      • CHAPTER 4 BUG FIXES
        • 41 THE CEPH-ANSIBLE UTILITY
        • 42 OBJECT GATEWAY
        • 43 RADOS
          • CHAPTER 5 TECHNOLOGY PREVIEWS
            • 51 CEPH FILE SYSTEM
            • 52 BLOCK DEVICES (RBD)
            • 53 OBJECT GATEWAY
              • CHAPTER 6 KNOWN ISSUES
                • 61 THE CEPH-ANSIBLE UTILITY
                • 62 CEPH MANAGEMENT DASHBOARD
                • 63 PACKAGES
                • 64 RED HAT ENTERPRISE LINUX
                  • CHAPTER 7 DEPRECATED FUNCTIONALITY
                    • 71 THE CEPH-ANSIBLE UTILITY
                    • 72 THE CEPH-DISK UTILITY
                    • 73 RADOS
                      • CHAPTER 8 SOURCES
Page 2: Red Hat Ceph Storage 4...May 20, 2020  · With this release, the Ceph dashboard installation code was merged into the Ceph Ansible playbooks. Ceph Ansible does a containerized deployment

Red Hat Ceph Storage 40 Release Notes

Release notes for Red Hat Ceph Storage 40

Legal Notice

Copyright copy 2020 Red Hat Inc

The text of and illustrations in this document are licensed by Red Hat under a Creative CommonsAttributionndashShare Alike 30 Unported license (CC-BY-SA) An explanation of CC-BY-SA isavailable athttpcreativecommonsorglicensesby-sa30 In accordance with CC-BY-SA if you distribute this document or an adaptation of it you mustprovide the URL for the original version

Red Hat as the licensor of this document waives the right to enforce and agrees not to assertSection 4d of CC-BY-SA to the fullest extent permitted by applicable law

Red Hat Red Hat Enterprise Linux the Shadowman logo the Red Hat logo JBoss OpenShiftFedora the Infinity logo and RHCE are trademarks of Red Hat Inc registered in the United Statesand other countries

Linux reg is the registered trademark of Linus Torvalds in the United States and other countries

Java reg is a registered trademark of Oracle andor its affiliates

XFS reg is a trademark of Silicon Graphics International Corp or its subsidiaries in the United Statesandor other countries

MySQL reg is a registered trademark of MySQL AB in the United States the European Union andother countries

Nodejs reg is an official trademark of Joyent Red Hat is not formally related to or endorsed by theofficial Joyent Nodejs open source or commercial project

The OpenStack reg Word Mark and OpenStack logo are either registered trademarksservice marksor trademarksservice marks of the OpenStack Foundation in the United States and othercountries and are used with the OpenStack Foundations permission We are not affiliated withendorsed or sponsored by the OpenStack Foundation or the OpenStack community

All other trademarks are the property of their respective owners

Abstract

The Release Notes document describes the major features and enhancements implemented in RedHat Ceph Storage in a particular release The document also includes known issues and bug fixes

Table of Contents

CHAPTER 1 INTRODUCTION

CHAPTER 2 ACKNOWLEDGMENTS

CHAPTER 3 NEW FEATURES31 THE CEPH-ANSIBLE UTILITY32 CEPH MANAGEMENT DASHBOARD33 CEPH FILE SYSTEM34 CEPH MEDIC35 ISCSI GATEWAY36 OBJECT GATEWAY37 PACKAGES38 RADOS39 BLOCK DEVICES (RBD)310 RBD MIRRORING

CHAPTER 4 BUG FIXES41 THE CEPH-ANSIBLE UTILITY42 OBJECT GATEWAY43 RADOS

CHAPTER 5 TECHNOLOGY PREVIEWS51 CEPH FILE SYSTEM52 BLOCK DEVICES (RBD)53 OBJECT GATEWAY

CHAPTER 6 KNOWN ISSUES61 THE CEPH-ANSIBLE UTILITY62 CEPH MANAGEMENT DASHBOARD63 PACKAGES64 RED HAT ENTERPRISE LINUX

CHAPTER 7 DEPRECATED FUNCTIONALITY71 THE CEPH-ANSIBLE UTILITY72 THE CEPH-DISK UTILITY73 RADOS

CHAPTER 8 SOURCES

3

4

556777888

1011

12121212

14141414

1616161818

19191919

20

Table of Contents

1

Red Hat Ceph Storage 40 Release Notes

2

CHAPTER 1 INTRODUCTIONRed Hat Ceph Storage is a massively scalable open software-defined storage platform that combinesthe most stable version of the Ceph storage system with a Ceph management platform deploymentutilities and support services

The Red Hat Ceph Storage documentation is available athttpsaccessredhatcomdocumentationenred-hat-ceph-storage

CHAPTER 1 INTRODUCTION

3

CHAPTER 2 ACKNOWLEDGMENTSRed Hat Ceph Storage 40 contains many contributions from the Red Hat Ceph Storage teamAdditionally the Ceph project is seeing amazing growth in the quality and quantity of contributions fromindividuals and organizations in the Ceph community We would like to thank all members of the Red HatCeph Storage team all of the individual contributors in the Ceph community and additionally (but notlimited to) the contributions from organizations such as

Intel

Fujitsu

UnitedStack

Yahoo

UbuntuKylin

Mellanox

CERN

Deutsche Telekom

Mirantis

SanDisk

SUSE

Red Hat Ceph Storage 40 Release Notes

4

CHAPTER 3 NEW FEATURESThis section lists all major updates enhancements and new features introduced in this release of RedHat Ceph Storage

The main features added by this release are

General availability of the BlueStore OSD back end

Ceph Management Dashboard

Ceph Object Gateway Beast Asio web server

Erasure coding for Ceph Block Device

Auto-scaling placement groups

Web-based installation interface

On-wire encryption

RBD performance monitoring and metrics gathering tools

Red Hat Enterprise Linux in FIPS mode

31 THE CEPH-ANSIBLE UTILITY

Ceph OSDs created with ceph-disk are migrated to ceph-volume during upgrade

When upgrading to Red Hat Ceph Storage 4 all running Ceph OSDs previously created by the ceph-disk utility will be migrated to the ceph-volume utility because ceph-disk has been deprecated in thisrelease

For bare-metal and container deployments of Red Hat Ceph Storage the ceph-volume utility does asimple scan and takes over the existing Ceph OSDs deployed by the ceph-disk utility Also do not usethese migrated devices in configurations for subsequent deployments Note that you cannot create anynew Ceph OSDs during the upgrade process

After the upgrade all Ceph OSDs created by ceph-disk will start and operate like any Ceph OSDscreated by ceph-volume

Ansible playbooks for scaling of all Ceph services

Previously ceph-ansible playbooks offered limited scale up and scale down features only to core Cephproducts such as Monitors and OSDs With this update additional Ansible playbooks allow for scaling ofall Ceph services

Ceph iSCSI packages merged into single package

The ceph-iscsi-cli and ceph-iscsi-config packages have been merged to one package named ceph-iscsi

The nfs-ganesha service is now supported as a standalone deployment

Red Hat Openstack Directory (OSPd) requires the ceph-ansible utility to be able to deploy the nfs-ganesha service and configure it so that it points to an external unmanaged pre-existing Ceph clusterAs of Red Hat Ceph Storage 4 ceph-ansible allows the deployment of an internal nfs-ganesha service

CHAPTER 3 NEW FEATURES

5

with an external Ceph cluster

Ceph container can now write logs to a respective daemon file

The previous way of logging for containerized Ceph environments did not allow limiting the journalctloutput when looking at log data by using the sosreport collection With this release logging can beenabled or disabled for a particular Ceph daemon with the following command

ceph config set daemonid log_to_file true

Where daemon is the type of the daemon and id is its ID For example to enable logging for the Monitordaemon with ID mon0

ceph config set monmon0 log_to_file true

This new feature makes debugging easier

Ability to configure Ceph Object Gateway to use TLS encryption

This release of Red Hat Ceph Storage provides the ability to configure the Ceph Object Gatewaylistener with an SSL certificate for TLS encryption by using the radosgw_frontend_ssl_certificatevariable to secure the Transmission Control Protocol (TCP) traffic

Ansible playbook for migrating OSDs from FileStore to BlueStore

A new Ansible playbook has been added to migrate OSDs from FileStore to BlueStore The object storemigration is not done as part of the upgrade process to Red Hat Ceph Storage 4 Do the migration afterthe upgrade completes For details see the How to migrate the object store from FileStore to BlueStoresection in the Red Hat Ceph Storage Administration Guide

32 CEPH MANAGEMENT DASHBOARD

Improvements to information for pool usage

With this enhancement valuable information was added to the pools table The following columns wereadded usage read bytes write bytes read operations and write operations Also the PlacementGroups column was renamed to Pg Status

Red Hat Ceph Storage Dashboard alerts

Red Hat Ceph Storage Dashboard supports alerts based on Ceph metrics and configured thresholdsThe Prometheus AlertManager configures gathers and triggers the alerts The alerts are displayed inthe Dashboard as a pop-up notifications in the upper-right corner You can view details of recent alertsin Cluster gt Alerts You can configure the alerts only in Prometheus but you can temporarily mute themfrom the Dashboard by creating Alert Silences in Cluster gt Silences

Displaying and hiding Ceph components in Dashboard

In the Red Hat Ceph Storage Dashboard you can display or hide Ceph components such as Ceph iSCSIRBD mirroring Ceph Block Devices Ceph File System or Ceph Object Gateway This feature allows youto hide components that are not configured

Ceph dashboard has been added to the Ceph Ansible playbooks

With this release the Ceph dashboard installation code was merged into the Ceph Ansible playbooksCeph Ansible does a containerized deployment of the Ceph dashboard regardless of the Red HatCeph Storage deployment type bare metal or containers These four new roles were added ceph-

Red Hat Ceph Storage 40 Release Notes

6

grafana ceph-dashboard ceph-prometheus and ceph-node-exporter

Red Hat Ceph Storage Dashboard alerts

Red Hat Ceph Storage Dashboard supports alerts based on Ceph metrics and configured thresholdsThe Prometheus AlertManager configures gathers and triggers the alerts

Viewing the cluster hierarchy from the Dashboard

Red Hat Ceph Storage Dashboard provides the ability to view the cluster hierarchy For details see theViewing the CRUSH map section in the Dashboard Guide for Red Hat Ceph Storage 4

33 CEPH FILE SYSTEM

ceph -w now shows information about CephFS scrubs

Previously it was not possible to check the ongoing Ceph File System (CephFS) scrubs status asidefrom checking the Metadata server (MDS) logs With this update the ceph -w command showsinformation about active CephFS scrubs to better understand the status

ceph-mgr volumes module for managing CephFS exports

This release provides Ceph Manager (ceph-mgr) volumes module to manage Ceph File System(CephFS) exports The volumes module implements the following file system export abstractions

FS volumes an abstraction for CephFS file systems

FS subvolumes an abstraction for independent CephFS directory trees

FS subvolume groups an abstraction for a directory higher than FS subvolumes to effectpolicies such as file layouts across a set of subvolumes

In addition these new commands are now supported

fs subvolume ls for listing subvolumes

fs subvolumegroup ls for listing subvolumes groups

fs subvolume snapshot ls for listing subvolume snapshots

fs subvolumegroup snapshot ls for listing subvolume group snapshots

fs subvolume rm for removing snapshots

34 CEPH MEDIC

ceph-medic can check the health of Ceph running in containers

With this release the ceph-medic utility can now check the health of a Red Hat Ceph Storage clusterrunning within a container

35 ISCSI GATEWAY

Non-administrators can now be used for the ceph-iscsi service

As of Red Hat Ceph Storage 4 non-administrative Ceph users may be used for the ceph-iscsi service

CHAPTER 3 NEW FEATURES

7

As of Red Hat Ceph Storage 4 non-administrative Ceph users may be used for the ceph-iscsi serviceby setting the cluster_client_name in the etccephiscsi-gatewaycfg on all iSCSI Gateways Thisallows resources to be restricted based on users

Running Ceph iSCSI Gateways can now be removed

As of Red Hat Ceph Storage 4 running iSCSI Gateways can now be removed from a Ceph iSCSI clusterfor maintenance or to reallocate resources The gatewayrsquos iSCSI target and its portals will be stoppedand all iSCSI target objects for that gateway will be removed from the kernel and the gatewayconfiguration Removing gateways that are down is not yet supported

36 OBJECT GATEWAY

The Beast HTTP front end

In Red Hat Ceph Storage 4 the default HTTP front end for the Ceph Object Gateway is Beast TheBeast front end uses the BoostBeast library for HTTP parsing and the BoostAsio library forasynchronous IO For details see the Using the Beast front end in the Object Gateway Configurationand Administration Guide for Red Hat Ceph Storage 4

Support for S3 MFA-Delete

With this release the Ceph Object Gateway supports S3 MFA-Delete using Time-Based One-TimePassword (TOTP) one-time passwords as an authentication factor This feature adds security againstinappropriate data removal You can configure buckets to require a TOTP one-time token in addition tostandard S3 authentication to delete data

Addition of storage classes and lifecycle transitions to the Ceph Object Gateway

The Ceph Object Gateway now provides support for storage classes an S3 compatible representationof the Ceph Object Gatewayrsquos underlying placement targets and for lifecycle transitions a mechanismto migrate objects between classes Storage classes and lifecycle transitions provide a higher level ofcontrol over data placement for the applications that need it They have many potential uses includingperformance tuning for specific workloads as well as automatic migration of inactive data to coldstorage

Users can now create new IAM policies and roles using REST APIs

With the release of Red Hat Ceph Storage 4 REST APIs for IAM roles and user policies are now availablein the same namespace as S3 APIs and can be accessed using the same endpoint as S3 APIs in theCeph Object Gateway This allows end users to create new IAM policies and roles using REST APIs

37 PACKAGES

Ability to install a Ceph cluster using a web-based interface

With this release the Cockpit web-based interface is supported Cockpit allows you to install a Red HatCeph Storage 4 cluster and other components such as Metadata Servers Ceph clients or the CephObject Gateway on base-metal or in containers For details see the Installing Red Hat Ceph Storageusing the Cockpit Web User Interface chapter in the Red Hat Ceph Storage 4 Installation Guide Notethat minimal experience with Red Hat Ceph Storage is required

38 RADOS

Ceph on-wire encryption

Starting with Red Hat Ceph Storage 4 you can enable encryption for all Ceph traffic over the network

Red Hat Ceph Storage 40 Release Notes

8

with the introduction of the messenger version 2 protocol For details see the Ceph on-wire encryptionchapter in the Architecture Guide and Encryption in transit section in the Data Security and HardeningGuide for Red Hat Ceph Storage 4

OSD BlueStore is now fully supported

BlueStore is a new back end for the OSD daemons that allows for storing objects directly on the blockdevices Because BlueStore does not need any file system interface it improves performance of Cephstorage clusters To learn more about the BlueStore OSD back end see the OSD BlueStore chapter inthe Administration Guide for Red Hat Ceph Storage 4

Red Hat Enterprise Linux in FIPS mode

With this release you can install Red Hat Ceph Storage on Red Hat Enterprise Linux where the FIPSmode is enabled

Changes the ceph df output and a new ceph osd df command

The output of the ceph df command has been improved Notably the RAW USED and RAW USEDvalues now show the preallocated space for the db and wal BlueStore partitions The ceph osd dfcommand shows the OSD utilization stats such as the amount of written data

Asynchronous recovery for non-acting OSD sets

Previously recovery with Ceph was a synchronous process by blocking write operations to objects untilthose objects were recovered In this release the recovery process is now asynchronous by not blockingwrite operations to objects only in the non-acting set of OSDs This new feature requires having morethan the minimum number of replicas as to have enough OSDs in the non-acting set

The new configuration option osd_async_recovery_min_cost controls how much asynchronousrecovery to do The default value for this option is 100 A higher value means asynchronous recovery willbe less whereas a lower value means asynchronous recovery will be more

Configuration is now stored in Monitors accessible by using ceph config

In this release Red Hat Ceph Storage centralizes configuration in Monitors instead of using the Cephconfiguration file (cephconf) Previously changing configuration included manually updating cephconf distributing it to appropriate nodes and restarting all affected daemons Now the Monitorsmanage a configuration database that has the same semantic structure as cephconf The database isaccessible by the ceph config command Any changes to the configuration are applied to daemons orclients in the system immediately and restarting them is no longer needed Use the ceph config -hcommand for details on the available set of commands Note that a Ceph configuration file is stillrequired to identify the Monitors nodes

Placement groups can now be auto-scaled

Red Hat Ceph Storage 4 introduces the ability to auto-scale placement groups (PGs) The number ofplacement groups (PGs) in a pool plays a significant role in how a cluster peers distributes data andrebalances Auto-scaling the number of PGs can make managing the cluster easier The new pg-autoscaling command provides recommendations for scaling PGs or automatically scales PGs basedon how the cluster is being used For more details about auto-scaling PGs see the Auto-scalingplacement groups section in the Storage Strategies Guide for Red Hat Ceph Storage 4

Introduction of diskprediction module

The Red Hat Ceph Storage diskprediction module gathers metrics to predict disk failures before theyhappen The module has two modes cloud and local With this release only the local mode is supportedThe local mode does not require any external server for data analysis It uses an internal predictormodule for disk prediction service and then returns the disk prediction result to the Ceph system

CHAPTER 3 NEW FEATURES

9

To enable the diskprediction module

ceph mgr module enable diskprediction_local

To set the prediction mode

ceph config set global device_failure_prediction_mode local

To disable the diskprediction module

ceph config set global device_failure_prediction_mode none

New configurable option mon_memory_target

Red Hat Ceph Storage 4 introduces a new configurable option mon_memory_target used to set thetarget amount of bytes for Monitor memory usage It specifies the amount of memory to allocate andmanage using the priority cache tuner for the associated Monitor daemon caches The default value of mon_memory_target is set to 2 GiB and you can change it during runtime with

ceph config set global mon_memory_target size

Prior to this release as a cluster scaled the Monitor specific RSS usage exceeded the limits that wereset using the mon_osd_cache_size option which led to issues This enhancement allows for improvedmanagement of memory allocated to the monitor caches and keeps the usage within specified limits

39 BLOCK DEVICES (RBD)

Erasure coding for Ceph Block Device

Erasure coding for Ceph Block Device (RBD) is now fully supported This feature allows RBD to storetheir data in an erasure coded pool For details see the Erasure Coding with Overwrites section in theStorage Strategies for Red Hat Ceph Storage 4

RBD performance monitoring and metrics gathering tools

Red Hat Ceph Storage 4 now incorporates new Ceph Block Device performance monitoring utilities foraggregated RBD image metrics for IOPS throughput and latency Per-image RBD metrics are nowavailable using the Ceph Manager Prometheus module the Ceph Dashboard and the rbd CLI using the rbd perf image iostat or rbd perf image iotop commands

Cloned images can be created from non-primary images

Creating cloned child RBD images from mirrored non-primary parent image is now supportedPreviously cloning of mirrored images was only supported for primary images When cloning goldenimages for virtual machines this restriction prevented the creation of new cloned images from thegolden non-primary image This update removes this restriction and cloned images can be created fromnon-primary mirrored images

Segregating RBD images within isolated namespaces within the same pool

RBD images can now be segregated within isolated namespaces within the same pool When using CephBlock Devices directly without a higher-level system such as OpenStack or OpenShift ContainerStorage it was not possible to restrict user access to specific RBD images When combined with CephXcapabilities users can be restricted to specific pool namespaces to restrict access to RBD images

Red Hat Ceph Storage 40 Release Notes

10

Moving RBD images between different pools within the same cluster

This version of Red Hat Ceph Storage adds the ability to move RBD images between different poolswithin the same cluster For details see the Moving images between pools section in the Block DeviceGuide for Red Hat Ceph Storage 4

Long-running RBD operations can run in the background

Long-running RBD operations, such as image removal or cloned image flattening, can now be scheduled to run in the background. RBD operations that involve iterating over every backing RADOS object for the image can take a long time, depending on the size of the image. When using the CLI to perform one of these operations, the rbd CLI is blocked until the operation is complete. These operations can now be scheduled to run by the Ceph Manager as a background task by using the ceph rbd task add commands. The progress of these tasks is visible on the Ceph Dashboard as well as by using the CLI.
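
For example, with placeholder image names, removal and flatten operations can be queued as Ceph Manager tasks and then tracked:

ceph rbd task add remove rbd/old-image
ceph rbd task add flatten rbd/cloned-image
ceph rbd task list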

3.10. RBD MIRRORING

Support for multiple active instances of RBD mirror daemon in a single storage cluster

Red Hat Ceph Storage 4 now supports deploying multiple active instances of the RBD mirror daemon in a single storage cluster. This enables multiple RBD mirror daemons to perform replication for the RBD images or pools, using an algorithm for chunking the images across the number of active mirroring daemons.


CHAPTER 4. BUG FIXES

This section describes bugs with significant impact on users that were fixed in this release of Red Hat Ceph Storage. In addition, the section includes descriptions of fixed known issues found in previous versions.

4.1. THE CEPH-ANSIBLE UTILITY

The Ansible playbook no longer takes hours to complete the fuser command

Previously, on a system running thousands of processes, the fuser command in the Ansible playbook could take several minutes or hours to complete because it iterates over all PIDs present in the /proc directory. Because of this, the handler tasks took a long time to complete checking if a Ceph process is already running. With this update, instead of using the fuser command, the Ansible playbook now checks the socket file in the /proc/net/unix directory, and the handler tasks that check the Ceph socket complete almost instantly.

(BZ1717011)

The purge-docker-cluster.yml Ansible playbook no longer fails

Previously, the purge-docker-cluster.yml Ansible playbook could fail when trying to unmap RADOS Block Devices (RBDs) because either the binary was absent or because the Atomic host version provided was too old. With this update, Ansible now uses the sysfs method to unmap devices if there are any, and the purge-docker-cluster.yml playbook no longer fails.

(BZ1766064)

4.2. OBJECT GATEWAY

The Expiration Days S3 Lifecycle parameter can now be set to 0

The Ceph Object Gateway did not accept the value of 0 for the Expiration Days Lifecycle configuration parameter. Consequently, setting the expiration to 0 could not be used to trigger the background delete operation of objects. With this update, Expiration Days can be set to 0 as expected.

(BZ1493476)

Archive zone fetches the current version of the source objects

An object is synced from multiple source zones to the archive zone. This behavior can lead to different versions of the same object on the archive zone. This update prevents duplicate versions from being created by ensuring that the archive zone fetches the current version of the source object.

(BZ1760862)

4.3. RADOS

A message to set expected_num_objects is no longer shown when using BlueStore

With this update, a message that recommends setting the expected_num_objects parameter during the creation of BlueStore pools has been removed, because this message does not apply when using the BlueStore OSD back end.

(BZ1650922)


Deprecated JSON fields have been removed

This update removes deprecated fields from the JSON output of the ceph status command.

(BZ1739233)


CHAPTER 5. TECHNOLOGY PREVIEWS

This section provides an overview of Technology Preview features introduced or updated in this release of Red Hat Ceph Storage.

IMPORTANT

Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information on the Red Hat Technology Preview features support scope, see https://access.redhat.com/support/offerings/techpreview/.

5.1. CEPH FILE SYSTEM

CephFS snapshots

The Ceph File System (CephFS) supports taking snapshots as a Technology Preview. Snapshots create an immutable view of the file system at the point in time they are taken.

5.2. BLOCK DEVICES (RBD)

Mapping RBD images to NBD images

The rbd-nbd utility maps RADOS Block Device (RBD) images to Network Block Devices (NBD) and enables Ceph clients to access volumes and images in Kubernetes environments. To use rbd-nbd, install the rbd-nbd package. For details, see the rbd-nbd(7) manual page.
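
A short usage sketch, assuming an image named rbd/image1 and that the mapping ends up on /dev/nbd0:

yum install rbd-nbd
rbd-nbd map rbd/image1
rbd-nbd list-mapped
rbd-nbd unmap /dev/nbd0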

5.3. OBJECT GATEWAY

Object Gateway archive site

With this release, an archive site is supported as a Technology Preview. The archive site allows you to have a history of versions of S3 objects that can only be eliminated through the gateways associated with the archive zone. Including an archive zone in a multizone configuration allows you to have the flexibility of an S3 object history in only one zone, while saving the space that the replicas of the versioned S3 objects would consume in the rest of the zones.
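
A hedged sketch of adding an archive zone to an existing zonegroup; the zonegroup name, endpoint, and keys are placeholders:

radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=archive --endpoints=http://archive-host:8080 --tier-type=archive --access-key=<key> --secret=<secret>
radosgw-admin period update --commit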

Tiering within a cluster by disk type

This release adds the ability to tier within a cluster by disk type as a Technology Preview, by using the capabilities for mapping, for example, pools to placement targets and storage classes. Using lifecycle transition rules, it is possible to cause objects to migrate between storage pools based on policy.
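
A sketch with hypothetical names of adding a COLD storage class, backed by its own data pool, to the default placement target:

radosgw-admin zonegroup placement add --rgw-zonegroup default --placement-id default-placement --storage-class COLD
radosgw-admin zone placement add --rgw-zone default --placement-id default-placement --storage-class COLD --data-pool default.rgw.cold.data

Lifecycle transition rules can then reference the COLD storage class to migrate objects into that pool based on policy.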

S3 bucket notifications

S3 bucket notifications are now supported as a Technology Preview. When certain events are triggered on an S3 bucket, the notifications can be sent from the Ceph Object Gateway to HTTP, Advanced Message Queuing Protocol (AMQP) 0.9.1, and Kafka endpoints. Additionally, the notifications can be stored in a "PubSub" zone instead of, or in addition to, sending them to the endpoints. "PubSub" is a publish-subscribe model that enables recipients to pull notifications from Ceph.


To use the S3 notifications, install the librabbitmq and librdkafka packages.


CHAPTER 6. KNOWN ISSUES

This section documents known issues found in this release of Red Hat Ceph Storage.

6.1. THE CEPH-ANSIBLE UTILITY

Red Hat Ceph Storage installation on Red Hat OpenStack Platform fails

The ceph-ansible utility becomes unresponsive when attempting to install Red Hat Ceph Storage together with Red Hat OpenStack Platform 16, and it returns an error similar to the following one:

Error: unable to exec into ceph-mon-dcn1-computehci1-2: no container with name or ID ceph-mon-dcn1-computehci1-2 found: no such container

To work around this issue, update the following part of the handler_osds.yml file located in the ceph-ansible/roles/ceph-handler/tasks/ directory:

- name: unset noup flag
  command: "{{ container_exec_cmd | default('') }} ceph --cluster {{ cluster }} osd unset noup"
  delegate_to: "{{ groups[mon_group_name][0] }}"
  changed_when: False

To:

- name: unset noup flag
  command: "{{ hostvars[groups[mon_group_name][0]]['container_exec_cmd'] | default('') }} ceph --cluster {{ cluster }} osd unset noup"
  delegate_to: "{{ groups[mon_group_name][0] }}"
  changed_when: False

And start the installation process again.

(BZ1792320)

Ansible does not unset the norebalance flag after it completes

The rolling-update.yml Ansible playbook does not unset the norebalance flag after it completes. To work around this issue, unset the flag manually.
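
For reference, the flag can be cleared manually with:

ceph osd unset norebalance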

(BZ1793564)

Ansible fails to upgrade a multisite Ceph Object Gateway when the Dashboard is enabled

When the Red Hat Ceph Storage Dashboard is enabled, an attempt to use Ansible to upgrade to a later version of Red Hat Ceph Storage fails when trying to upgrade the secondary Ceph Object Gateway site in a multisite setup. This bug does not occur on the primary site or if the Dashboard is not enabled.

(BZ1794351)

6.2. CEPH MANAGEMENT DASHBOARD

The Dashboard does not show correct values for certain configuration options

Both the Red Hat Ceph Storage Dashboard and the underlying ceph config show command do not return the current values for certain configuration options, such as fsid. This is probably because certain core options that are not meant for further modification after deploying a cluster are not updated, and a default value is used. As a result, the Dashboard does not show correct values for certain configuration options.

(BZ1765493 BZ1772310)

NFS Ganesha in Dashboard

The Red Hat Ceph Storage Dashboard currently does not support managing NFS Ganesha.

(BZ1772316)

Dashboard does not support email verification

The Red Hat Ceph Storage Dashboard does not support any email verification when changing a user's password. This behavior is intentional, because the Dashboard supports Single Sign-On (SSO), and this feature can be delegated to the SSO provider.

(BZ1778608)

The OSD histogram graph for read and write operations is not clear

The Red Hat Ceph Storage Dashboard does not display any numbers or description in the OSD histogram graph for read and write operations, and therefore the graph is not clear.

(BZ1779170)

Dashboard returns an error when a ceph-mgr module is enabled from the ceph CLI

When enabling a Ceph Manager (ceph-mgr) module, such as telemetry, from the ceph CLI, the Red Hat Ceph Storage Dashboard displays the following error message:

0 - Unknown Error

In addition, the Dashboard does not mark the module as enabled until the Refresh button is clicked.

(BZ1785077)

The Dashboard allows modifying the LUN ID and WWN which can lead to data corruption

The Red Hat Ceph Storage Dashboard allows you to modify the LUN ID and its World Wide Name (WWN), which is not required after creating the LUN. Moreover, editing these parameters can be dangerous for certain initiators that do not fully support this feature by default. Consequently, editing these parameters after creating them can lead to data corruption. To avoid this, do not modify the LUN ID and WWN in the Dashboard.

(BZ1786455)

The Dashboard does not provide correct Ceph iSCSI error messages

If Ceph iSCSI returns an error, for example the HTTP 400 code when trying to delete an iSCSI target while a user is logged in, the Red Hat Ceph Storage Dashboard does not forward that error code and message to the Dashboard user using the pop-up notifications, but displays a generic 500 Internal Server Error. Consequently, the message that the Dashboard provides is not informative and even misleading; an expected behavior (users cannot delete a busy resource) is perceived as an operational failure (internal server error). To work around this issue, see the Dashboard logs.

(BZ1786457)


The Dashboard requires iptables rules to be disabled

The Red Hat Ceph Storage Dashboard is unable to perform any iSCSI operations, such as creating a gateway, unless all iptables rules are manually disabled on the Ceph iSCSI node. To do so, use the following command as root or the sudo user:

iptables -F

Note that after a reboot, the rules are enabled again. Either disable them again, or delete them permanently.

(BZ1792818)

6.3. PACKAGES

Current version of Grafana causes certain bugs in the Dashboard

Red Hat Ceph Storage 4 uses Grafana version 5.2.4. This version causes the following bugs in the Red Hat Ceph Storage Dashboard:

When navigating to Pools > Overall Performance, Grafana returns the following error:

TypeError: l.c[t.type] is undefined
true

When viewing a pool's performance details (Pools > select a pool from the list > Performance Details), the Grafana bar is displayed along with other graphs and values, but it should not be there.

These bugs will be fixed after rebasing to a newer Grafana version in a future release of Red Hat Ceph Storage.

(BZ1786107 BZ1762197 BZ1765536)

6.4. RED HAT ENTERPRISE LINUX

Ansible cannot start NFS Ganesha if SELinux is in enforcing mode

When using SELinux in enforcing mode on Red Hat Enterprise Linux 8.1, the ceph-ansible utility fails to start the NFS Ganesha service because the SELinux policy currently does not allow creating a directory required for NFS Ganesha.

(BZ1794027 BZ1796160)


CHAPTER 7. DEPRECATED FUNCTIONALITY

This section provides an overview of functionality that has been deprecated in all minor releases up to this release of Red Hat Ceph Storage.

Ubuntu is no longer supported

Installing a Red Hat Ceph Storage 4 cluster on Ubuntu is no longer supported. Use Red Hat Enterprise Linux as the underlying operating system.

7.1. THE CEPH-ANSIBLE UTILITY

Configuring iSCSI gateway using ceph-ansible is no longer supported

Configuring the Ceph iSCSI gateway by using the ceph-ansible utility is no longer supported. Use ceph-ansible to install the gateway, and then use either the gwcli utility or the Red Hat Ceph Storage Dashboard to configure the gateway. For details, see the Using the Ceph iSCSI Gateway chapter in the Block Device Guide for Red Hat Ceph Storage 4.

7.2. THE CEPH-DISK UTILITY

ceph-disk is deprecated

With this release, the ceph-disk utility is no longer supported. The ceph-volume utility is used instead. For details, see the Why does ceph-volume replace `ceph-disk` section in the Administration Guide for Red Hat Ceph Storage 4.

7.3. RADOS

FileStore is no longer supported in production

The FileStore OSD back end is now deprecated because the new BlueStore back end is now fully supported in production. For details, see the How to migrate the object store from FileStore to BlueStore section in the Red Hat Ceph Storage 4 Installation Guide.

The Ceph configuration file is now deprecated

The Ceph configuration file (ceph.conf) is now deprecated in favor of the new centralized configuration stored in Monitors. For details, see the Configuration is now stored in Monitors accessible by using `ceph config` release note.


CHAPTER 8. SOURCES

The updated Red Hat Ceph Storage source code packages are available at http://ftp.redhat.com/redhat/linux/enterprise/7Server/en/RHCEPH/SRPMS/.

Page 3: Red Hat Ceph Storage 4...May 20, 2020  · With this release, the Ceph dashboard installation code was merged into the Ceph Ansible playbooks. Ceph Ansible does a containerized deployment

Legal Notice

Copyright copy 2020 Red Hat Inc

The text of and illustrations in this document are licensed by Red Hat under a Creative CommonsAttributionndashShare Alike 30 Unported license (CC-BY-SA) An explanation of CC-BY-SA isavailable athttpcreativecommonsorglicensesby-sa30 In accordance with CC-BY-SA if you distribute this document or an adaptation of it you mustprovide the URL for the original version

Red Hat as the licensor of this document waives the right to enforce and agrees not to assertSection 4d of CC-BY-SA to the fullest extent permitted by applicable law

Red Hat Red Hat Enterprise Linux the Shadowman logo the Red Hat logo JBoss OpenShiftFedora the Infinity logo and RHCE are trademarks of Red Hat Inc registered in the United Statesand other countries

Linux reg is the registered trademark of Linus Torvalds in the United States and other countries

Java reg is a registered trademark of Oracle andor its affiliates

XFS reg is a trademark of Silicon Graphics International Corp or its subsidiaries in the United Statesandor other countries

MySQL reg is a registered trademark of MySQL AB in the United States the European Union andother countries

Nodejs reg is an official trademark of Joyent Red Hat is not formally related to or endorsed by theofficial Joyent Nodejs open source or commercial project

The OpenStack reg Word Mark and OpenStack logo are either registered trademarksservice marksor trademarksservice marks of the OpenStack Foundation in the United States and othercountries and are used with the OpenStack Foundations permission We are not affiliated withendorsed or sponsored by the OpenStack Foundation or the OpenStack community

All other trademarks are the property of their respective owners

Abstract

The Release Notes document describes the major features and enhancements implemented in RedHat Ceph Storage in a particular release The document also includes known issues and bug fixes

Table of Contents

CHAPTER 1 INTRODUCTION

CHAPTER 2 ACKNOWLEDGMENTS

CHAPTER 3 NEW FEATURES31 THE CEPH-ANSIBLE UTILITY32 CEPH MANAGEMENT DASHBOARD33 CEPH FILE SYSTEM34 CEPH MEDIC35 ISCSI GATEWAY36 OBJECT GATEWAY37 PACKAGES38 RADOS39 BLOCK DEVICES (RBD)310 RBD MIRRORING

CHAPTER 4 BUG FIXES41 THE CEPH-ANSIBLE UTILITY42 OBJECT GATEWAY43 RADOS

CHAPTER 5 TECHNOLOGY PREVIEWS51 CEPH FILE SYSTEM52 BLOCK DEVICES (RBD)53 OBJECT GATEWAY

CHAPTER 6 KNOWN ISSUES61 THE CEPH-ANSIBLE UTILITY62 CEPH MANAGEMENT DASHBOARD63 PACKAGES64 RED HAT ENTERPRISE LINUX

CHAPTER 7 DEPRECATED FUNCTIONALITY71 THE CEPH-ANSIBLE UTILITY72 THE CEPH-DISK UTILITY73 RADOS

CHAPTER 8 SOURCES

3

4

556777888

1011

12121212

14141414

1616161818

19191919

20

Table of Contents

1

Red Hat Ceph Storage 40 Release Notes

2

CHAPTER 1 INTRODUCTIONRed Hat Ceph Storage is a massively scalable open software-defined storage platform that combinesthe most stable version of the Ceph storage system with a Ceph management platform deploymentutilities and support services

The Red Hat Ceph Storage documentation is available athttpsaccessredhatcomdocumentationenred-hat-ceph-storage

CHAPTER 1 INTRODUCTION

3

CHAPTER 2 ACKNOWLEDGMENTSRed Hat Ceph Storage 40 contains many contributions from the Red Hat Ceph Storage teamAdditionally the Ceph project is seeing amazing growth in the quality and quantity of contributions fromindividuals and organizations in the Ceph community We would like to thank all members of the Red HatCeph Storage team all of the individual contributors in the Ceph community and additionally (but notlimited to) the contributions from organizations such as

Intel

Fujitsu

UnitedStack

Yahoo

UbuntuKylin

Mellanox

CERN

Deutsche Telekom

Mirantis

SanDisk

SUSE

Red Hat Ceph Storage 40 Release Notes

4

CHAPTER 3 NEW FEATURESThis section lists all major updates enhancements and new features introduced in this release of RedHat Ceph Storage

The main features added by this release are

General availability of the BlueStore OSD back end

Ceph Management Dashboard

Ceph Object Gateway Beast Asio web server

Erasure coding for Ceph Block Device

Auto-scaling placement groups

Web-based installation interface

On-wire encryption

RBD performance monitoring and metrics gathering tools

Red Hat Enterprise Linux in FIPS mode

31 THE CEPH-ANSIBLE UTILITY

Ceph OSDs created with ceph-disk are migrated to ceph-volume during upgrade

When upgrading to Red Hat Ceph Storage 4 all running Ceph OSDs previously created by the ceph-disk utility will be migrated to the ceph-volume utility because ceph-disk has been deprecated in thisrelease

For bare-metal and container deployments of Red Hat Ceph Storage the ceph-volume utility does asimple scan and takes over the existing Ceph OSDs deployed by the ceph-disk utility Also do not usethese migrated devices in configurations for subsequent deployments Note that you cannot create anynew Ceph OSDs during the upgrade process

After the upgrade all Ceph OSDs created by ceph-disk will start and operate like any Ceph OSDscreated by ceph-volume

Ansible playbooks for scaling of all Ceph services

Previously ceph-ansible playbooks offered limited scale up and scale down features only to core Cephproducts such as Monitors and OSDs With this update additional Ansible playbooks allow for scaling ofall Ceph services

Ceph iSCSI packages merged into single package

The ceph-iscsi-cli and ceph-iscsi-config packages have been merged to one package named ceph-iscsi

The nfs-ganesha service is now supported as a standalone deployment

Red Hat Openstack Directory (OSPd) requires the ceph-ansible utility to be able to deploy the nfs-ganesha service and configure it so that it points to an external unmanaged pre-existing Ceph clusterAs of Red Hat Ceph Storage 4 ceph-ansible allows the deployment of an internal nfs-ganesha service

CHAPTER 3 NEW FEATURES

5

with an external Ceph cluster

Ceph container can now write logs to a respective daemon file

The previous way of logging for containerized Ceph environments did not allow limiting the journalctloutput when looking at log data by using the sosreport collection With this release logging can beenabled or disabled for a particular Ceph daemon with the following command

ceph config set daemonid log_to_file true

Where daemon is the type of the daemon and id is its ID For example to enable logging for the Monitordaemon with ID mon0

ceph config set monmon0 log_to_file true

This new feature makes debugging easier

Ability to configure Ceph Object Gateway to use TLS encryption

This release of Red Hat Ceph Storage provides the ability to configure the Ceph Object Gatewaylistener with an SSL certificate for TLS encryption by using the radosgw_frontend_ssl_certificatevariable to secure the Transmission Control Protocol (TCP) traffic

Ansible playbook for migrating OSDs from FileStore to BlueStore

A new Ansible playbook has been added to migrate OSDs from FileStore to BlueStore The object storemigration is not done as part of the upgrade process to Red Hat Ceph Storage 4 Do the migration afterthe upgrade completes For details see the How to migrate the object store from FileStore to BlueStoresection in the Red Hat Ceph Storage Administration Guide

32 CEPH MANAGEMENT DASHBOARD

Improvements to information for pool usage

With this enhancement valuable information was added to the pools table The following columns wereadded usage read bytes write bytes read operations and write operations Also the PlacementGroups column was renamed to Pg Status

Red Hat Ceph Storage Dashboard alerts

Red Hat Ceph Storage Dashboard supports alerts based on Ceph metrics and configured thresholdsThe Prometheus AlertManager configures gathers and triggers the alerts The alerts are displayed inthe Dashboard as a pop-up notifications in the upper-right corner You can view details of recent alertsin Cluster gt Alerts You can configure the alerts only in Prometheus but you can temporarily mute themfrom the Dashboard by creating Alert Silences in Cluster gt Silences

Displaying and hiding Ceph components in Dashboard

In the Red Hat Ceph Storage Dashboard you can display or hide Ceph components such as Ceph iSCSIRBD mirroring Ceph Block Devices Ceph File System or Ceph Object Gateway This feature allows youto hide components that are not configured

Ceph dashboard has been added to the Ceph Ansible playbooks

With this release the Ceph dashboard installation code was merged into the Ceph Ansible playbooksCeph Ansible does a containerized deployment of the Ceph dashboard regardless of the Red HatCeph Storage deployment type bare metal or containers These four new roles were added ceph-

Red Hat Ceph Storage 40 Release Notes

6

grafana ceph-dashboard ceph-prometheus and ceph-node-exporter

Red Hat Ceph Storage Dashboard alerts

Red Hat Ceph Storage Dashboard supports alerts based on Ceph metrics and configured thresholdsThe Prometheus AlertManager configures gathers and triggers the alerts

Viewing the cluster hierarchy from the Dashboard

Red Hat Ceph Storage Dashboard provides the ability to view the cluster hierarchy For details see theViewing the CRUSH map section in the Dashboard Guide for Red Hat Ceph Storage 4

33 CEPH FILE SYSTEM

ceph -w now shows information about CephFS scrubs

Previously it was not possible to check the ongoing Ceph File System (CephFS) scrubs status asidefrom checking the Metadata server (MDS) logs With this update the ceph -w command showsinformation about active CephFS scrubs to better understand the status

ceph-mgr volumes module for managing CephFS exports

This release provides Ceph Manager (ceph-mgr) volumes module to manage Ceph File System(CephFS) exports The volumes module implements the following file system export abstractions

FS volumes an abstraction for CephFS file systems

FS subvolumes an abstraction for independent CephFS directory trees

FS subvolume groups an abstraction for a directory higher than FS subvolumes to effectpolicies such as file layouts across a set of subvolumes

In addition these new commands are now supported

fs subvolume ls for listing subvolumes

fs subvolumegroup ls for listing subvolumes groups

fs subvolume snapshot ls for listing subvolume snapshots

fs subvolumegroup snapshot ls for listing subvolume group snapshots

fs subvolume rm for removing snapshots

34 CEPH MEDIC

ceph-medic can check the health of Ceph running in containers

With this release the ceph-medic utility can now check the health of a Red Hat Ceph Storage clusterrunning within a container

35 ISCSI GATEWAY

Non-administrators can now be used for the ceph-iscsi service

As of Red Hat Ceph Storage 4 non-administrative Ceph users may be used for the ceph-iscsi service

CHAPTER 3 NEW FEATURES

7

As of Red Hat Ceph Storage 4 non-administrative Ceph users may be used for the ceph-iscsi serviceby setting the cluster_client_name in the etccephiscsi-gatewaycfg on all iSCSI Gateways Thisallows resources to be restricted based on users

Running Ceph iSCSI Gateways can now be removed

As of Red Hat Ceph Storage 4 running iSCSI Gateways can now be removed from a Ceph iSCSI clusterfor maintenance or to reallocate resources The gatewayrsquos iSCSI target and its portals will be stoppedand all iSCSI target objects for that gateway will be removed from the kernel and the gatewayconfiguration Removing gateways that are down is not yet supported

36 OBJECT GATEWAY

The Beast HTTP front end

In Red Hat Ceph Storage 4 the default HTTP front end for the Ceph Object Gateway is Beast TheBeast front end uses the BoostBeast library for HTTP parsing and the BoostAsio library forasynchronous IO For details see the Using the Beast front end in the Object Gateway Configurationand Administration Guide for Red Hat Ceph Storage 4

Support for S3 MFA-Delete

With this release the Ceph Object Gateway supports S3 MFA-Delete using Time-Based One-TimePassword (TOTP) one-time passwords as an authentication factor This feature adds security againstinappropriate data removal You can configure buckets to require a TOTP one-time token in addition tostandard S3 authentication to delete data

Addition of storage classes and lifecycle transitions to the Ceph Object Gateway

The Ceph Object Gateway now provides support for storage classes an S3 compatible representationof the Ceph Object Gatewayrsquos underlying placement targets and for lifecycle transitions a mechanismto migrate objects between classes Storage classes and lifecycle transitions provide a higher level ofcontrol over data placement for the applications that need it They have many potential uses includingperformance tuning for specific workloads as well as automatic migration of inactive data to coldstorage

Users can now create new IAM policies and roles using REST APIs

With the release of Red Hat Ceph Storage 4 REST APIs for IAM roles and user policies are now availablein the same namespace as S3 APIs and can be accessed using the same endpoint as S3 APIs in theCeph Object Gateway This allows end users to create new IAM policies and roles using REST APIs

37 PACKAGES

Ability to install a Ceph cluster using a web-based interface

With this release the Cockpit web-based interface is supported Cockpit allows you to install a Red HatCeph Storage 4 cluster and other components such as Metadata Servers Ceph clients or the CephObject Gateway on base-metal or in containers For details see the Installing Red Hat Ceph Storageusing the Cockpit Web User Interface chapter in the Red Hat Ceph Storage 4 Installation Guide Notethat minimal experience with Red Hat Ceph Storage is required

38 RADOS

Ceph on-wire encryption

Starting with Red Hat Ceph Storage 4 you can enable encryption for all Ceph traffic over the network

Red Hat Ceph Storage 40 Release Notes

8

with the introduction of the messenger version 2 protocol For details see the Ceph on-wire encryptionchapter in the Architecture Guide and Encryption in transit section in the Data Security and HardeningGuide for Red Hat Ceph Storage 4

OSD BlueStore is now fully supported

BlueStore is a new back end for the OSD daemons that allows for storing objects directly on the blockdevices Because BlueStore does not need any file system interface it improves performance of Cephstorage clusters To learn more about the BlueStore OSD back end see the OSD BlueStore chapter inthe Administration Guide for Red Hat Ceph Storage 4

Red Hat Enterprise Linux in FIPS mode

With this release you can install Red Hat Ceph Storage on Red Hat Enterprise Linux where the FIPSmode is enabled

Changes the ceph df output and a new ceph osd df command

The output of the ceph df command has been improved Notably the RAW USED and RAW USEDvalues now show the preallocated space for the db and wal BlueStore partitions The ceph osd dfcommand shows the OSD utilization stats such as the amount of written data

Asynchronous recovery for non-acting OSD sets

Previously recovery with Ceph was a synchronous process by blocking write operations to objects untilthose objects were recovered In this release the recovery process is now asynchronous by not blockingwrite operations to objects only in the non-acting set of OSDs This new feature requires having morethan the minimum number of replicas as to have enough OSDs in the non-acting set

The new configuration option osd_async_recovery_min_cost controls how much asynchronousrecovery to do The default value for this option is 100 A higher value means asynchronous recovery willbe less whereas a lower value means asynchronous recovery will be more

Configuration is now stored in Monitors accessible by using ceph config

In this release Red Hat Ceph Storage centralizes configuration in Monitors instead of using the Cephconfiguration file (cephconf) Previously changing configuration included manually updating cephconf distributing it to appropriate nodes and restarting all affected daemons Now the Monitorsmanage a configuration database that has the same semantic structure as cephconf The database isaccessible by the ceph config command Any changes to the configuration are applied to daemons orclients in the system immediately and restarting them is no longer needed Use the ceph config -hcommand for details on the available set of commands Note that a Ceph configuration file is stillrequired to identify the Monitors nodes

Placement groups can now be auto-scaled

Red Hat Ceph Storage 4 introduces the ability to auto-scale placement groups (PGs) The number ofplacement groups (PGs) in a pool plays a significant role in how a cluster peers distributes data andrebalances Auto-scaling the number of PGs can make managing the cluster easier The new pg-autoscaling command provides recommendations for scaling PGs or automatically scales PGs basedon how the cluster is being used For more details about auto-scaling PGs see the Auto-scalingplacement groups section in the Storage Strategies Guide for Red Hat Ceph Storage 4

Introduction of diskprediction module

The Red Hat Ceph Storage diskprediction module gathers metrics to predict disk failures before theyhappen The module has two modes cloud and local With this release only the local mode is supportedThe local mode does not require any external server for data analysis It uses an internal predictormodule for disk prediction service and then returns the disk prediction result to the Ceph system

CHAPTER 3 NEW FEATURES

9

To enable the diskprediction module

ceph mgr module enable diskprediction_local

To set the prediction mode

ceph config set global device_failure_prediction_mode local

To disable the diskprediction module

ceph config set global device_failure_prediction_mode none

New configurable option mon_memory_target

Red Hat Ceph Storage 4 introduces a new configurable option mon_memory_target used to set thetarget amount of bytes for Monitor memory usage It specifies the amount of memory to allocate andmanage using the priority cache tuner for the associated Monitor daemon caches The default value of mon_memory_target is set to 2 GiB and you can change it during runtime with

ceph config set global mon_memory_target size

Prior to this release as a cluster scaled the Monitor specific RSS usage exceeded the limits that wereset using the mon_osd_cache_size option which led to issues This enhancement allows for improvedmanagement of memory allocated to the monitor caches and keeps the usage within specified limits

39 BLOCK DEVICES (RBD)

Erasure coding for Ceph Block Device

Erasure coding for Ceph Block Device (RBD) is now fully supported This feature allows RBD to storetheir data in an erasure coded pool For details see the Erasure Coding with Overwrites section in theStorage Strategies for Red Hat Ceph Storage 4

RBD performance monitoring and metrics gathering tools

Red Hat Ceph Storage 4 now incorporates new Ceph Block Device performance monitoring utilities foraggregated RBD image metrics for IOPS throughput and latency Per-image RBD metrics are nowavailable using the Ceph Manager Prometheus module the Ceph Dashboard and the rbd CLI using the rbd perf image iostat or rbd perf image iotop commands

Cloned images can be created from non-primary images

Creating cloned child RBD images from mirrored non-primary parent image is now supportedPreviously cloning of mirrored images was only supported for primary images When cloning goldenimages for virtual machines this restriction prevented the creation of new cloned images from thegolden non-primary image This update removes this restriction and cloned images can be created fromnon-primary mirrored images

Segregating RBD images within isolated namespaces within the same pool

RBD images can now be segregated within isolated namespaces within the same pool When using CephBlock Devices directly without a higher-level system such as OpenStack or OpenShift ContainerStorage it was not possible to restrict user access to specific RBD images When combined with CephXcapabilities users can be restricted to specific pool namespaces to restrict access to RBD images

Red Hat Ceph Storage 40 Release Notes

10

Moving RBD images between different pools within the same cluster

This version of Red Hat Ceph Storage adds the ability to move RBD images between different poolswithin the same cluster For details see the Moving images between pools section in the Block DeviceGuide for Red Hat Ceph Storage 4

Long-running RBD operations can run in the background

Long-running RBD operations such as image removal or cloned image flattening can now bescheduled to run in the background RBD operations that involve iterating over every backing RADOSobject for the image can take a long time depending on the size of the image When using the CLI toperform one of these operations the rbd CLI is blocked until the operation is complete Theseoperations can now be scheduled to run by the Ceph Manager as a background task by using the ceph rbd task add commands The progress of these tasks is visible on the Ceph dashboard as well as byusing the CLI

310 RBD MIRRORING

Support for multiple active instances of RBD mirror daemon in a single storage cluster

Red Hat Ceph Storage 4 now supports deploying multiple active instances of the RBD mirror daemon ina single storage cluster This enables multiple RBD mirror daemons to perform replication for the RBDimages or pools using an algorithm for chunking the images across the number of active mirroringdaemons

CHAPTER 3 NEW FEATURES

11

CHAPTER 4 BUG FIXESThis section describes bugs with significant impact on users that were fixed in this release of Red HatCeph Storage In addition the section includes descriptions of fixed known issues found in previousversions

41 THE CEPH-ANSIBLE UTILITY

The Ansible playbook no longer takes hours to complete the fuser command

Previously on a system running thousands of processes the fuser command in the Ansible playbookcould take several minutes or hours to complete because it iterates over all PID present in the procdirectory Because of this the handler tasks take a long time to complete checking if a Ceph process isalready running With this update instead of using the fuser command the Ansible playbook now checksthe socket file in the procnetunix directory and the handler tasks that check the Ceph socketcompletes almost instantly

(BZ1717011)

The purge-docker-clusteryml Ansible playbook no longer fails

Previously the purge-docker-clusteryml Ansible playbook could fail when trying to unmap RADOSBlock Devices (RBDs) because either the binary was absent or because the Atomic host versionprovided was too old With this update Ansible now uses the sysfs method to unmap devices if thereare any and the purge-docker-clusteryml playbook no longer fails

(BZ1766064)

42 OBJECT GATEWAY

The Expiration Days S3 Lifecycle parameter can now be set to 0

The Ceph Object Gateway did not accept the value of 0 for the Expiration Days Lifecycleconfiguration parameter Consequently setting the expiration to 0 could not be used to triggerbackground delete operation of objects With this update Expiration Days can be set to 0 as expected

(BZ1493476)

Archive zone fetches the current version of the source objects

An object is synced from multiple source zones to archive zone This behavior can lead to differentversions of the same object on the archive zone This update eliminates duplicate versions from gettingcreated by ensuring that the archive zone fetches the current version of the source object

(BZ1760862)

43 RADOS

A message to set expected_num_objects is no longer shown when using BlueStore

With this update a message that recommends setting the expected_num_obejcts parameter duringthe creation of the BlueStore pools has been removed because this message does not apply when usingthe BlueStore OSD back end

(BZ1650922)

Red Hat Ceph Storage 40 Release Notes

12

Deprecated JSON fields have been removed

This update removes deprecated fields from the JSON output of the ceph status command

(BZ1739233)

CHAPTER 4 BUG FIXES

13

CHAPTER 5 TECHNOLOGY PREVIEWSThis section provides an overview of Technology Preview features introduced or updated in this releaseof Red Hat Ceph Storage

IMPORTANT

Technology Preview features are not supported with Red Hat production service levelagreements (SLAs) might not be functionally complete and Red Hat does notrecommend to use them for production These features provide early access toupcoming product features enabling customers to test functionality and providefeedback during the development process

For more information on Red Hat Technology Preview features support scope seehttpsaccessredhatcomsupportofferingstechpreview

51 CEPH FILE SYSTEM

CephFS snapshots

Ceph File System (CephFS) supports taking snapshots as a Technology Preview Snapshots create animmutable view of the file system at the point in time they are taken

52 BLOCK DEVICES (RBD)

Mapping RBD images to NBD images

The rbd-nbd utility maps RADOS Block Device (RBD) images to Network Block Devices (NBD) andenables Ceph clients to access volumes and images in Kubernetes environments To use rbd-nbd installthe rbd-nbd package For details see the rbd-nbd(7) manual page

53 OBJECT GATEWAY

Object Gateway archive site

With this release an archive site is supported as a Technology Preview The archive site allows you tohave a history of versions of S3 objects that can only be eliminated through the gateways associatedwith the archive zone Including an archive zone in a multizone configuration allows you to have theflexibility of an S3 object history in only one zone while saving the space that the replicas of the versionsS3 objects would consume in the rest of the zones

Tiering within a cluster by disk type

This release adds the ability to tier within a cluster by disk type as a Technology Preview by using thecapabilities for mapping for example pools to placement targets and storage classes Using lifecycletransition rules it is possible to cause objects to migrate between storage pools based on policy

S3 bucket notifications

S3 bucket notifications are now supported as a Technology Preview When certain events are triggeredon an S3 bucket the notifications can be sent from the Ceph Object Gateway to HTTP AdvancedMessage Queuing Protocol (AMQP) 91 and Kafka endpoints Additionally the notifications can bestored in a ldquoPubSubrdquo zone instead of or in addition to sending them to the endpoints ldquoPubSubrdquo is apublish-subscribe model that enables recipients to pull notifications from Ceph

Red Hat Ceph Storage 40 Release Notes

14

To use the S3 notifications install the librabbitmq and librdkafka packages

CHAPTER 5 TECHNOLOGY PREVIEWS

15

CHAPTER 6 KNOWN ISSUESThis section documents known issues found in this release of Red Hat Ceph Storage

61 THE CEPH-ANSIBLE UTILITY

Red Hat Ceph Storage installation on Red Hat OpenStack Platform fails

The ceph-ansible utility becomes unresponsive when attempting to install Red Hat Ceph Storagetogether with the Red Hat OpenStack Platform 16 and it returns an error similar to the following one

Error unable to exec into ceph-mon-dcn1-computehci1-2 no container with name or ID ceph-mon-dcn1-computehci1-2 found no such container

To work around this issue update the following part of the handler_osdsyml file located in the ceph-ansiblerolesceph-handlertasks directory

- name unset noup flag command container_exec_cmd | default() ceph --cluster cluster osd unset noup delegate_to groups[mon_group_name][0] changed_when False

To

- name unset noup flag command hostvars[groups[mon_group_name][0]][container_exec_cmd] | default() ceph --cluster cluster osd unset noup delegate_to groups[mon_group_name][0] changed_when False

And start the installation process again

(BZ1792320)

Ansible does not unset the norebalance flag after it completes

The rolling-updateyml Ansible playbook does not unset the norebalance flag after it completes Towork around this issue unset the flag manually

(BZ1793564)

Ansible fails to upgrade a multisite Ceph Object Gateway when the Dashboard is enabled

When the Red Hat Ceph Storage Dashboard is enabled an attempt to use Ansible to upgrade to afurther version of Red Hat Ceph Storage fails when trying to upgrade the secondary Ceph ObjectGateway site in a multisite setup This bug does not occur on the primary site or if the Dashboard is notenabled

(BZ1794351)

62 CEPH MANAGEMENT DASHBOARD

The Dashboard does not show correct values for certain configuration options

Both the Red Hat Ceph Storage Dashboard and the underlying ceph config show command do not

Red Hat Ceph Storage 40 Release Notes

16

return the current values for certain configuration options such as fsid This is probably because certaincore options that are not meant for further modification after deploying a cluster are not updated and adefault value is used As a result the Dashboard does not show correct values for certain configurationoptions

(BZ1765493 BZ1772310)

NFS Ganesha in Dashboard

The Red Hat Ceph Storage Dashboard currently does not support managing NFS Ganesha

(BZ1772316)

Dashboard does not support email verification

The Red Hat Ceph Storage Dashboard does not support any email verification when changing a userspassword This behavior is intentional because the Dashboard supports Single Sign-On (SSO) and thisfeature can be delegated to the SSO provider

(BZ1778608)

The OSD histogram graph for read and write operations is not clear

The Red Hat Ceph Storage Dashboard does not display any numbers or description in the OSDhistogram graph for read and write operations and therefore the graph is not clear

(BZ1779170)

Dashboard returns an error when a ceph-mgr module is enabled from the ceph CLI

When enabling a Ceph Manager (ceph-mgr) module such as telemetry from the ceph CLI the Red HatCeph Storage Dashboard displays the following error message

0 - Unknown Error

In addition the Dashboard does not mark the module as enabled until the Refresh button is clicked

(BZ1785077)

The Dashboard allows modifying the LUN ID and WWN which can lead to data corruption

The Red Hat Ceph Storage Dashboard allows you to modify the LUN ID and its World Wide Name(WWN) which is not required after creating the LUN Moreover editing these parameters can bedangerous for certain initiators that do not fully support this feature by default Consequently editingthese parameters after creating them can lead to data corruption To avoid this do not modify the LUNID and WWN in the Dashboard

(BZ1786455)

The Dashboard does not provide correct Ceph iSCSI error messages

If the Ceph iSCSI returns an error for example the HTTP 400 code when trying to delete an iSCSItarget while a user is logged in the Red Hat Ceph Storage Dashboard does not forward that error codeand message to the Dashboard user using the pop-up notifications but displays a generic 500 InternalServer Error Consequently the message that the Dashboard provides is not informative and evenmisleading an expected behavior (users cannot delete a busy resource) is perceived as an operationalfailure (internal server error) To work around this issue see the Dashboard logs

(BZ1786457)

CHAPTER 6 KNOWN ISSUES

17

The Dashboard requires to disable iptables rules

The Red Hat Ceph Storage Dashboard is unable to perform any iSCSI operations such as creating agateway unless all iptables rules are manually disabled on the Ceph iSCSI node To do so use thefollowing command as root or the sudo user

iptables -F

Note that after a reboot the rules are enabled again Either disable them again or delete thempermanently

(BZ1792818)

63 PACKAGES

Current version of Grafana causes certain bugs in the Dashboard

Red Hat Ceph Storage 4 uses the Grafana version 524 This version causes the following bugs in theRed Hat Ceph Storage Dashboard

When navigating to Pools gt Overall Performance Grafana returns the following error

TypeError lc[ttype] is undefinedtrue

When viewing a poolrsquos performance details (Pools gt select a pool from the list gt PerformanceDetails) the Grafana bar is displayed along with other graphs and values but it should not bethere

These bugs will be fixed after rebasing to a newer Grafana version in a future release of Red HatCeph Storage

(BZ1786107 BZ1762197 BZ1765536)

64 RED HAT ENTERPRISE LINUX

Ansible cannot start NFS Ganesha if SELinux is in enforcing mode

When using SELinux in enforcing mode on Red Hat Enterprise Linux 81 the ceph-ansible utility fails tostart the NFS Ganesha service because SELinux policy currently does not allow creating a directoryrequired for NFS Ganesha

(BZ1794027 BZ1796160)

Red Hat Ceph Storage 40 Release Notes

18

CHAPTER 7 DEPRECATED FUNCTIONALITYThis section provides an overview of functionality that has been deprecated in all minor releases up tothis release of Red Hat Ceph Storage

Ubuntu is no longer supported

Installing a Red Hat Ceph Storage 4 cluster on Ubuntu is no longer supported Use Red HatEnterprise Linux as the underlying operating system

71 THE CEPH-ANSIBLE UTILITY

Configuring iSCSI gateway using ceph-ansible is no longer supported

Configuring the Ceph iSCSI gateway by using the ceph-ansible utility is no longer supported Use ceph-ansible to install the gateway and then use either the gwcli utility of the Red Hat Ceph StorageDashboard to configure the gateway For details see the Using the Ceph iSCSI Gateway chapter in theBlock Device Guide for Red Hat Ceph Storage 4

72 THE CEPH-DISK UTILITY

ceph-disk is deprecated

With this release the ceph-disk utility is no longer supported The ceph-volume utility is used insteadFor details see the Why does ceph-volume replace `ceph-disk` section in the Administration Guide forRed Hat Ceph Storage 4

73 RADOS

FileStore is no longer supported in production

The FileStore OSD back end is now deprecated because the new BlueStore back end is now fullysupported in production For details see the How to migrate the object store from FileStore toBlueStore section in the Red Hat Ceph Storage 4 Installation Guide

Ceph configuration is now deprecated

The Ceph configuration file (cephconf) is now deprecated in favor of new centralized configurationstored in Monitors For details see the Configuration is now stored in Monitors accessible by using `cephconfig` release note

CHAPTER 7 DEPRECATED FUNCTIONALITY

19

CHAPTER 8 SOURCESThe updated Red Hat Ceph Storage source code packages are available athttpftpredhatcomredhatlinuxenterprise7ServerenRHCEPHSRPMS

Red Hat Ceph Storage 40 Release Notes

20

  • Table of Contents
  • CHAPTER 1 INTRODUCTION
  • CHAPTER 2 ACKNOWLEDGMENTS
  • CHAPTER 3 NEW FEATURES
    • 31 THE CEPH-ANSIBLE UTILITY
    • 32 CEPH MANAGEMENT DASHBOARD
    • 33 CEPH FILE SYSTEM
    • 34 CEPH MEDIC
    • 35 ISCSI GATEWAY
    • 36 OBJECT GATEWAY
    • 37 PACKAGES
    • 38 RADOS
    • 39 BLOCK DEVICES (RBD)
    • 310 RBD MIRRORING
      • CHAPTER 4 BUG FIXES
        • 41 THE CEPH-ANSIBLE UTILITY
        • 42 OBJECT GATEWAY
        • 43 RADOS
          • CHAPTER 5 TECHNOLOGY PREVIEWS
            • 51 CEPH FILE SYSTEM
            • 52 BLOCK DEVICES (RBD)
            • 53 OBJECT GATEWAY
              • CHAPTER 6 KNOWN ISSUES
                • 61 THE CEPH-ANSIBLE UTILITY
                • 62 CEPH MANAGEMENT DASHBOARD
                • 63 PACKAGES
                • 64 RED HAT ENTERPRISE LINUX
                  • CHAPTER 7 DEPRECATED FUNCTIONALITY
                    • 71 THE CEPH-ANSIBLE UTILITY
                    • 72 THE CEPH-DISK UTILITY
                    • 73 RADOS
                      • CHAPTER 8 SOURCES
Page 4: Red Hat Ceph Storage 4...May 20, 2020  · With this release, the Ceph dashboard installation code was merged into the Ceph Ansible playbooks. Ceph Ansible does a containerized deployment

Table of Contents

CHAPTER 1 INTRODUCTION

CHAPTER 2 ACKNOWLEDGMENTS

CHAPTER 3 NEW FEATURES31 THE CEPH-ANSIBLE UTILITY32 CEPH MANAGEMENT DASHBOARD33 CEPH FILE SYSTEM34 CEPH MEDIC35 ISCSI GATEWAY36 OBJECT GATEWAY37 PACKAGES38 RADOS39 BLOCK DEVICES (RBD)310 RBD MIRRORING

CHAPTER 4 BUG FIXES41 THE CEPH-ANSIBLE UTILITY42 OBJECT GATEWAY43 RADOS

CHAPTER 5 TECHNOLOGY PREVIEWS51 CEPH FILE SYSTEM52 BLOCK DEVICES (RBD)53 OBJECT GATEWAY

CHAPTER 6 KNOWN ISSUES61 THE CEPH-ANSIBLE UTILITY62 CEPH MANAGEMENT DASHBOARD63 PACKAGES64 RED HAT ENTERPRISE LINUX

CHAPTER 7 DEPRECATED FUNCTIONALITY71 THE CEPH-ANSIBLE UTILITY72 THE CEPH-DISK UTILITY73 RADOS

CHAPTER 8 SOURCES

3

4

556777888

1011

12121212

14141414

1616161818

19191919

20

Table of Contents

1

Red Hat Ceph Storage 40 Release Notes

2

CHAPTER 1 INTRODUCTIONRed Hat Ceph Storage is a massively scalable open software-defined storage platform that combinesthe most stable version of the Ceph storage system with a Ceph management platform deploymentutilities and support services

The Red Hat Ceph Storage documentation is available athttpsaccessredhatcomdocumentationenred-hat-ceph-storage

CHAPTER 1 INTRODUCTION

3

CHAPTER 2 ACKNOWLEDGMENTSRed Hat Ceph Storage 40 contains many contributions from the Red Hat Ceph Storage teamAdditionally the Ceph project is seeing amazing growth in the quality and quantity of contributions fromindividuals and organizations in the Ceph community We would like to thank all members of the Red HatCeph Storage team all of the individual contributors in the Ceph community and additionally (but notlimited to) the contributions from organizations such as

Intel

Fujitsu

UnitedStack

Yahoo

UbuntuKylin

Mellanox

CERN

Deutsche Telekom

Mirantis

SanDisk

SUSE

Red Hat Ceph Storage 40 Release Notes

4

CHAPTER 3 NEW FEATURESThis section lists all major updates enhancements and new features introduced in this release of RedHat Ceph Storage

The main features added by this release are

General availability of the BlueStore OSD back end

Ceph Management Dashboard

Ceph Object Gateway Beast Asio web server

Erasure coding for Ceph Block Device

Auto-scaling placement groups

Web-based installation interface

On-wire encryption

RBD performance monitoring and metrics gathering tools

Red Hat Enterprise Linux in FIPS mode

31 THE CEPH-ANSIBLE UTILITY

Ceph OSDs created with ceph-disk are migrated to ceph-volume during upgrade

When upgrading to Red Hat Ceph Storage 4 all running Ceph OSDs previously created by the ceph-disk utility will be migrated to the ceph-volume utility because ceph-disk has been deprecated in thisrelease

For bare-metal and container deployments of Red Hat Ceph Storage the ceph-volume utility does asimple scan and takes over the existing Ceph OSDs deployed by the ceph-disk utility Also do not usethese migrated devices in configurations for subsequent deployments Note that you cannot create anynew Ceph OSDs during the upgrade process

After the upgrade all Ceph OSDs created by ceph-disk will start and operate like any Ceph OSDscreated by ceph-volume

Ansible playbooks for scaling of all Ceph services

Previously ceph-ansible playbooks offered limited scale up and scale down features only to core Cephproducts such as Monitors and OSDs With this update additional Ansible playbooks allow for scaling ofall Ceph services

Ceph iSCSI packages merged into single package

The ceph-iscsi-cli and ceph-iscsi-config packages have been merged to one package named ceph-iscsi

The nfs-ganesha service is now supported as a standalone deployment

Red Hat Openstack Directory (OSPd) requires the ceph-ansible utility to be able to deploy the nfs-ganesha service and configure it so that it points to an external unmanaged pre-existing Ceph clusterAs of Red Hat Ceph Storage 4 ceph-ansible allows the deployment of an internal nfs-ganesha service

CHAPTER 3 NEW FEATURES

5

with an external Ceph cluster

Ceph container can now write logs to a respective daemon file

The previous way of logging for containerized Ceph environments did not allow limiting the journalctloutput when looking at log data by using the sosreport collection With this release logging can beenabled or disabled for a particular Ceph daemon with the following command

ceph config set daemonid log_to_file true

Where daemon is the type of the daemon and id is its ID For example to enable logging for the Monitordaemon with ID mon0

ceph config set monmon0 log_to_file true

This new feature makes debugging easier

Ability to configure Ceph Object Gateway to use TLS encryption

This release of Red Hat Ceph Storage provides the ability to configure the Ceph Object Gatewaylistener with an SSL certificate for TLS encryption by using the radosgw_frontend_ssl_certificatevariable to secure the Transmission Control Protocol (TCP) traffic

Ansible playbook for migrating OSDs from FileStore to BlueStore

A new Ansible playbook has been added to migrate OSDs from FileStore to BlueStore The object storemigration is not done as part of the upgrade process to Red Hat Ceph Storage 4 Do the migration afterthe upgrade completes For details see the How to migrate the object store from FileStore to BlueStoresection in the Red Hat Ceph Storage Administration Guide

32 CEPH MANAGEMENT DASHBOARD

Improvements to information for pool usage

With this enhancement valuable information was added to the pools table The following columns wereadded usage read bytes write bytes read operations and write operations Also the PlacementGroups column was renamed to Pg Status

Red Hat Ceph Storage Dashboard alerts

Red Hat Ceph Storage Dashboard supports alerts based on Ceph metrics and configured thresholdsThe Prometheus AlertManager configures gathers and triggers the alerts The alerts are displayed inthe Dashboard as a pop-up notifications in the upper-right corner You can view details of recent alertsin Cluster gt Alerts You can configure the alerts only in Prometheus but you can temporarily mute themfrom the Dashboard by creating Alert Silences in Cluster gt Silences

Displaying and hiding Ceph components in Dashboard

In the Red Hat Ceph Storage Dashboard, you can display or hide Ceph components such as Ceph iSCSI, RBD mirroring, Ceph Block Devices, Ceph File System, or Ceph Object Gateway. This feature allows you to hide components that are not configured.

Ceph dashboard has been added to the Ceph Ansible playbooks

With this release, the Ceph dashboard installation code was merged into the Ceph Ansible playbooks. Ceph Ansible does a containerized deployment of the Ceph dashboard regardless of the Red Hat Ceph Storage deployment type, bare metal or containers. These four new roles were added: ceph-grafana, ceph-dashboard, ceph-prometheus, and ceph-node-exporter.

Viewing the cluster hierarchy from the Dashboard

Red Hat Ceph Storage Dashboard provides the ability to view the cluster hierarchy. For details, see the Viewing the CRUSH map section in the Dashboard Guide for Red Hat Ceph Storage 4.

33 CEPH FILE SYSTEM

ceph -w now shows information about CephFS scrubs

Previously, it was not possible to check the ongoing Ceph File System (CephFS) scrubs status, aside from checking the Metadata Server (MDS) logs. With this update, the ceph -w command shows information about active CephFS scrubs to better understand the status.

ceph-mgr volumes module for managing CephFS exports

This release provides the Ceph Manager (ceph-mgr) volumes module to manage Ceph File System (CephFS) exports. The volumes module implements the following file system export abstractions:

FS volumes, an abstraction for CephFS file systems

FS subvolumes, an abstraction for independent CephFS directory trees

FS subvolume groups, an abstraction for a directory higher than FS subvolumes, to effect policies such as file layouts across a set of subvolumes

In addition, these new commands are now supported (example invocations follow this list):

fs subvolume ls, for listing subvolumes

fs subvolumegroup ls, for listing subvolume groups

fs subvolume snapshot ls, for listing subvolume snapshots

fs subvolumegroup snapshot ls, for listing subvolume group snapshots

fs subvolume rm, for removing subvolumes
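
The following is a minimal sketch of these commands; the volume, group, subvolume, and snapshot names are placeholders, not values from this document:

# cephfs, group0, subvol0, and snap0 are placeholder names
ceph fs subvolumegroup create cephfs group0
ceph fs subvolume create cephfs subvol0 --group_name group0
ceph fs subvolume ls cephfs --group_name group0
ceph fs subvolume snapshot create cephfs subvol0 snap0 --group_name group0
ceph fs subvolume snapshot ls cephfs subvol0 --group_name group0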

34 CEPH MEDIC

ceph-medic can check the health of Ceph running in containers

With this release, the ceph-medic utility can now check the health of a Red Hat Ceph Storage cluster running within a container.
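
A minimal usage sketch, assuming the cluster hosts are already listed in an Ansible-style inventory file (ceph-medic reads /etc/ansible/hosts by default):

ceph-medic check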

35 ISCSI GATEWAY

Non-administrators can now be used for the ceph-iscsi service

As of Red Hat Ceph Storage 4, non-administrative Ceph users may be used for the ceph-iscsi service by setting the cluster_client_name in the /etc/ceph/iscsi-gateway.cfg file on all iSCSI Gateways. This allows resources to be restricted based on users.
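
A minimal sketch of the relevant part of /etc/ceph/iscsi-gateway.cfg; the user name client.iscsi is a placeholder, not a value from this document:

[config]
# client.iscsi is a placeholder non-administrative user name
cluster_client_name = client.iscsi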

Running Ceph iSCSI Gateways can now be removed

As of Red Hat Ceph Storage 4, running iSCSI Gateways can now be removed from a Ceph iSCSI cluster for maintenance or to reallocate resources. The gateway's iSCSI target and its portals will be stopped, and all iSCSI target objects for that gateway will be removed from the kernel and the gateway configuration. Removing gateways that are down is not yet supported.

36 OBJECT GATEWAY

The Beast HTTP front end

In Red Hat Ceph Storage 4, the default HTTP front end for the Ceph Object Gateway is Beast. The Beast front end uses the Boost.Beast library for HTTP parsing and the Boost.Asio library for asynchronous I/O. For details, see the Using the Beast front end section in the Object Gateway Configuration and Administration Guide for Red Hat Ceph Storage 4.

Support for S3 MFA-Delete

With this release, the Ceph Object Gateway supports S3 MFA-Delete using Time-based One-Time Password (TOTP) one-time passwords as an authentication factor. This feature adds security against inappropriate data removal. You can configure buckets to require a TOTP one-time token in addition to standard S3 authentication to delete data.
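
On the gateway side, a TOTP token can be associated with a user by using radosgw-admin; the user ID, serial, and seed below are placeholders for illustration only:

radosgw-admin mfa create --uid=johndoe --totp-serial=MFAtest --totp-seed=23456723   # placeholder user, serial, and seed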

Addition of storage classes and lifecycle transitions to the Ceph Object Gateway

The Ceph Object Gateway now provides support for storage classes, an S3 compatible representation of the Ceph Object Gateway's underlying placement targets, and for lifecycle transitions, a mechanism to migrate objects between classes. Storage classes and lifecycle transitions provide a higher level of control over data placement for the applications that need it. They have many potential uses, including performance tuning for specific workloads, as well as automatic migration of inactive data to cold storage.

Users can now create new IAM policies and roles using REST APIs

With the release of Red Hat Ceph Storage 4, REST APIs for IAM roles and user policies are now available in the same namespace as S3 APIs and can be accessed using the same endpoint as S3 APIs in the Ceph Object Gateway. This allows end users to create new IAM policies and roles using REST APIs.

37 PACKAGES

Ability to install a Ceph cluster using a web-based interface

With this release, the Cockpit web-based interface is supported. Cockpit allows you to install a Red Hat Ceph Storage 4 cluster and other components, such as Metadata Servers, Ceph clients, or the Ceph Object Gateway, on bare metal or in containers. For details, see the Installing Red Hat Ceph Storage using the Cockpit Web User Interface chapter in the Red Hat Ceph Storage 4 Installation Guide. Note that minimal experience with Red Hat Ceph Storage is required.

38 RADOS

Ceph on-wire encryption

Starting with Red Hat Ceph Storage 4, you can enable encryption for all Ceph traffic over the network with the introduction of the messenger version 2 protocol. For details, see the Ceph on-wire encryption chapter in the Architecture Guide and the Encryption in transit section in the Data Security and Hardening Guide for Red Hat Ceph Storage 4.

OSD BlueStore is now fully supported

BlueStore is a new back end for the OSD daemons that allows for storing objects directly on the block devices. Because BlueStore does not need any file system interface, it improves performance of Ceph storage clusters. To learn more about the BlueStore OSD back end, see the OSD BlueStore chapter in the Administration Guide for Red Hat Ceph Storage 4.

Red Hat Enterprise Linux in FIPS mode

With this release, you can install Red Hat Ceph Storage on Red Hat Enterprise Linux where the FIPS mode is enabled.

Changes to the ceph df output, and a new ceph osd df command

The output of the ceph df command has been improved. Notably, the RAW USED and %RAW USED values now show the preallocated space for the db and wal BlueStore partitions. The ceph osd df command shows the OSD utilization stats, such as the amount of written data.

Asynchronous recovery for non-acting OSD sets

Previously, recovery with Ceph was a synchronous process that blocked write operations to objects until those objects were recovered. In this release, the recovery process is now asynchronous, as it does not block write operations to objects only in the non-acting set of OSDs. This new feature requires having more than the minimum number of replicas, so as to have enough OSDs in the non-acting set.

The new configuration option osd_async_recovery_min_cost controls how much asynchronous recovery to do. The default value for this option is 100. A higher value means less asynchronous recovery, whereas a lower value means more asynchronous recovery.

Configuration is now stored in Monitors accessible by using ceph config

In this release, Red Hat Ceph Storage centralizes configuration in Monitors instead of using the Ceph configuration file (ceph.conf). Previously, changing configuration included manually updating ceph.conf, distributing it to appropriate nodes, and restarting all affected daemons. Now, the Monitors manage a configuration database that has the same semantic structure as ceph.conf. The database is accessible by the ceph config command. Any changes to the configuration are applied to daemons or clients in the system immediately, and restarting them is no longer needed. Use the ceph config -h command for details on the available set of commands. Note that a Ceph configuration file is still required to identify the Monitor nodes.
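
A minimal sketch of working with the centralized configuration database; the option and daemon names below are illustrative examples, not values from this document:

ceph config dump
ceph config get osd.0 osd_recovery_max_active   # osd.0 and the option name are example values
ceph config set osd osd_recovery_max_active 4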

Placement groups can now be auto-scaled

Red Hat Ceph Storage 4 introduces the ability to auto-scale placement groups (PGs). The number of placement groups (PGs) in a pool plays a significant role in how a cluster peers, distributes data, and rebalances. Auto-scaling the number of PGs can make managing the cluster easier. The new pg-autoscaling command provides recommendations for scaling PGs, or automatically scales PGs based on how the cluster is being used. For more details about auto-scaling PGs, see the Auto-scaling placement groups section in the Storage Strategies Guide for Red Hat Ceph Storage 4.
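
A minimal sketch, assuming the feature is provided by the pg_autoscaler Ceph Manager module; the pool name is a placeholder:

ceph mgr module enable pg_autoscaler
ceph osd pool set mypool pg_autoscale_mode on   # mypool is a placeholder pool name
ceph osd pool autoscale-status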

Introduction of diskprediction module

The Red Hat Ceph Storage diskprediction module gathers metrics to predict disk failures before they happen. The module has two modes, cloud and local. With this release, only the local mode is supported. The local mode does not require any external server for data analysis. It uses an internal predictor module for the disk prediction service and then returns the disk prediction result to the Ceph system.

To enable the diskprediction module:

ceph mgr module enable diskprediction_local

To set the prediction mode:

ceph config set global device_failure_prediction_mode local

To disable the diskprediction module:

ceph config set global device_failure_prediction_mode none

New configurable option mon_memory_target

Red Hat Ceph Storage 4 introduces a new configurable option, mon_memory_target, used to set the target amount of bytes for Monitor memory usage. It specifies the amount of memory to allocate and manage using the priority cache tuner for the associated Monitor daemon caches. The default value of mon_memory_target is set to 2 GiB, and you can change it during runtime with:

ceph config set global mon_memory_target <size>

Prior to this release, as a cluster scaled, the Monitor specific RSS usage exceeded the limits that were set using the mon_osd_cache_size option, which led to issues. This enhancement allows for improved management of memory allocated to the monitor caches and keeps the usage within specified limits.

39 BLOCK DEVICES (RBD)

Erasure coding for Ceph Block Device

Erasure coding for Ceph Block Devices (RBD) is now fully supported. This feature allows RBD images to store their data in an erasure-coded pool. For details, see the Erasure Coding with Overwrites section in the Storage Strategies guide for Red Hat Ceph Storage 4.

RBD performance monitoring and metrics gathering tools

Red Hat Ceph Storage 4 now incorporates new Ceph Block Device performance monitoring utilities for aggregated RBD image metrics for IOPS, throughput, and latency. Per-image RBD metrics are now available using the Ceph Manager Prometheus module, the Ceph Dashboard, and the rbd CLI using the rbd perf image iostat or rbd perf image iotop commands.
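
A minimal sketch of the rbd CLI side; the pool name is a placeholder, not a value from this document:

rbd perf image iostat mypool   # mypool is a placeholder pool name
rbd perf image iotop mypool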

Cloned images can be created from non-primary images

Creating cloned child RBD images from mirrored non-primary parent images is now supported. Previously, cloning of mirrored images was only supported for primary images. When cloning golden images for virtual machines, this restriction prevented the creation of new cloned images from the golden non-primary image. This update removes this restriction, and cloned images can be created from non-primary mirrored images.

Segregating RBD images within isolated namespaces within the same pool

RBD images can now be segregated within isolated namespaces within the same pool. When using Ceph Block Devices directly, without a higher-level system such as OpenStack or OpenShift Container Storage, it was not possible to restrict user access to specific RBD images. When combined with CephX capabilities, users can be restricted to specific pool namespaces to restrict access to RBD images.
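
A minimal sketch, assuming a pool named mypool and a namespace named project1 (both placeholders), with a CephX user restricted to that namespace:

# mypool, project1, image1, and client.project1 are placeholder names
rbd namespace create --pool mypool --namespace project1
rbd create --size 10G mypool/project1/image1
ceph auth get-or-create client.project1 mon 'profile rbd' osd 'profile rbd pool=mypool namespace=project1'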

Moving RBD images between different pools within the same cluster

This version of Red Hat Ceph Storage adds the ability to move RBD images between different pools within the same cluster. For details, see the Moving images between pools section in the Block Device Guide for Red Hat Ceph Storage 4.
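
A minimal sketch using the rbd migration commands, which is one way to perform the move; the pool and image names are placeholders, not values from this document:

# sourcepool, targetpool, and image1 are placeholder names
rbd migration prepare sourcepool/image1 targetpool/image1
rbd migration execute targetpool/image1
rbd migration commit targetpool/image1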

Long-running RBD operations can run in the background

Long-running RBD operations, such as image removal or cloned image flattening, can now be scheduled to run in the background. RBD operations that involve iterating over every backing RADOS object for the image can take a long time, depending on the size of the image. When using the CLI to perform one of these operations, the rbd CLI is blocked until the operation is complete. These operations can now be scheduled to run by the Ceph Manager as a background task by using the ceph rbd task add commands. The progress of these tasks is visible on the Ceph dashboard as well as by using the CLI.
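
A minimal sketch of scheduling and listing such background tasks; the pool and image names are placeholders, not values from this document:

ceph rbd task add remove mypool/old-image      # mypool and the image names are placeholders
ceph rbd task add flatten mypool/cloned-image
ceph rbd task list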

310 RBD MIRRORING

Support for multiple active instances of RBD mirror daemon in a single storage cluster

Red Hat Ceph Storage 4 now supports deploying multiple active instances of the RBD mirror daemon in a single storage cluster. This enables multiple RBD mirror daemons to perform replication for the RBD images or pools, using an algorithm for chunking the images across the number of active mirroring daemons.

CHAPTER 4 BUG FIXES

This section describes bugs with significant impact on users that were fixed in this release of Red Hat Ceph Storage. In addition, the section includes descriptions of fixed known issues found in previous versions.

41 THE CEPH-ANSIBLE UTILITY

The Ansible playbook no longer takes hours to complete the fuser command

Previously, on a system running thousands of processes, the fuser command in the Ansible playbook could take several minutes or hours to complete, because it iterates over all PIDs present in the /proc directory. Because of this, the handler tasks took a long time to complete checking whether a Ceph process is already running. With this update, instead of using the fuser command, the Ansible playbook now checks the socket file in the /proc/net/unix directory, and the handler tasks that check the Ceph socket complete almost instantly.

(BZ1717011)

The purge-docker-cluster.yml Ansible playbook no longer fails

Previously, the purge-docker-cluster.yml Ansible playbook could fail when trying to unmap RADOS Block Devices (RBDs), because either the binary was absent or because the Atomic host version provided was too old. With this update, Ansible now uses the sysfs method to unmap devices if there are any, and the purge-docker-cluster.yml playbook no longer fails.

(BZ1766064)

42 OBJECT GATEWAY

The Expiration Days S3 Lifecycle parameter can now be set to 0

The Ceph Object Gateway did not accept the value of 0 for the Expiration Days Lifecycle configuration parameter. Consequently, setting the expiration to 0 could not be used to trigger the background delete operation of objects. With this update, Expiration Days can be set to 0 as expected.

(BZ1493476)

Archive zone fetches the current version of the source objects

An object is synced from multiple source zones to the archive zone. This behavior can lead to different versions of the same object on the archive zone. This update eliminates duplicate versions from getting created by ensuring that the archive zone fetches the current version of the source object.

(BZ1760862)

43 RADOS

A message to set expected_num_objects is no longer shown when using BlueStore

With this update, a message that recommends setting the expected_num_objects parameter during the creation of the BlueStore pools has been removed, because this message does not apply when using the BlueStore OSD back end.

(BZ1650922)

Deprecated JSON fields have been removed

This update removes deprecated fields from the JSON output of the ceph status command

(BZ1739233)

CHAPTER 5 TECHNOLOGY PREVIEWS

This section provides an overview of Technology Preview features introduced or updated in this release of Red Hat Ceph Storage.

IMPORTANT

Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information on the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview.

51 CEPH FILE SYSTEM

CephFS snapshots

Ceph File System (CephFS) supports taking snapshots as a Technology Preview. Snapshots create an immutable view of the file system at the point in time they are taken.

52 BLOCK DEVICES (RBD)

Mapping RBD images to NBD images

The rbd-nbd utility maps RADOS Block Device (RBD) images to Network Block Devices (NBD) and enables Ceph clients to access volumes and images in Kubernetes environments. To use rbd-nbd, install the rbd-nbd package. For details, see the rbd-nbd(7) manual page.
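
A minimal sketch of mapping and unmapping an image; the pool and image names, and the resulting device node, are placeholders, not values from this document:

rbd-nbd map mypool/image1    # mypool/image1 is a placeholder image spec
rbd-nbd list-mapped
rbd-nbd unmap /dev/nbd0      # /dev/nbd0 is a placeholder device node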

53 OBJECT GATEWAY

Object Gateway archive site

With this release, an archive site is supported as a Technology Preview. The archive site allows you to have a history of versions of S3 objects that can only be eliminated through the gateways associated with the archive zone. Including an archive zone in a multizone configuration allows you to have the flexibility of an S3 object history in only one zone, while saving the space that the replicas of the versioned S3 objects would consume in the rest of the zones.

Tiering within a cluster by disk type

This release adds the ability to tier within a cluster by disk type, as a Technology Preview, by using the capabilities for mapping, for example, pools to placement targets and storage classes. Using lifecycle transition rules, it is possible to cause objects to migrate between storage pools based on policy.

S3 bucket notifications

S3 bucket notifications are now supported as a Technology Preview. When certain events are triggered on an S3 bucket, the notifications can be sent from the Ceph Object Gateway to HTTP, Advanced Message Queuing Protocol (AMQP) 0.9.1, and Kafka endpoints. Additionally, the notifications can be stored in a "PubSub" zone instead of, or in addition to, sending them to the endpoints. "PubSub" is a publish-subscribe model that enables recipients to pull notifications from Ceph.

To use the S3 notifications, install the librabbitmq and librdkafka packages.

CHAPTER 6 KNOWN ISSUES

This section documents known issues found in this release of Red Hat Ceph Storage.

61 THE CEPH-ANSIBLE UTILITY

Red Hat Ceph Storage installation on Red Hat OpenStack Platform fails

The ceph-ansible utility becomes unresponsive when attempting to install Red Hat Ceph Storage together with the Red Hat OpenStack Platform 16, and it returns an error similar to the following one:

Error: unable to exec into ceph-mon-dcn1-computehci1-2: no container with name or ID ceph-mon-dcn1-computehci1-2 found: no such container

To work around this issue, update the following part of the handler_osds.yml file, located in the ceph-ansible/roles/ceph-handler/tasks/ directory:

- name: unset noup flag
  command: "{{ container_exec_cmd | default('') }} ceph --cluster {{ cluster }} osd unset noup"
  delegate_to: "{{ groups[mon_group_name][0] }}"
  changed_when: False

To:

- name: unset noup flag
  command: "{{ hostvars[groups[mon_group_name][0]]['container_exec_cmd'] | default('') }} ceph --cluster {{ cluster }} osd unset noup"
  delegate_to: "{{ groups[mon_group_name][0] }}"
  changed_when: False

And start the installation process again

(BZ1792320)

Ansible does not unset the norebalance flag after it completes

The rolling-update.yml Ansible playbook does not unset the norebalance flag after it completes. To work around this issue, unset the flag manually.
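
For example, the flag can be cleared manually with:

ceph osd unset norebalance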

(BZ1793564)

Ansible fails to upgrade a multisite Ceph Object Gateway when the Dashboard is enabled

When the Red Hat Ceph Storage Dashboard is enabled, an attempt to use Ansible to upgrade to a further version of Red Hat Ceph Storage fails when trying to upgrade the secondary Ceph Object Gateway site in a multisite setup. This bug does not occur on the primary site, or if the Dashboard is not enabled.

(BZ1794351)

62 CEPH MANAGEMENT DASHBOARD

The Dashboard does not show correct values for certain configuration options

Both the Red Hat Ceph Storage Dashboard and the underlying ceph config show command do not return the current values for certain configuration options, such as fsid. This is probably because certain core options that are not meant for further modification after deploying a cluster are not updated, and a default value is used. As a result, the Dashboard does not show correct values for certain configuration options.

(BZ1765493 BZ1772310)

NFS Ganesha in Dashboard

The Red Hat Ceph Storage Dashboard currently does not support managing NFS Ganesha

(BZ1772316)

Dashboard does not support email verification

The Red Hat Ceph Storage Dashboard does not support any email verification when changing a user's password. This behavior is intentional, because the Dashboard supports Single Sign-On (SSO) and this feature can be delegated to the SSO provider.

(BZ1778608)

The OSD histogram graph for read and write operations is not clear

The Red Hat Ceph Storage Dashboard does not display any numbers or description in the OSD histogram graph for read and write operations, and therefore the graph is not clear.

(BZ1779170)

Dashboard returns an error when a ceph-mgr module is enabled from the ceph CLI

When enabling a Ceph Manager (ceph-mgr) module, such as telemetry, from the ceph CLI, the Red Hat Ceph Storage Dashboard displays the following error message:

0 - Unknown Error

In addition, the Dashboard does not mark the module as enabled until the Refresh button is clicked.

(BZ1785077)

The Dashboard allows modifying the LUN ID and WWN which can lead to data corruption

The Red Hat Ceph Storage Dashboard allows you to modify the LUN ID and its World Wide Name (WWN), which is not required after creating the LUN. Moreover, editing these parameters can be dangerous for certain initiators that do not fully support this feature by default. Consequently, editing these parameters after creating them can lead to data corruption. To avoid this, do not modify the LUN ID and WWN in the Dashboard.

(BZ1786455)

The Dashboard does not provide correct Ceph iSCSI error messages

If Ceph iSCSI returns an error, for example the HTTP 400 code when trying to delete an iSCSI target while a user is logged in, the Red Hat Ceph Storage Dashboard does not forward that error code and message to the Dashboard user using the pop-up notifications, but displays a generic 500 Internal Server Error. Consequently, the message that the Dashboard provides is not informative and even misleading: an expected behavior (users cannot delete a busy resource) is perceived as an operational failure (internal server error). To work around this issue, see the Dashboard logs.

(BZ1786457)

The Dashboard requires iptables rules to be disabled

The Red Hat Ceph Storage Dashboard is unable to perform any iSCSI operations, such as creating a gateway, unless all iptables rules are manually disabled on the Ceph iSCSI node. To do so, use the following command as root or the sudo user:

iptables -F

Note that after a reboot, the rules are enabled again. Either disable them again, or delete them permanently.

(BZ1792818)

63 PACKAGES

Current version of Grafana causes certain bugs in the Dashboard

Red Hat Ceph Storage 4 uses the Grafana version 5.2.4. This version causes the following bugs in the Red Hat Ceph Storage Dashboard:

When navigating to Pools > Overall Performance, Grafana returns the following error:

TypeError lc[ttype] is undefinedtrue

When viewing a pool's performance details (Pools > select a pool from the list > Performance Details), the Grafana bar is displayed along with other graphs and values, but it should not be there.

These bugs will be fixed after rebasing to a newer Grafana version in a future release of Red Hat Ceph Storage.

(BZ1786107 BZ1762197 BZ1765536)

64 RED HAT ENTERPRISE LINUX

Ansible cannot start NFS Ganesha if SELinux is in enforcing mode

When using SELinux in enforcing mode on Red Hat Enterprise Linux 8.1, the ceph-ansible utility fails to start the NFS Ganesha service, because the SELinux policy currently does not allow creating a directory required for NFS Ganesha.

(BZ1794027 BZ1796160)

CHAPTER 7 DEPRECATED FUNCTIONALITY

This section provides an overview of functionality that has been deprecated in all minor releases up to this release of Red Hat Ceph Storage.

Ubuntu is no longer supported

Installing a Red Hat Ceph Storage 4 cluster on Ubuntu is no longer supported. Use Red Hat Enterprise Linux as the underlying operating system.

71 THE CEPH-ANSIBLE UTILITY

Configuring iSCSI gateway using ceph-ansible is no longer supported

Configuring the Ceph iSCSI gateway by using the ceph-ansible utility is no longer supported. Use ceph-ansible to install the gateway, and then use either the gwcli utility or the Red Hat Ceph Storage Dashboard to configure the gateway. For details, see the Using the Ceph iSCSI Gateway chapter in the Block Device Guide for Red Hat Ceph Storage 4.

72 THE CEPH-DISK UTILITY

ceph-disk is deprecated

With this release, the ceph-disk utility is no longer supported. The ceph-volume utility is used instead. For details, see the Why does ceph-volume replace `ceph-disk` section in the Administration Guide for Red Hat Ceph Storage 4.

73 RADOS

FileStore is no longer supported in production

The FileStore OSD back end is now deprecated, because the new BlueStore back end is now fully supported in production. For details, see the How to migrate the object store from FileStore to BlueStore section in the Red Hat Ceph Storage 4 Installation Guide.

Ceph configuration is now deprecated

The Ceph configuration file (ceph.conf) is now deprecated in favor of the new centralized configuration stored in Monitors. For details, see the Configuration is now stored in Monitors accessible by using `ceph config` release note.

CHAPTER 8 SOURCES

The updated Red Hat Ceph Storage source code packages are available at http://ftp.redhat.com/redhat/linux/enterprise/7Server/en/RHCEPH/SRPMS/.

Red Hat Ceph Storage 40 Release Notes

20

  • Table of Contents
  • CHAPTER 1 INTRODUCTION
  • CHAPTER 2 ACKNOWLEDGMENTS
  • CHAPTER 3 NEW FEATURES
    • 31 THE CEPH-ANSIBLE UTILITY
    • 32 CEPH MANAGEMENT DASHBOARD
    • 33 CEPH FILE SYSTEM
    • 34 CEPH MEDIC
    • 35 ISCSI GATEWAY
    • 36 OBJECT GATEWAY
    • 37 PACKAGES
    • 38 RADOS
    • 39 BLOCK DEVICES (RBD)
    • 310 RBD MIRRORING
      • CHAPTER 4 BUG FIXES
        • 41 THE CEPH-ANSIBLE UTILITY
        • 42 OBJECT GATEWAY
        • 43 RADOS
          • CHAPTER 5 TECHNOLOGY PREVIEWS
            • 51 CEPH FILE SYSTEM
            • 52 BLOCK DEVICES (RBD)
            • 53 OBJECT GATEWAY
              • CHAPTER 6 KNOWN ISSUES
                • 61 THE CEPH-ANSIBLE UTILITY
                • 62 CEPH MANAGEMENT DASHBOARD
                • 63 PACKAGES
                • 64 RED HAT ENTERPRISE LINUX
                  • CHAPTER 7 DEPRECATED FUNCTIONALITY
                    • 71 THE CEPH-ANSIBLE UTILITY
                    • 72 THE CEPH-DISK UTILITY
                    • 73 RADOS
                      • CHAPTER 8 SOURCES
Page 5: Red Hat Ceph Storage 4...May 20, 2020  · With this release, the Ceph dashboard installation code was merged into the Ceph Ansible playbooks. Ceph Ansible does a containerized deployment

Red Hat Ceph Storage 40 Release Notes

2

CHAPTER 1 INTRODUCTIONRed Hat Ceph Storage is a massively scalable open software-defined storage platform that combinesthe most stable version of the Ceph storage system with a Ceph management platform deploymentutilities and support services

The Red Hat Ceph Storage documentation is available athttpsaccessredhatcomdocumentationenred-hat-ceph-storage

CHAPTER 1 INTRODUCTION

3

CHAPTER 2 ACKNOWLEDGMENTSRed Hat Ceph Storage 40 contains many contributions from the Red Hat Ceph Storage teamAdditionally the Ceph project is seeing amazing growth in the quality and quantity of contributions fromindividuals and organizations in the Ceph community We would like to thank all members of the Red HatCeph Storage team all of the individual contributors in the Ceph community and additionally (but notlimited to) the contributions from organizations such as

Intel

Fujitsu

UnitedStack

Yahoo

UbuntuKylin

Mellanox

CERN

Deutsche Telekom

Mirantis

SanDisk

SUSE

Red Hat Ceph Storage 40 Release Notes

4

CHAPTER 3 NEW FEATURESThis section lists all major updates enhancements and new features introduced in this release of RedHat Ceph Storage

The main features added by this release are

General availability of the BlueStore OSD back end

Ceph Management Dashboard

Ceph Object Gateway Beast Asio web server

Erasure coding for Ceph Block Device

Auto-scaling placement groups

Web-based installation interface

On-wire encryption

RBD performance monitoring and metrics gathering tools

Red Hat Enterprise Linux in FIPS mode

31 THE CEPH-ANSIBLE UTILITY

Ceph OSDs created with ceph-disk are migrated to ceph-volume during upgrade

When upgrading to Red Hat Ceph Storage 4 all running Ceph OSDs previously created by the ceph-disk utility will be migrated to the ceph-volume utility because ceph-disk has been deprecated in thisrelease

For bare-metal and container deployments of Red Hat Ceph Storage the ceph-volume utility does asimple scan and takes over the existing Ceph OSDs deployed by the ceph-disk utility Also do not usethese migrated devices in configurations for subsequent deployments Note that you cannot create anynew Ceph OSDs during the upgrade process

After the upgrade all Ceph OSDs created by ceph-disk will start and operate like any Ceph OSDscreated by ceph-volume

Ansible playbooks for scaling of all Ceph services

Previously ceph-ansible playbooks offered limited scale up and scale down features only to core Cephproducts such as Monitors and OSDs With this update additional Ansible playbooks allow for scaling ofall Ceph services

Ceph iSCSI packages merged into single package

The ceph-iscsi-cli and ceph-iscsi-config packages have been merged to one package named ceph-iscsi

The nfs-ganesha service is now supported as a standalone deployment

Red Hat Openstack Directory (OSPd) requires the ceph-ansible utility to be able to deploy the nfs-ganesha service and configure it so that it points to an external unmanaged pre-existing Ceph clusterAs of Red Hat Ceph Storage 4 ceph-ansible allows the deployment of an internal nfs-ganesha service

CHAPTER 3 NEW FEATURES

5

with an external Ceph cluster

Ceph container can now write logs to a respective daemon file

The previous way of logging for containerized Ceph environments did not allow limiting the journalctloutput when looking at log data by using the sosreport collection With this release logging can beenabled or disabled for a particular Ceph daemon with the following command

ceph config set daemonid log_to_file true

Where daemon is the type of the daemon and id is its ID For example to enable logging for the Monitordaemon with ID mon0

ceph config set monmon0 log_to_file true

This new feature makes debugging easier

Ability to configure Ceph Object Gateway to use TLS encryption

This release of Red Hat Ceph Storage provides the ability to configure the Ceph Object Gatewaylistener with an SSL certificate for TLS encryption by using the radosgw_frontend_ssl_certificatevariable to secure the Transmission Control Protocol (TCP) traffic

Ansible playbook for migrating OSDs from FileStore to BlueStore

A new Ansible playbook has been added to migrate OSDs from FileStore to BlueStore The object storemigration is not done as part of the upgrade process to Red Hat Ceph Storage 4 Do the migration afterthe upgrade completes For details see the How to migrate the object store from FileStore to BlueStoresection in the Red Hat Ceph Storage Administration Guide

32 CEPH MANAGEMENT DASHBOARD

Improvements to information for pool usage

With this enhancement valuable information was added to the pools table The following columns wereadded usage read bytes write bytes read operations and write operations Also the PlacementGroups column was renamed to Pg Status

Red Hat Ceph Storage Dashboard alerts

Red Hat Ceph Storage Dashboard supports alerts based on Ceph metrics and configured thresholdsThe Prometheus AlertManager configures gathers and triggers the alerts The alerts are displayed inthe Dashboard as a pop-up notifications in the upper-right corner You can view details of recent alertsin Cluster gt Alerts You can configure the alerts only in Prometheus but you can temporarily mute themfrom the Dashboard by creating Alert Silences in Cluster gt Silences

Displaying and hiding Ceph components in Dashboard

In the Red Hat Ceph Storage Dashboard you can display or hide Ceph components such as Ceph iSCSIRBD mirroring Ceph Block Devices Ceph File System or Ceph Object Gateway This feature allows youto hide components that are not configured

Ceph dashboard has been added to the Ceph Ansible playbooks

With this release the Ceph dashboard installation code was merged into the Ceph Ansible playbooksCeph Ansible does a containerized deployment of the Ceph dashboard regardless of the Red HatCeph Storage deployment type bare metal or containers These four new roles were added ceph-

Red Hat Ceph Storage 40 Release Notes

6

grafana ceph-dashboard ceph-prometheus and ceph-node-exporter

Red Hat Ceph Storage Dashboard alerts

Red Hat Ceph Storage Dashboard supports alerts based on Ceph metrics and configured thresholdsThe Prometheus AlertManager configures gathers and triggers the alerts

Viewing the cluster hierarchy from the Dashboard

Red Hat Ceph Storage Dashboard provides the ability to view the cluster hierarchy For details see theViewing the CRUSH map section in the Dashboard Guide for Red Hat Ceph Storage 4

33 CEPH FILE SYSTEM

ceph -w now shows information about CephFS scrubs

Previously it was not possible to check the ongoing Ceph File System (CephFS) scrubs status asidefrom checking the Metadata server (MDS) logs With this update the ceph -w command showsinformation about active CephFS scrubs to better understand the status

ceph-mgr volumes module for managing CephFS exports

This release provides Ceph Manager (ceph-mgr) volumes module to manage Ceph File System(CephFS) exports The volumes module implements the following file system export abstractions

FS volumes an abstraction for CephFS file systems

FS subvolumes an abstraction for independent CephFS directory trees

FS subvolume groups an abstraction for a directory higher than FS subvolumes to effectpolicies such as file layouts across a set of subvolumes

In addition these new commands are now supported

fs subvolume ls for listing subvolumes

fs subvolumegroup ls for listing subvolumes groups

fs subvolume snapshot ls for listing subvolume snapshots

fs subvolumegroup snapshot ls for listing subvolume group snapshots

fs subvolume rm for removing snapshots

34 CEPH MEDIC

ceph-medic can check the health of Ceph running in containers

With this release the ceph-medic utility can now check the health of a Red Hat Ceph Storage clusterrunning within a container

35 ISCSI GATEWAY

Non-administrators can now be used for the ceph-iscsi service

As of Red Hat Ceph Storage 4 non-administrative Ceph users may be used for the ceph-iscsi service

CHAPTER 3 NEW FEATURES

7

As of Red Hat Ceph Storage 4 non-administrative Ceph users may be used for the ceph-iscsi serviceby setting the cluster_client_name in the etccephiscsi-gatewaycfg on all iSCSI Gateways Thisallows resources to be restricted based on users

Running Ceph iSCSI Gateways can now be removed

As of Red Hat Ceph Storage 4 running iSCSI Gateways can now be removed from a Ceph iSCSI clusterfor maintenance or to reallocate resources The gatewayrsquos iSCSI target and its portals will be stoppedand all iSCSI target objects for that gateway will be removed from the kernel and the gatewayconfiguration Removing gateways that are down is not yet supported

36 OBJECT GATEWAY

The Beast HTTP front end

In Red Hat Ceph Storage 4 the default HTTP front end for the Ceph Object Gateway is Beast TheBeast front end uses the BoostBeast library for HTTP parsing and the BoostAsio library forasynchronous IO For details see the Using the Beast front end in the Object Gateway Configurationand Administration Guide for Red Hat Ceph Storage 4

Support for S3 MFA-Delete

With this release the Ceph Object Gateway supports S3 MFA-Delete using Time-Based One-TimePassword (TOTP) one-time passwords as an authentication factor This feature adds security againstinappropriate data removal You can configure buckets to require a TOTP one-time token in addition tostandard S3 authentication to delete data

Addition of storage classes and lifecycle transitions to the Ceph Object Gateway

The Ceph Object Gateway now provides support for storage classes an S3 compatible representationof the Ceph Object Gatewayrsquos underlying placement targets and for lifecycle transitions a mechanismto migrate objects between classes Storage classes and lifecycle transitions provide a higher level ofcontrol over data placement for the applications that need it They have many potential uses includingperformance tuning for specific workloads as well as automatic migration of inactive data to coldstorage

Users can now create new IAM policies and roles using REST APIs

With the release of Red Hat Ceph Storage 4 REST APIs for IAM roles and user policies are now availablein the same namespace as S3 APIs and can be accessed using the same endpoint as S3 APIs in theCeph Object Gateway This allows end users to create new IAM policies and roles using REST APIs

37 PACKAGES

Ability to install a Ceph cluster using a web-based interface

With this release the Cockpit web-based interface is supported Cockpit allows you to install a Red HatCeph Storage 4 cluster and other components such as Metadata Servers Ceph clients or the CephObject Gateway on base-metal or in containers For details see the Installing Red Hat Ceph Storageusing the Cockpit Web User Interface chapter in the Red Hat Ceph Storage 4 Installation Guide Notethat minimal experience with Red Hat Ceph Storage is required

38 RADOS

Ceph on-wire encryption

Starting with Red Hat Ceph Storage 4 you can enable encryption for all Ceph traffic over the network

Red Hat Ceph Storage 40 Release Notes

8

with the introduction of the messenger version 2 protocol For details see the Ceph on-wire encryptionchapter in the Architecture Guide and Encryption in transit section in the Data Security and HardeningGuide for Red Hat Ceph Storage 4

OSD BlueStore is now fully supported

BlueStore is a new back end for the OSD daemons that allows for storing objects directly on the blockdevices Because BlueStore does not need any file system interface it improves performance of Cephstorage clusters To learn more about the BlueStore OSD back end see the OSD BlueStore chapter inthe Administration Guide for Red Hat Ceph Storage 4

Red Hat Enterprise Linux in FIPS mode

With this release you can install Red Hat Ceph Storage on Red Hat Enterprise Linux where the FIPSmode is enabled

Changes the ceph df output and a new ceph osd df command

The output of the ceph df command has been improved Notably the RAW USED and RAW USEDvalues now show the preallocated space for the db and wal BlueStore partitions The ceph osd dfcommand shows the OSD utilization stats such as the amount of written data

Asynchronous recovery for non-acting OSD sets

Previously recovery with Ceph was a synchronous process by blocking write operations to objects untilthose objects were recovered In this release the recovery process is now asynchronous by not blockingwrite operations to objects only in the non-acting set of OSDs This new feature requires having morethan the minimum number of replicas as to have enough OSDs in the non-acting set

The new configuration option osd_async_recovery_min_cost controls how much asynchronousrecovery to do The default value for this option is 100 A higher value means asynchronous recovery willbe less whereas a lower value means asynchronous recovery will be more

Configuration is now stored in Monitors accessible by using ceph config

In this release Red Hat Ceph Storage centralizes configuration in Monitors instead of using the Cephconfiguration file (cephconf) Previously changing configuration included manually updating cephconf distributing it to appropriate nodes and restarting all affected daemons Now the Monitorsmanage a configuration database that has the same semantic structure as cephconf The database isaccessible by the ceph config command Any changes to the configuration are applied to daemons orclients in the system immediately and restarting them is no longer needed Use the ceph config -hcommand for details on the available set of commands Note that a Ceph configuration file is stillrequired to identify the Monitors nodes

Placement groups can now be auto-scaled

Red Hat Ceph Storage 4 introduces the ability to auto-scale placement groups (PGs) The number ofplacement groups (PGs) in a pool plays a significant role in how a cluster peers distributes data andrebalances Auto-scaling the number of PGs can make managing the cluster easier The new pg-autoscaling command provides recommendations for scaling PGs or automatically scales PGs basedon how the cluster is being used For more details about auto-scaling PGs see the Auto-scalingplacement groups section in the Storage Strategies Guide for Red Hat Ceph Storage 4

Introduction of diskprediction module

The Red Hat Ceph Storage diskprediction module gathers metrics to predict disk failures before theyhappen The module has two modes cloud and local With this release only the local mode is supportedThe local mode does not require any external server for data analysis It uses an internal predictormodule for disk prediction service and then returns the disk prediction result to the Ceph system

CHAPTER 3 NEW FEATURES

9

To enable the diskprediction module

ceph mgr module enable diskprediction_local

To set the prediction mode

ceph config set global device_failure_prediction_mode local

To disable the diskprediction module

ceph config set global device_failure_prediction_mode none

New configurable option mon_memory_target

Red Hat Ceph Storage 4 introduces a new configurable option mon_memory_target used to set thetarget amount of bytes for Monitor memory usage It specifies the amount of memory to allocate andmanage using the priority cache tuner for the associated Monitor daemon caches The default value of mon_memory_target is set to 2 GiB and you can change it during runtime with

ceph config set global mon_memory_target size

Prior to this release as a cluster scaled the Monitor specific RSS usage exceeded the limits that wereset using the mon_osd_cache_size option which led to issues This enhancement allows for improvedmanagement of memory allocated to the monitor caches and keeps the usage within specified limits

39 BLOCK DEVICES (RBD)

Erasure coding for Ceph Block Device

Erasure coding for Ceph Block Device (RBD) is now fully supported This feature allows RBD to storetheir data in an erasure coded pool For details see the Erasure Coding with Overwrites section in theStorage Strategies for Red Hat Ceph Storage 4

RBD performance monitoring and metrics gathering tools

Red Hat Ceph Storage 4 now incorporates new Ceph Block Device performance monitoring utilities foraggregated RBD image metrics for IOPS throughput and latency Per-image RBD metrics are nowavailable using the Ceph Manager Prometheus module the Ceph Dashboard and the rbd CLI using the rbd perf image iostat or rbd perf image iotop commands

Cloned images can be created from non-primary images

Creating cloned child RBD images from mirrored non-primary parent image is now supportedPreviously cloning of mirrored images was only supported for primary images When cloning goldenimages for virtual machines this restriction prevented the creation of new cloned images from thegolden non-primary image This update removes this restriction and cloned images can be created fromnon-primary mirrored images

Segregating RBD images within isolated namespaces within the same pool

RBD images can now be segregated within isolated namespaces within the same pool When using CephBlock Devices directly without a higher-level system such as OpenStack or OpenShift ContainerStorage it was not possible to restrict user access to specific RBD images When combined with CephXcapabilities users can be restricted to specific pool namespaces to restrict access to RBD images

Red Hat Ceph Storage 40 Release Notes

10

Moving RBD images between different pools within the same cluster

This version of Red Hat Ceph Storage adds the ability to move RBD images between different poolswithin the same cluster For details see the Moving images between pools section in the Block DeviceGuide for Red Hat Ceph Storage 4

Long-running RBD operations can run in the background

Long-running RBD operations such as image removal or cloned image flattening can now bescheduled to run in the background RBD operations that involve iterating over every backing RADOSobject for the image can take a long time depending on the size of the image When using the CLI toperform one of these operations the rbd CLI is blocked until the operation is complete Theseoperations can now be scheduled to run by the Ceph Manager as a background task by using the ceph rbd task add commands The progress of these tasks is visible on the Ceph dashboard as well as byusing the CLI

310 RBD MIRRORING

Support for multiple active instances of RBD mirror daemon in a single storage cluster

Red Hat Ceph Storage 4 now supports deploying multiple active instances of the RBD mirror daemon ina single storage cluster This enables multiple RBD mirror daemons to perform replication for the RBDimages or pools using an algorithm for chunking the images across the number of active mirroringdaemons

CHAPTER 3 NEW FEATURES

11

CHAPTER 4 BUG FIXESThis section describes bugs with significant impact on users that were fixed in this release of Red HatCeph Storage In addition the section includes descriptions of fixed known issues found in previousversions

41 THE CEPH-ANSIBLE UTILITY

The Ansible playbook no longer takes hours to complete the fuser command

Previously on a system running thousands of processes the fuser command in the Ansible playbookcould take several minutes or hours to complete because it iterates over all PID present in the procdirectory Because of this the handler tasks take a long time to complete checking if a Ceph process isalready running With this update instead of using the fuser command the Ansible playbook now checksthe socket file in the procnetunix directory and the handler tasks that check the Ceph socketcompletes almost instantly

(BZ1717011)

The purge-docker-clusteryml Ansible playbook no longer fails

Previously the purge-docker-clusteryml Ansible playbook could fail when trying to unmap RADOSBlock Devices (RBDs) because either the binary was absent or because the Atomic host versionprovided was too old With this update Ansible now uses the sysfs method to unmap devices if thereare any and the purge-docker-clusteryml playbook no longer fails

(BZ1766064)

42 OBJECT GATEWAY

The Expiration Days S3 Lifecycle parameter can now be set to 0

The Ceph Object Gateway did not accept the value of 0 for the Expiration Days Lifecycleconfiguration parameter Consequently setting the expiration to 0 could not be used to triggerbackground delete operation of objects With this update Expiration Days can be set to 0 as expected

(BZ1493476)

Archive zone fetches the current version of the source objects

An object is synced from multiple source zones to archive zone This behavior can lead to differentversions of the same object on the archive zone This update eliminates duplicate versions from gettingcreated by ensuring that the archive zone fetches the current version of the source object

(BZ1760862)

43 RADOS

A message to set expected_num_objects is no longer shown when using BlueStore

With this update a message that recommends setting the expected_num_obejcts parameter duringthe creation of the BlueStore pools has been removed because this message does not apply when usingthe BlueStore OSD back end

(BZ1650922)

Red Hat Ceph Storage 40 Release Notes

12

Deprecated JSON fields have been removed

This update removes deprecated fields from the JSON output of the ceph status command

(BZ1739233)

CHAPTER 4 BUG FIXES

13

CHAPTER 5 TECHNOLOGY PREVIEWSThis section provides an overview of Technology Preview features introduced or updated in this releaseof Red Hat Ceph Storage

IMPORTANT

Technology Preview features are not supported with Red Hat production service levelagreements (SLAs) might not be functionally complete and Red Hat does notrecommend to use them for production These features provide early access toupcoming product features enabling customers to test functionality and providefeedback during the development process

For more information on Red Hat Technology Preview features support scope seehttpsaccessredhatcomsupportofferingstechpreview

51 CEPH FILE SYSTEM

CephFS snapshots

Ceph File System (CephFS) supports taking snapshots as a Technology Preview Snapshots create animmutable view of the file system at the point in time they are taken

52 BLOCK DEVICES (RBD)

Mapping RBD images to NBD images

The rbd-nbd utility maps RADOS Block Device (RBD) images to Network Block Devices (NBD) andenables Ceph clients to access volumes and images in Kubernetes environments To use rbd-nbd installthe rbd-nbd package For details see the rbd-nbd(7) manual page

53 OBJECT GATEWAY

Object Gateway archive site

With this release an archive site is supported as a Technology Preview The archive site allows you tohave a history of versions of S3 objects that can only be eliminated through the gateways associatedwith the archive zone Including an archive zone in a multizone configuration allows you to have theflexibility of an S3 object history in only one zone while saving the space that the replicas of the versionsS3 objects would consume in the rest of the zones

Tiering within a cluster by disk type

This release adds the ability to tier within a cluster by disk type as a Technology Preview by using thecapabilities for mapping for example pools to placement targets and storage classes Using lifecycletransition rules it is possible to cause objects to migrate between storage pools based on policy

S3 bucket notifications

S3 bucket notifications are now supported as a Technology Preview When certain events are triggeredon an S3 bucket the notifications can be sent from the Ceph Object Gateway to HTTP AdvancedMessage Queuing Protocol (AMQP) 91 and Kafka endpoints Additionally the notifications can bestored in a ldquoPubSubrdquo zone instead of or in addition to sending them to the endpoints ldquoPubSubrdquo is apublish-subscribe model that enables recipients to pull notifications from Ceph

Red Hat Ceph Storage 40 Release Notes

14

To use the S3 notifications install the librabbitmq and librdkafka packages

CHAPTER 5 TECHNOLOGY PREVIEWS

15

CHAPTER 6 KNOWN ISSUESThis section documents known issues found in this release of Red Hat Ceph Storage

61 THE CEPH-ANSIBLE UTILITY

Red Hat Ceph Storage installation on Red Hat OpenStack Platform fails

The ceph-ansible utility becomes unresponsive when attempting to install Red Hat Ceph Storagetogether with the Red Hat OpenStack Platform 16 and it returns an error similar to the following one

Error unable to exec into ceph-mon-dcn1-computehci1-2 no container with name or ID ceph-mon-dcn1-computehci1-2 found no such container

To work around this issue update the following part of the handler_osdsyml file located in the ceph-ansiblerolesceph-handlertasks directory

- name unset noup flag command container_exec_cmd | default() ceph --cluster cluster osd unset noup delegate_to groups[mon_group_name][0] changed_when False

To

- name unset noup flag command hostvars[groups[mon_group_name][0]][container_exec_cmd] | default() ceph --cluster cluster osd unset noup delegate_to groups[mon_group_name][0] changed_when False

And start the installation process again

(BZ1792320)

Ansible does not unset the norebalance flag after it completes

The rolling-updateyml Ansible playbook does not unset the norebalance flag after it completes Towork around this issue unset the flag manually

(BZ1793564)

Ansible fails to upgrade a multisite Ceph Object Gateway when the Dashboard is enabled

When the Red Hat Ceph Storage Dashboard is enabled an attempt to use Ansible to upgrade to afurther version of Red Hat Ceph Storage fails when trying to upgrade the secondary Ceph ObjectGateway site in a multisite setup This bug does not occur on the primary site or if the Dashboard is notenabled

(BZ1794351)

62 CEPH MANAGEMENT DASHBOARD

The Dashboard does not show correct values for certain configuration options

Both the Red Hat Ceph Storage Dashboard and the underlying ceph config show command do not

Red Hat Ceph Storage 40 Release Notes

16

return the current values for certain configuration options such as fsid This is probably because certaincore options that are not meant for further modification after deploying a cluster are not updated and adefault value is used As a result the Dashboard does not show correct values for certain configurationoptions

(BZ1765493 BZ1772310)

NFS Ganesha in Dashboard

The Red Hat Ceph Storage Dashboard currently does not support managing NFS Ganesha

(BZ1772316)

Dashboard does not support email verification

The Red Hat Ceph Storage Dashboard does not support any email verification when changing a userspassword This behavior is intentional because the Dashboard supports Single Sign-On (SSO) and thisfeature can be delegated to the SSO provider

(BZ1778608)

The OSD histogram graph for read and write operations is not clear

The Red Hat Ceph Storage Dashboard does not display any numbers or description in the OSDhistogram graph for read and write operations and therefore the graph is not clear

(BZ1779170)

Dashboard returns an error when a ceph-mgr module is enabled from the ceph CLI

When enabling a Ceph Manager (ceph-mgr) module such as telemetry from the ceph CLI the Red HatCeph Storage Dashboard displays the following error message

0 - Unknown Error

In addition the Dashboard does not mark the module as enabled until the Refresh button is clicked

(BZ1785077)

The Dashboard allows modifying the LUN ID and WWN which can lead to data corruption

The Red Hat Ceph Storage Dashboard allows you to modify the LUN ID and its World Wide Name(WWN) which is not required after creating the LUN Moreover editing these parameters can bedangerous for certain initiators that do not fully support this feature by default Consequently editingthese parameters after creating them can lead to data corruption To avoid this do not modify the LUNID and WWN in the Dashboard

(BZ1786455)

The Dashboard does not provide correct Ceph iSCSI error messages

If the Ceph iSCSI returns an error for example the HTTP 400 code when trying to delete an iSCSItarget while a user is logged in the Red Hat Ceph Storage Dashboard does not forward that error codeand message to the Dashboard user using the pop-up notifications but displays a generic 500 InternalServer Error Consequently the message that the Dashboard provides is not informative and evenmisleading an expected behavior (users cannot delete a busy resource) is perceived as an operationalfailure (internal server error) To work around this issue see the Dashboard logs

(BZ1786457)

CHAPTER 6 KNOWN ISSUES

17

The Dashboard requires to disable iptables rules

The Red Hat Ceph Storage Dashboard is unable to perform any iSCSI operations such as creating agateway unless all iptables rules are manually disabled on the Ceph iSCSI node To do so use thefollowing command as root or the sudo user

iptables -F

Note that after a reboot the rules are enabled again Either disable them again or delete thempermanently

(BZ1792818)

63 PACKAGES

Current version of Grafana causes certain bugs in the Dashboard

Red Hat Ceph Storage 4 uses the Grafana version 524 This version causes the following bugs in theRed Hat Ceph Storage Dashboard

When navigating to Pools > Overall Performance, Grafana returns the following error:

TypeError: l.c[t.type] is undefined true

When viewing a pool's performance details (Pools > select a pool from the list > Performance Details), the Grafana bar is displayed along with other graphs and values, but it should not be there.

These bugs will be fixed after rebasing to a newer Grafana version in a future release of Red Hat Ceph Storage.

(BZ#1786107, BZ#1762197, BZ#1765536)

6.4. RED HAT ENTERPRISE LINUX

Ansible cannot start NFS Ganesha if SELinux is in enforcing mode

When using SELinux in enforcing mode on Red Hat Enterprise Linux 8.1, the ceph-ansible utility fails to start the NFS Ganesha service because the SELinux policy currently does not allow creating a directory required for NFS Ganesha.
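
No workaround is documented for this issue; as a diagnostic sketch only, the underlying SELinux denial can usually be confirmed from the audit log, assuming auditd is running on the node:

# Look for recent SELinux denials related to NFS Ganesha
ausearch -m AVC -ts recent | grep -i ganesha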

(BZ#1794027, BZ#1796160)

CHAPTER 7. DEPRECATED FUNCTIONALITY

This section provides an overview of functionality that has been deprecated in all minor releases up to this release of Red Hat Ceph Storage.

Ubuntu is no longer supported

Installing a Red Hat Ceph Storage 4 cluster on Ubuntu is no longer supported. Use Red Hat Enterprise Linux as the underlying operating system.

7.1. THE CEPH-ANSIBLE UTILITY

Configuring iSCSI gateway using ceph-ansible is no longer supported

Configuring the Ceph iSCSI gateway by using the ceph-ansible utility is no longer supported. Use ceph-ansible to install the gateway, and then use either the gwcli utility or the Red Hat Ceph Storage Dashboard to configure the gateway. For details, see the Using the Ceph iSCSI Gateway chapter in the Block Device Guide for Red Hat Ceph Storage 4.
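
As an illustration only, once the gateway is installed, its configuration tree can be reviewed from the command line with gwcli before making changes through the Dashboard:

# Display the current iSCSI gateway configuration (targets, gateways, disks, and clients)
gwcli ls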

7.2. THE CEPH-DISK UTILITY

ceph-disk is deprecated

With this release, the ceph-disk utility is no longer supported. The ceph-volume utility is used instead. For details, see the Why does ceph-volume replace `ceph-disk` section in the Administration Guide for Red Hat Ceph Storage 4.
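
For reference, a minimal sketch of the corresponding ceph-volume workflow; the device path /dev/sdb is an example only:

# List devices and logical volumes associated with existing OSDs
ceph-volume lvm list

# Create a new BlueStore OSD on a device
ceph-volume lvm create --bluestore --data /dev/sdb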

7.3. RADOS

FileStore is no longer supported in production

The FileStore OSD back end is now deprecated because the new BlueStore back end is now fully supported in production. For details, see the How to migrate the object store from FileStore to BlueStore section in the Red Hat Ceph Storage 4 Installation Guide.
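
Before planning a migration, one way to check which back end a given OSD currently uses; the OSD ID 0 is a placeholder:

# Show the object store back end (filestore or bluestore) reported by an OSD
ceph osd metadata 0 | grep osd_objectstore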

Ceph configuration is now deprecated

The Ceph configuration file (ceph.conf) is now deprecated in favor of the new centralized configuration stored in Monitors. For details, see the Configuration is now stored in Monitors accessible by using `ceph config` release note.
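
A minimal sketch of working with the centralized configuration from the CLI; the option name and daemon targets are examples only:

# Set an option in the centralized configuration database (no daemon restart required)
ceph config set osd osd_memory_target 4294967296

# Read the value back for a specific daemon
ceph config get osd.0 osd_memory_target

# Dump the entire configuration database
ceph config dump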

CHAPTER 8. SOURCES

The updated Red Hat Ceph Storage source code packages are available at http://ftp.redhat.com/redhat/linux/enterprise/7Server/en/RHCEPH/SRPMS/.

CHAPTER 2 ACKNOWLEDGMENTSRed Hat Ceph Storage 40 contains many contributions from the Red Hat Ceph Storage teamAdditionally the Ceph project is seeing amazing growth in the quality and quantity of contributions fromindividuals and organizations in the Ceph community We would like to thank all members of the Red HatCeph Storage team all of the individual contributors in the Ceph community and additionally (but notlimited to) the contributions from organizations such as

Intel

Fujitsu

UnitedStack

Yahoo

UbuntuKylin

Mellanox

CERN

Deutsche Telekom

Mirantis

SanDisk

SUSE

Red Hat Ceph Storage 40 Release Notes

4

CHAPTER 3 NEW FEATURESThis section lists all major updates enhancements and new features introduced in this release of RedHat Ceph Storage

The main features added by this release are

General availability of the BlueStore OSD back end

Ceph Management Dashboard

Ceph Object Gateway Beast Asio web server

Erasure coding for Ceph Block Device

Auto-scaling placement groups

Web-based installation interface

On-wire encryption

RBD performance monitoring and metrics gathering tools

Red Hat Enterprise Linux in FIPS mode

31 THE CEPH-ANSIBLE UTILITY

Ceph OSDs created with ceph-disk are migrated to ceph-volume during upgrade

When upgrading to Red Hat Ceph Storage 4 all running Ceph OSDs previously created by the ceph-disk utility will be migrated to the ceph-volume utility because ceph-disk has been deprecated in thisrelease

For bare-metal and container deployments of Red Hat Ceph Storage the ceph-volume utility does asimple scan and takes over the existing Ceph OSDs deployed by the ceph-disk utility Also do not usethese migrated devices in configurations for subsequent deployments Note that you cannot create anynew Ceph OSDs during the upgrade process

After the upgrade all Ceph OSDs created by ceph-disk will start and operate like any Ceph OSDscreated by ceph-volume

Ansible playbooks for scaling of all Ceph services

Previously ceph-ansible playbooks offered limited scale up and scale down features only to core Cephproducts such as Monitors and OSDs With this update additional Ansible playbooks allow for scaling ofall Ceph services

Ceph iSCSI packages merged into single package

The ceph-iscsi-cli and ceph-iscsi-config packages have been merged to one package named ceph-iscsi

The nfs-ganesha service is now supported as a standalone deployment

Red Hat Openstack Directory (OSPd) requires the ceph-ansible utility to be able to deploy the nfs-ganesha service and configure it so that it points to an external unmanaged pre-existing Ceph clusterAs of Red Hat Ceph Storage 4 ceph-ansible allows the deployment of an internal nfs-ganesha service

CHAPTER 3 NEW FEATURES

5

with an external Ceph cluster

Ceph container can now write logs to a respective daemon file

The previous way of logging for containerized Ceph environments did not allow limiting the journalctloutput when looking at log data by using the sosreport collection With this release logging can beenabled or disabled for a particular Ceph daemon with the following command

ceph config set daemonid log_to_file true

Where daemon is the type of the daemon and id is its ID For example to enable logging for the Monitordaemon with ID mon0

ceph config set monmon0 log_to_file true

This new feature makes debugging easier

Ability to configure Ceph Object Gateway to use TLS encryption

This release of Red Hat Ceph Storage provides the ability to configure the Ceph Object Gatewaylistener with an SSL certificate for TLS encryption by using the radosgw_frontend_ssl_certificatevariable to secure the Transmission Control Protocol (TCP) traffic

Ansible playbook for migrating OSDs from FileStore to BlueStore

A new Ansible playbook has been added to migrate OSDs from FileStore to BlueStore The object storemigration is not done as part of the upgrade process to Red Hat Ceph Storage 4 Do the migration afterthe upgrade completes For details see the How to migrate the object store from FileStore to BlueStoresection in the Red Hat Ceph Storage Administration Guide

32 CEPH MANAGEMENT DASHBOARD

Improvements to information for pool usage

With this enhancement valuable information was added to the pools table The following columns wereadded usage read bytes write bytes read operations and write operations Also the PlacementGroups column was renamed to Pg Status

Red Hat Ceph Storage Dashboard alerts

Red Hat Ceph Storage Dashboard supports alerts based on Ceph metrics and configured thresholdsThe Prometheus AlertManager configures gathers and triggers the alerts The alerts are displayed inthe Dashboard as a pop-up notifications in the upper-right corner You can view details of recent alertsin Cluster gt Alerts You can configure the alerts only in Prometheus but you can temporarily mute themfrom the Dashboard by creating Alert Silences in Cluster gt Silences

Displaying and hiding Ceph components in Dashboard

In the Red Hat Ceph Storage Dashboard you can display or hide Ceph components such as Ceph iSCSIRBD mirroring Ceph Block Devices Ceph File System or Ceph Object Gateway This feature allows youto hide components that are not configured

Ceph dashboard has been added to the Ceph Ansible playbooks

With this release the Ceph dashboard installation code was merged into the Ceph Ansible playbooksCeph Ansible does a containerized deployment of the Ceph dashboard regardless of the Red HatCeph Storage deployment type bare metal or containers These four new roles were added ceph-

Red Hat Ceph Storage 40 Release Notes

6

grafana ceph-dashboard ceph-prometheus and ceph-node-exporter

Red Hat Ceph Storage Dashboard alerts

Red Hat Ceph Storage Dashboard supports alerts based on Ceph metrics and configured thresholdsThe Prometheus AlertManager configures gathers and triggers the alerts

Viewing the cluster hierarchy from the Dashboard

Red Hat Ceph Storage Dashboard provides the ability to view the cluster hierarchy For details see theViewing the CRUSH map section in the Dashboard Guide for Red Hat Ceph Storage 4

33 CEPH FILE SYSTEM

ceph -w now shows information about CephFS scrubs

Previously it was not possible to check the ongoing Ceph File System (CephFS) scrubs status asidefrom checking the Metadata server (MDS) logs With this update the ceph -w command showsinformation about active CephFS scrubs to better understand the status

ceph-mgr volumes module for managing CephFS exports

This release provides Ceph Manager (ceph-mgr) volumes module to manage Ceph File System(CephFS) exports The volumes module implements the following file system export abstractions

FS volumes an abstraction for CephFS file systems

FS subvolumes an abstraction for independent CephFS directory trees

FS subvolume groups an abstraction for a directory higher than FS subvolumes to effectpolicies such as file layouts across a set of subvolumes

In addition these new commands are now supported

fs subvolume ls for listing subvolumes

fs subvolumegroup ls for listing subvolumes groups

fs subvolume snapshot ls for listing subvolume snapshots

fs subvolumegroup snapshot ls for listing subvolume group snapshots

fs subvolume rm for removing snapshots

34 CEPH MEDIC

ceph-medic can check the health of Ceph running in containers

With this release the ceph-medic utility can now check the health of a Red Hat Ceph Storage clusterrunning within a container

35 ISCSI GATEWAY

Non-administrators can now be used for the ceph-iscsi service

As of Red Hat Ceph Storage 4 non-administrative Ceph users may be used for the ceph-iscsi service

CHAPTER 3 NEW FEATURES

7

As of Red Hat Ceph Storage 4 non-administrative Ceph users may be used for the ceph-iscsi serviceby setting the cluster_client_name in the etccephiscsi-gatewaycfg on all iSCSI Gateways Thisallows resources to be restricted based on users

Running Ceph iSCSI Gateways can now be removed

As of Red Hat Ceph Storage 4 running iSCSI Gateways can now be removed from a Ceph iSCSI clusterfor maintenance or to reallocate resources The gatewayrsquos iSCSI target and its portals will be stoppedand all iSCSI target objects for that gateway will be removed from the kernel and the gatewayconfiguration Removing gateways that are down is not yet supported

36 OBJECT GATEWAY

The Beast HTTP front end

In Red Hat Ceph Storage 4 the default HTTP front end for the Ceph Object Gateway is Beast TheBeast front end uses the BoostBeast library for HTTP parsing and the BoostAsio library forasynchronous IO For details see the Using the Beast front end in the Object Gateway Configurationand Administration Guide for Red Hat Ceph Storage 4

Support for S3 MFA-Delete

With this release the Ceph Object Gateway supports S3 MFA-Delete using Time-Based One-TimePassword (TOTP) one-time passwords as an authentication factor This feature adds security againstinappropriate data removal You can configure buckets to require a TOTP one-time token in addition tostandard S3 authentication to delete data

Addition of storage classes and lifecycle transitions to the Ceph Object Gateway

The Ceph Object Gateway now provides support for storage classes an S3 compatible representationof the Ceph Object Gatewayrsquos underlying placement targets and for lifecycle transitions a mechanismto migrate objects between classes Storage classes and lifecycle transitions provide a higher level ofcontrol over data placement for the applications that need it They have many potential uses includingperformance tuning for specific workloads as well as automatic migration of inactive data to coldstorage

Users can now create new IAM policies and roles using REST APIs

With the release of Red Hat Ceph Storage 4 REST APIs for IAM roles and user policies are now availablein the same namespace as S3 APIs and can be accessed using the same endpoint as S3 APIs in theCeph Object Gateway This allows end users to create new IAM policies and roles using REST APIs

37 PACKAGES

Ability to install a Ceph cluster using a web-based interface

With this release the Cockpit web-based interface is supported Cockpit allows you to install a Red HatCeph Storage 4 cluster and other components such as Metadata Servers Ceph clients or the CephObject Gateway on base-metal or in containers For details see the Installing Red Hat Ceph Storageusing the Cockpit Web User Interface chapter in the Red Hat Ceph Storage 4 Installation Guide Notethat minimal experience with Red Hat Ceph Storage is required

38 RADOS

Ceph on-wire encryption

Starting with Red Hat Ceph Storage 4 you can enable encryption for all Ceph traffic over the network

Red Hat Ceph Storage 40 Release Notes

8

with the introduction of the messenger version 2 protocol For details see the Ceph on-wire encryptionchapter in the Architecture Guide and Encryption in transit section in the Data Security and HardeningGuide for Red Hat Ceph Storage 4

OSD BlueStore is now fully supported

BlueStore is a new back end for the OSD daemons that allows for storing objects directly on the blockdevices Because BlueStore does not need any file system interface it improves performance of Cephstorage clusters To learn more about the BlueStore OSD back end see the OSD BlueStore chapter inthe Administration Guide for Red Hat Ceph Storage 4

Red Hat Enterprise Linux in FIPS mode

With this release you can install Red Hat Ceph Storage on Red Hat Enterprise Linux where the FIPSmode is enabled

Changes the ceph df output and a new ceph osd df command

The output of the ceph df command has been improved Notably the RAW USED and RAW USEDvalues now show the preallocated space for the db and wal BlueStore partitions The ceph osd dfcommand shows the OSD utilization stats such as the amount of written data

Asynchronous recovery for non-acting OSD sets

Previously recovery with Ceph was a synchronous process by blocking write operations to objects untilthose objects were recovered In this release the recovery process is now asynchronous by not blockingwrite operations to objects only in the non-acting set of OSDs This new feature requires having morethan the minimum number of replicas as to have enough OSDs in the non-acting set

The new configuration option osd_async_recovery_min_cost controls how much asynchronousrecovery to do The default value for this option is 100 A higher value means asynchronous recovery willbe less whereas a lower value means asynchronous recovery will be more

Configuration is now stored in Monitors accessible by using ceph config

In this release Red Hat Ceph Storage centralizes configuration in Monitors instead of using the Cephconfiguration file (cephconf) Previously changing configuration included manually updating cephconf distributing it to appropriate nodes and restarting all affected daemons Now the Monitorsmanage a configuration database that has the same semantic structure as cephconf The database isaccessible by the ceph config command Any changes to the configuration are applied to daemons orclients in the system immediately and restarting them is no longer needed Use the ceph config -hcommand for details on the available set of commands Note that a Ceph configuration file is stillrequired to identify the Monitors nodes

Placement groups can now be auto-scaled

Red Hat Ceph Storage 4 introduces the ability to auto-scale placement groups (PGs) The number ofplacement groups (PGs) in a pool plays a significant role in how a cluster peers distributes data andrebalances Auto-scaling the number of PGs can make managing the cluster easier The new pg-autoscaling command provides recommendations for scaling PGs or automatically scales PGs basedon how the cluster is being used For more details about auto-scaling PGs see the Auto-scalingplacement groups section in the Storage Strategies Guide for Red Hat Ceph Storage 4

Introduction of diskprediction module

The Red Hat Ceph Storage diskprediction module gathers metrics to predict disk failures before theyhappen The module has two modes cloud and local With this release only the local mode is supportedThe local mode does not require any external server for data analysis It uses an internal predictormodule for disk prediction service and then returns the disk prediction result to the Ceph system

CHAPTER 3 NEW FEATURES

9

To enable the diskprediction module

ceph mgr module enable diskprediction_local

To set the prediction mode

ceph config set global device_failure_prediction_mode local

To disable the diskprediction module

ceph config set global device_failure_prediction_mode none

New configurable option mon_memory_target

Red Hat Ceph Storage 4 introduces a new configurable option mon_memory_target used to set thetarget amount of bytes for Monitor memory usage It specifies the amount of memory to allocate andmanage using the priority cache tuner for the associated Monitor daemon caches The default value of mon_memory_target is set to 2 GiB and you can change it during runtime with

ceph config set global mon_memory_target size

Prior to this release as a cluster scaled the Monitor specific RSS usage exceeded the limits that wereset using the mon_osd_cache_size option which led to issues This enhancement allows for improvedmanagement of memory allocated to the monitor caches and keeps the usage within specified limits

39 BLOCK DEVICES (RBD)

Erasure coding for Ceph Block Device

Erasure coding for Ceph Block Device (RBD) is now fully supported This feature allows RBD to storetheir data in an erasure coded pool For details see the Erasure Coding with Overwrites section in theStorage Strategies for Red Hat Ceph Storage 4

RBD performance monitoring and metrics gathering tools

Red Hat Ceph Storage 4 now incorporates new Ceph Block Device performance monitoring utilities foraggregated RBD image metrics for IOPS throughput and latency Per-image RBD metrics are nowavailable using the Ceph Manager Prometheus module the Ceph Dashboard and the rbd CLI using the rbd perf image iostat or rbd perf image iotop commands

Cloned images can be created from non-primary images

Creating cloned child RBD images from mirrored non-primary parent image is now supportedPreviously cloning of mirrored images was only supported for primary images When cloning goldenimages for virtual machines this restriction prevented the creation of new cloned images from thegolden non-primary image This update removes this restriction and cloned images can be created fromnon-primary mirrored images

Segregating RBD images within isolated namespaces within the same pool

RBD images can now be segregated within isolated namespaces within the same pool When using CephBlock Devices directly without a higher-level system such as OpenStack or OpenShift ContainerStorage it was not possible to restrict user access to specific RBD images When combined with CephXcapabilities users can be restricted to specific pool namespaces to restrict access to RBD images

Red Hat Ceph Storage 40 Release Notes

10

Moving RBD images between different pools within the same cluster

This version of Red Hat Ceph Storage adds the ability to move RBD images between different poolswithin the same cluster For details see the Moving images between pools section in the Block DeviceGuide for Red Hat Ceph Storage 4

Long-running RBD operations can run in the background

Long-running RBD operations such as image removal or cloned image flattening can now bescheduled to run in the background RBD operations that involve iterating over every backing RADOSobject for the image can take a long time depending on the size of the image When using the CLI toperform one of these operations the rbd CLI is blocked until the operation is complete Theseoperations can now be scheduled to run by the Ceph Manager as a background task by using the ceph rbd task add commands The progress of these tasks is visible on the Ceph dashboard as well as byusing the CLI

310 RBD MIRRORING

Support for multiple active instances of RBD mirror daemon in a single storage cluster

Red Hat Ceph Storage 4 now supports deploying multiple active instances of the RBD mirror daemon ina single storage cluster This enables multiple RBD mirror daemons to perform replication for the RBDimages or pools using an algorithm for chunking the images across the number of active mirroringdaemons

CHAPTER 3 NEW FEATURES

11

CHAPTER 4 BUG FIXESThis section describes bugs with significant impact on users that were fixed in this release of Red HatCeph Storage In addition the section includes descriptions of fixed known issues found in previousversions

41 THE CEPH-ANSIBLE UTILITY

The Ansible playbook no longer takes hours to complete the fuser command

Previously on a system running thousands of processes the fuser command in the Ansible playbookcould take several minutes or hours to complete because it iterates over all PID present in the procdirectory Because of this the handler tasks take a long time to complete checking if a Ceph process isalready running With this update instead of using the fuser command the Ansible playbook now checksthe socket file in the procnetunix directory and the handler tasks that check the Ceph socketcompletes almost instantly

(BZ1717011)

The purge-docker-clusteryml Ansible playbook no longer fails

Previously the purge-docker-clusteryml Ansible playbook could fail when trying to unmap RADOSBlock Devices (RBDs) because either the binary was absent or because the Atomic host versionprovided was too old With this update Ansible now uses the sysfs method to unmap devices if thereare any and the purge-docker-clusteryml playbook no longer fails

(BZ1766064)

42 OBJECT GATEWAY

The Expiration Days S3 Lifecycle parameter can now be set to 0

The Ceph Object Gateway did not accept the value of 0 for the Expiration Days Lifecycleconfiguration parameter Consequently setting the expiration to 0 could not be used to triggerbackground delete operation of objects With this update Expiration Days can be set to 0 as expected

(BZ1493476)

Archive zone fetches the current version of the source objects

An object is synced from multiple source zones to the archive zone. This behavior can lead to different versions of the same object on the archive zone. This update eliminates duplicate versions from getting created by ensuring that the archive zone fetches the current version of the source object.

(BZ1760862)

4.3. RADOS

A message to set expected_num_objects is no longer shown when using BlueStore

With this update, a message that recommends setting the expected_num_objects parameter during the creation of BlueStore pools has been removed, because this message does not apply when using the BlueStore OSD back end.

(BZ1650922)


Deprecated JSON fields have been removed

This update removes deprecated fields from the JSON output of the ceph status command.

(BZ1739233)


CHAPTER 5. TECHNOLOGY PREVIEWS

This section provides an overview of Technology Preview features introduced or updated in this release of Red Hat Ceph Storage.

IMPORTANT

Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information on Red Hat Technology Preview features support scope, see https://access.redhat.com/support/offerings/techpreview

5.1. CEPH FILE SYSTEM

CephFS snapshots

Ceph File System (CephFS) supports taking snapshots as a Technology Preview. Snapshots create an immutable view of the file system at the point in time they are taken.
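As a minimal sketch, assuming a file system named cephfs mounted at /mnt/cephfs on a client, a snapshot is created by making a subdirectory in the hidden .snap directory:

# Ensure new snapshots are allowed on the file system (may already be enabled)
ceph fs set cephfs allow_new_snaps true

# Create a snapshot of the directory tree
mkdir /mnt/cephfs/.snap/my-snapshot

# Remove the snapshot when it is no longer needed
rmdir /mnt/cephfs/.snap/my-snapshot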

5.2. BLOCK DEVICES (RBD)

Mapping RBD images to NBD images

The rbd-nbd utility maps RADOS Block Device (RBD) images to Network Block Devices (NBD) and enables Ceph clients to access volumes and images in Kubernetes environments. To use rbd-nbd, install the rbd-nbd package. For details, see the rbd-nbd(7) manual page.
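A brief sketch of the basic usage, with an assumed pool rbdpool and image image1:

# Install the utility on the client node
yum install rbd-nbd

# Map the image to a local NBD device; the command prints the device name, for example /dev/nbd0
rbd-nbd map rbdpool/image1

# Unmap the device when done
rbd-nbd unmap /dev/nbd0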

5.3. OBJECT GATEWAY

Object Gateway archive site

With this release, an archive site is supported as a Technology Preview. The archive site allows you to have a history of versions of S3 objects that can only be eliminated through the gateways associated with the archive zone. Including an archive zone in a multizone configuration allows you to have the flexibility of an S3 object history in only one zone, while saving the space that the replicas of the versioned S3 objects would consume in the rest of the zones.
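A hedged sketch of adding an archive zone to an existing zonegroup follows; the zone name and endpoint are assumptions:

# Create a zone with the archive tier type in the existing zonegroup
radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=archive \
    --endpoints=http://archive.example.com:8080 --tier-type=archive

# Commit the period so the new zone takes effect
radosgw-admin period update --commit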

Tiering within a cluster by disk type

This release adds the ability to tier within a cluster by disk type as a Technology Preview, by using the capabilities for mapping, for example, pools to placement targets and storage classes. Using lifecycle transition rules, it is possible to cause objects to migrate between storage pools based on policy.
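As a rough sketch of the mechanism, a storage class can be added to a placement target and backed by its own data pool; the class and pool names here are examples:

# Define a COLD storage class on the zonegroup placement target
radosgw-admin zonegroup placement add --rgw-zonegroup default \
    --placement-id default-placement --storage-class COLD

# Back the storage class with a dedicated data pool in the zone
radosgw-admin zone placement add --rgw-zone default \
    --placement-id default-placement --storage-class COLD \
    --data-pool default.rgw.cold.data

# Commit the period if running a multisite configuration
radosgw-admin period update --commit

A lifecycle Transition rule that names the COLD storage class then moves objects into that pool based on policy.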

S3 bucket notifications

S3 bucket notifications are now supported as a Technology Preview. When certain events are triggered on an S3 bucket, the notifications can be sent from the Ceph Object Gateway to HTTP, Advanced Message Queuing Protocol (AMQP) 0.9.1, and Kafka endpoints. Additionally, the notifications can be stored in a "PubSub" zone instead of, or in addition to, sending them to the endpoints. "PubSub" is a publish-subscribe model that enables recipients to pull notifications from Ceph.

To use the S3 notifications, install the librabbitmq and librdkafka packages.


CHAPTER 6. KNOWN ISSUES

This section documents known issues found in this release of Red Hat Ceph Storage.

6.1. THE CEPH-ANSIBLE UTILITY

Red Hat Ceph Storage installation on Red Hat OpenStack Platform fails

The ceph-ansible utility becomes unresponsive when attempting to install Red Hat Ceph Storage together with the Red Hat OpenStack Platform 16, and it returns an error similar to the following one:

Error: unable to exec into ceph-mon-dcn1-computehci1-2: no container with name or ID ceph-mon-dcn1-computehci1-2 found: no such container

To work around this issue, update the following part of the handler_osds.yml file located in the ceph-ansible/roles/ceph-handler/tasks/ directory:

- name: unset noup flag
  command: "{{ container_exec_cmd | default('') }} ceph --cluster {{ cluster }} osd unset noup"
  delegate_to: "{{ groups[mon_group_name][0] }}"
  changed_when: False

To:

- name: unset noup flag
  command: "{{ hostvars[groups[mon_group_name][0]]['container_exec_cmd'] | default('') }} ceph --cluster {{ cluster }} osd unset noup"
  delegate_to: "{{ groups[mon_group_name][0] }}"
  changed_when: False

Then start the installation process again.

(BZ1792320)

Ansible does not unset the norebalance flag after it completes

The rolling-update.yml Ansible playbook does not unset the norebalance flag after it completes. To work around this issue, unset the flag manually.
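For example, run the following on a node with the client admin keyring:

# Clear the flag left behind by the playbook
ceph osd unset norebalance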

(BZ1793564)

Ansible fails to upgrade a multisite Ceph Object Gateway when the Dashboard is enabled

When the Red Hat Ceph Storage Dashboard is enabled, an attempt to use Ansible to upgrade to a further version of Red Hat Ceph Storage fails when trying to upgrade the secondary Ceph Object Gateway site in a multisite setup. This bug does not occur on the primary site, or if the Dashboard is not enabled.

(BZ1794351)

6.2. CEPH MANAGEMENT DASHBOARD

The Dashboard does not show correct values for certain configuration options

Both the Red Hat Ceph Storage Dashboard and the underlying ceph config show command do not return the current values for certain configuration options, such as fsid. This is probably because certain core options that are not meant for further modification after deploying a cluster are not updated, and a default value is used. As a result, the Dashboard does not show correct values for certain configuration options.

(BZ1765493 BZ1772310)

NFS Ganesha in Dashboard

The Red Hat Ceph Storage Dashboard currently does not support managing NFS Ganesha.

(BZ1772316)

Dashboard does not support email verification

The Red Hat Ceph Storage Dashboard does not support any email verification when changing a user's password. This behavior is intentional, because the Dashboard supports Single Sign-On (SSO), and this feature can be delegated to the SSO provider.

(BZ1778608)

The OSD histogram graph for read and write operations is not clear

The Red Hat Ceph Storage Dashboard does not display any numbers or description in the OSD histogram graph for read and write operations, and therefore the graph is not clear.

(BZ1779170)

Dashboard returns an error when a ceph-mgr module is enabled from the ceph CLI

When enabling a Ceph Manager (ceph-mgr) module, such as telemetry, from the ceph CLI, the Red Hat Ceph Storage Dashboard displays the following error message:

0 - Unknown Error

In addition, the Dashboard does not mark the module as enabled until the Refresh button is clicked.

(BZ1785077)

The Dashboard allows modifying the LUN ID and WWN which can lead to data corruption

The Red Hat Ceph Storage Dashboard allows you to modify the LUN ID and its World Wide Name (WWN), which is not required after creating the LUN. Moreover, editing these parameters can be dangerous for certain initiators that do not fully support this feature by default. Consequently, editing these parameters after creating them can lead to data corruption. To avoid this, do not modify the LUN ID and WWN in the Dashboard.

(BZ1786455)

The Dashboard does not provide correct Ceph iSCSI error messages

If Ceph iSCSI returns an error, for example the HTTP 400 code, when trying to delete an iSCSI target while a user is logged in, the Red Hat Ceph Storage Dashboard does not forward that error code and message to the Dashboard user using the pop-up notifications, but displays a generic 500 Internal Server Error. Consequently, the message that the Dashboard provides is not informative and even misleading; an expected behavior (users cannot delete a busy resource) is perceived as an operational failure (internal server error). To work around this issue, see the Dashboard logs.

(BZ1786457)


The Dashboard requires iptables rules to be disabled

The Red Hat Ceph Storage Dashboard is unable to perform any iSCSI operations, such as creating a gateway, unless all iptables rules are manually disabled on the Ceph iSCSI node. To do so, use the following command as root or the sudo user:

iptables -F

Note that after a reboot, the rules are enabled again. Either disable them again, or delete them permanently.

(BZ1792818)

6.3. PACKAGES

Current version of Grafana causes certain bugs in the Dashboard

Red Hat Ceph Storage 4 uses the Grafana version 5.2.4. This version causes the following bugs in the Red Hat Ceph Storage Dashboard:

When navigating to Pools > Overall Performance, Grafana returns the following error:

TypeError: l.c[t.type] is undefined true

When viewing a pool's performance details (Pools > select a pool from the list > Performance Details), the Grafana bar is displayed along with other graphs and values, but it should not be there.

These bugs will be fixed after rebasing to a newer Grafana version in a future release of Red Hat Ceph Storage.

(BZ1786107 BZ1762197 BZ1765536)

6.4. RED HAT ENTERPRISE LINUX

Ansible cannot start NFS Ganesha if SELinux is in enforcing mode

When using SELinux in enforcing mode on Red Hat Enterprise Linux 8.1, the ceph-ansible utility fails to start the NFS Ganesha service, because the SELinux policy currently does not allow creating a directory required for NFS Ganesha.

(BZ1794027 BZ1796160)


CHAPTER 7. DEPRECATED FUNCTIONALITY

This section provides an overview of functionality that has been deprecated in all minor releases up to this release of Red Hat Ceph Storage.

Ubuntu is no longer supported

Installing a Red Hat Ceph Storage 4 cluster on Ubuntu is no longer supported. Use Red Hat Enterprise Linux as the underlying operating system.

7.1. THE CEPH-ANSIBLE UTILITY

Configuring iSCSI gateway using ceph-ansible is no longer supported

Configuring the Ceph iSCSI gateway by using the ceph-ansible utility is no longer supported. Use ceph-ansible to install the gateway, and then use either the gwcli utility or the Red Hat Ceph Storage Dashboard to configure the gateway. For details, see the Using the Ceph iSCSI Gateway chapter in the Block Device Guide for Red Hat Ceph Storage 4.

7.2. THE CEPH-DISK UTILITY

ceph-disk is deprecated

With this release, the ceph-disk utility is no longer supported. The ceph-volume utility is used instead. For details, see the Why does ceph-volume replace `ceph-disk` section in the Administration Guide for Red Hat Ceph Storage 4.

7.3. RADOS

FileStore is no longer supported in production

The FileStore OSD back end is now deprecated because the new BlueStore back end is now fully supported in production. For details, see the How to migrate the object store from FileStore to BlueStore section in the Red Hat Ceph Storage 4 Installation Guide.

The Ceph configuration file is now deprecated

The Ceph configuration file (ceph.conf) is now deprecated in favor of new centralized configuration stored in Monitors. For details, see the Configuration is now stored in Monitors accessible by using `ceph config` release note.

CHAPTER 7 DEPRECATED FUNCTIONALITY

19

CHAPTER 8. SOURCES

The updated Red Hat Ceph Storage source code packages are available at the following location: http://ftp.redhat.com/redhat/linux/enterprise/7Server/en/RHCEPH/SRPMS/


Page 8: Red Hat Ceph Storage 4...May 20, 2020  · With this release, the Ceph dashboard installation code was merged into the Ceph Ansible playbooks. Ceph Ansible does a containerized deployment

CHAPTER 3 NEW FEATURESThis section lists all major updates enhancements and new features introduced in this release of RedHat Ceph Storage

The main features added by this release are

General availability of the BlueStore OSD back end

Ceph Management Dashboard

Ceph Object Gateway Beast Asio web server

Erasure coding for Ceph Block Device

Auto-scaling placement groups

Web-based installation interface

On-wire encryption

RBD performance monitoring and metrics gathering tools

Red Hat Enterprise Linux in FIPS mode

31 THE CEPH-ANSIBLE UTILITY

Ceph OSDs created with ceph-disk are migrated to ceph-volume during upgrade

When upgrading to Red Hat Ceph Storage 4 all running Ceph OSDs previously created by the ceph-disk utility will be migrated to the ceph-volume utility because ceph-disk has been deprecated in thisrelease

For bare-metal and container deployments of Red Hat Ceph Storage the ceph-volume utility does asimple scan and takes over the existing Ceph OSDs deployed by the ceph-disk utility Also do not usethese migrated devices in configurations for subsequent deployments Note that you cannot create anynew Ceph OSDs during the upgrade process

After the upgrade all Ceph OSDs created by ceph-disk will start and operate like any Ceph OSDscreated by ceph-volume

Ansible playbooks for scaling of all Ceph services

Previously ceph-ansible playbooks offered limited scale up and scale down features only to core Cephproducts such as Monitors and OSDs With this update additional Ansible playbooks allow for scaling ofall Ceph services

Ceph iSCSI packages merged into single package

The ceph-iscsi-cli and ceph-iscsi-config packages have been merged to one package named ceph-iscsi

The nfs-ganesha service is now supported as a standalone deployment

Red Hat Openstack Directory (OSPd) requires the ceph-ansible utility to be able to deploy the nfs-ganesha service and configure it so that it points to an external unmanaged pre-existing Ceph clusterAs of Red Hat Ceph Storage 4 ceph-ansible allows the deployment of an internal nfs-ganesha service

CHAPTER 3 NEW FEATURES

5

with an external Ceph cluster

Ceph container can now write logs to a respective daemon file

The previous way of logging for containerized Ceph environments did not allow limiting the journalctloutput when looking at log data by using the sosreport collection With this release logging can beenabled or disabled for a particular Ceph daemon with the following command

ceph config set daemonid log_to_file true

Where daemon is the type of the daemon and id is its ID For example to enable logging for the Monitordaemon with ID mon0

ceph config set monmon0 log_to_file true

This new feature makes debugging easier

Ability to configure Ceph Object Gateway to use TLS encryption

This release of Red Hat Ceph Storage provides the ability to configure the Ceph Object Gatewaylistener with an SSL certificate for TLS encryption by using the radosgw_frontend_ssl_certificatevariable to secure the Transmission Control Protocol (TCP) traffic

Ansible playbook for migrating OSDs from FileStore to BlueStore

A new Ansible playbook has been added to migrate OSDs from FileStore to BlueStore The object storemigration is not done as part of the upgrade process to Red Hat Ceph Storage 4 Do the migration afterthe upgrade completes For details see the How to migrate the object store from FileStore to BlueStoresection in the Red Hat Ceph Storage Administration Guide

32 CEPH MANAGEMENT DASHBOARD

Improvements to information for pool usage

With this enhancement valuable information was added to the pools table The following columns wereadded usage read bytes write bytes read operations and write operations Also the PlacementGroups column was renamed to Pg Status

Red Hat Ceph Storage Dashboard alerts

Red Hat Ceph Storage Dashboard supports alerts based on Ceph metrics and configured thresholdsThe Prometheus AlertManager configures gathers and triggers the alerts The alerts are displayed inthe Dashboard as a pop-up notifications in the upper-right corner You can view details of recent alertsin Cluster gt Alerts You can configure the alerts only in Prometheus but you can temporarily mute themfrom the Dashboard by creating Alert Silences in Cluster gt Silences

Displaying and hiding Ceph components in Dashboard

In the Red Hat Ceph Storage Dashboard you can display or hide Ceph components such as Ceph iSCSIRBD mirroring Ceph Block Devices Ceph File System or Ceph Object Gateway This feature allows youto hide components that are not configured

Ceph dashboard has been added to the Ceph Ansible playbooks

With this release the Ceph dashboard installation code was merged into the Ceph Ansible playbooksCeph Ansible does a containerized deployment of the Ceph dashboard regardless of the Red HatCeph Storage deployment type bare metal or containers These four new roles were added ceph-

Red Hat Ceph Storage 40 Release Notes

6

grafana ceph-dashboard ceph-prometheus and ceph-node-exporter

Red Hat Ceph Storage Dashboard alerts

Red Hat Ceph Storage Dashboard supports alerts based on Ceph metrics and configured thresholdsThe Prometheus AlertManager configures gathers and triggers the alerts

Viewing the cluster hierarchy from the Dashboard

Red Hat Ceph Storage Dashboard provides the ability to view the cluster hierarchy For details see theViewing the CRUSH map section in the Dashboard Guide for Red Hat Ceph Storage 4

33 CEPH FILE SYSTEM

ceph -w now shows information about CephFS scrubs

Previously it was not possible to check the ongoing Ceph File System (CephFS) scrubs status asidefrom checking the Metadata server (MDS) logs With this update the ceph -w command showsinformation about active CephFS scrubs to better understand the status

ceph-mgr volumes module for managing CephFS exports

This release provides Ceph Manager (ceph-mgr) volumes module to manage Ceph File System(CephFS) exports The volumes module implements the following file system export abstractions

FS volumes an abstraction for CephFS file systems

FS subvolumes an abstraction for independent CephFS directory trees

FS subvolume groups an abstraction for a directory higher than FS subvolumes to effectpolicies such as file layouts across a set of subvolumes

In addition these new commands are now supported

fs subvolume ls for listing subvolumes

fs subvolumegroup ls for listing subvolumes groups

fs subvolume snapshot ls for listing subvolume snapshots

fs subvolumegroup snapshot ls for listing subvolume group snapshots

fs subvolume rm for removing snapshots

34 CEPH MEDIC

ceph-medic can check the health of Ceph running in containers

With this release the ceph-medic utility can now check the health of a Red Hat Ceph Storage clusterrunning within a container

35 ISCSI GATEWAY

Non-administrators can now be used for the ceph-iscsi service

As of Red Hat Ceph Storage 4 non-administrative Ceph users may be used for the ceph-iscsi service

CHAPTER 3 NEW FEATURES

7

As of Red Hat Ceph Storage 4 non-administrative Ceph users may be used for the ceph-iscsi serviceby setting the cluster_client_name in the etccephiscsi-gatewaycfg on all iSCSI Gateways Thisallows resources to be restricted based on users

Running Ceph iSCSI Gateways can now be removed

As of Red Hat Ceph Storage 4 running iSCSI Gateways can now be removed from a Ceph iSCSI clusterfor maintenance or to reallocate resources The gatewayrsquos iSCSI target and its portals will be stoppedand all iSCSI target objects for that gateway will be removed from the kernel and the gatewayconfiguration Removing gateways that are down is not yet supported

36 OBJECT GATEWAY

The Beast HTTP front end

In Red Hat Ceph Storage 4 the default HTTP front end for the Ceph Object Gateway is Beast TheBeast front end uses the BoostBeast library for HTTP parsing and the BoostAsio library forasynchronous IO For details see the Using the Beast front end in the Object Gateway Configurationand Administration Guide for Red Hat Ceph Storage 4

Support for S3 MFA-Delete

With this release the Ceph Object Gateway supports S3 MFA-Delete using Time-Based One-TimePassword (TOTP) one-time passwords as an authentication factor This feature adds security againstinappropriate data removal You can configure buckets to require a TOTP one-time token in addition tostandard S3 authentication to delete data

Addition of storage classes and lifecycle transitions to the Ceph Object Gateway

The Ceph Object Gateway now provides support for storage classes an S3 compatible representationof the Ceph Object Gatewayrsquos underlying placement targets and for lifecycle transitions a mechanismto migrate objects between classes Storage classes and lifecycle transitions provide a higher level ofcontrol over data placement for the applications that need it They have many potential uses includingperformance tuning for specific workloads as well as automatic migration of inactive data to coldstorage

Users can now create new IAM policies and roles using REST APIs

With the release of Red Hat Ceph Storage 4 REST APIs for IAM roles and user policies are now availablein the same namespace as S3 APIs and can be accessed using the same endpoint as S3 APIs in theCeph Object Gateway This allows end users to create new IAM policies and roles using REST APIs

37 PACKAGES

Ability to install a Ceph cluster using a web-based interface

With this release the Cockpit web-based interface is supported Cockpit allows you to install a Red HatCeph Storage 4 cluster and other components such as Metadata Servers Ceph clients or the CephObject Gateway on base-metal or in containers For details see the Installing Red Hat Ceph Storageusing the Cockpit Web User Interface chapter in the Red Hat Ceph Storage 4 Installation Guide Notethat minimal experience with Red Hat Ceph Storage is required

38 RADOS

Ceph on-wire encryption

Starting with Red Hat Ceph Storage 4 you can enable encryption for all Ceph traffic over the network

Red Hat Ceph Storage 40 Release Notes

8

with the introduction of the messenger version 2 protocol For details see the Ceph on-wire encryptionchapter in the Architecture Guide and Encryption in transit section in the Data Security and HardeningGuide for Red Hat Ceph Storage 4

OSD BlueStore is now fully supported

BlueStore is a new back end for the OSD daemons that allows for storing objects directly on the blockdevices Because BlueStore does not need any file system interface it improves performance of Cephstorage clusters To learn more about the BlueStore OSD back end see the OSD BlueStore chapter inthe Administration Guide for Red Hat Ceph Storage 4

Red Hat Enterprise Linux in FIPS mode

With this release you can install Red Hat Ceph Storage on Red Hat Enterprise Linux where the FIPSmode is enabled

Changes the ceph df output and a new ceph osd df command

The output of the ceph df command has been improved Notably the RAW USED and RAW USEDvalues now show the preallocated space for the db and wal BlueStore partitions The ceph osd dfcommand shows the OSD utilization stats such as the amount of written data

Asynchronous recovery for non-acting OSD sets

Previously recovery with Ceph was a synchronous process by blocking write operations to objects untilthose objects were recovered In this release the recovery process is now asynchronous by not blockingwrite operations to objects only in the non-acting set of OSDs This new feature requires having morethan the minimum number of replicas as to have enough OSDs in the non-acting set

The new configuration option osd_async_recovery_min_cost controls how much asynchronousrecovery to do The default value for this option is 100 A higher value means asynchronous recovery willbe less whereas a lower value means asynchronous recovery will be more

Configuration is now stored in Monitors accessible by using ceph config

In this release Red Hat Ceph Storage centralizes configuration in Monitors instead of using the Cephconfiguration file (cephconf) Previously changing configuration included manually updating cephconf distributing it to appropriate nodes and restarting all affected daemons Now the Monitorsmanage a configuration database that has the same semantic structure as cephconf The database isaccessible by the ceph config command Any changes to the configuration are applied to daemons orclients in the system immediately and restarting them is no longer needed Use the ceph config -hcommand for details on the available set of commands Note that a Ceph configuration file is stillrequired to identify the Monitors nodes

Placement groups can now be auto-scaled

Red Hat Ceph Storage 4 introduces the ability to auto-scale placement groups (PGs) The number ofplacement groups (PGs) in a pool plays a significant role in how a cluster peers distributes data andrebalances Auto-scaling the number of PGs can make managing the cluster easier The new pg-autoscaling command provides recommendations for scaling PGs or automatically scales PGs basedon how the cluster is being used For more details about auto-scaling PGs see the Auto-scalingplacement groups section in the Storage Strategies Guide for Red Hat Ceph Storage 4

Introduction of diskprediction module

The Red Hat Ceph Storage diskprediction module gathers metrics to predict disk failures before theyhappen The module has two modes cloud and local With this release only the local mode is supportedThe local mode does not require any external server for data analysis It uses an internal predictormodule for disk prediction service and then returns the disk prediction result to the Ceph system

CHAPTER 3 NEW FEATURES

9

To enable the diskprediction module

ceph mgr module enable diskprediction_local

To set the prediction mode

ceph config set global device_failure_prediction_mode local

To disable the diskprediction module

ceph config set global device_failure_prediction_mode none

New configurable option mon_memory_target

Red Hat Ceph Storage 4 introduces a new configurable option mon_memory_target used to set thetarget amount of bytes for Monitor memory usage It specifies the amount of memory to allocate andmanage using the priority cache tuner for the associated Monitor daemon caches The default value of mon_memory_target is set to 2 GiB and you can change it during runtime with

ceph config set global mon_memory_target size

Prior to this release as a cluster scaled the Monitor specific RSS usage exceeded the limits that wereset using the mon_osd_cache_size option which led to issues This enhancement allows for improvedmanagement of memory allocated to the monitor caches and keeps the usage within specified limits

39 BLOCK DEVICES (RBD)

Erasure coding for Ceph Block Device

Erasure coding for Ceph Block Device (RBD) is now fully supported This feature allows RBD to storetheir data in an erasure coded pool For details see the Erasure Coding with Overwrites section in theStorage Strategies for Red Hat Ceph Storage 4

RBD performance monitoring and metrics gathering tools

Red Hat Ceph Storage 4 now incorporates new Ceph Block Device performance monitoring utilities foraggregated RBD image metrics for IOPS throughput and latency Per-image RBD metrics are nowavailable using the Ceph Manager Prometheus module the Ceph Dashboard and the rbd CLI using the rbd perf image iostat or rbd perf image iotop commands

Cloned images can be created from non-primary images

Creating cloned child RBD images from mirrored non-primary parent image is now supportedPreviously cloning of mirrored images was only supported for primary images When cloning goldenimages for virtual machines this restriction prevented the creation of new cloned images from thegolden non-primary image This update removes this restriction and cloned images can be created fromnon-primary mirrored images

Segregating RBD images within isolated namespaces within the same pool

RBD images can now be segregated within isolated namespaces within the same pool When using CephBlock Devices directly without a higher-level system such as OpenStack or OpenShift ContainerStorage it was not possible to restrict user access to specific RBD images When combined with CephXcapabilities users can be restricted to specific pool namespaces to restrict access to RBD images

Red Hat Ceph Storage 40 Release Notes

10

Moving RBD images between different pools within the same cluster

This version of Red Hat Ceph Storage adds the ability to move RBD images between different poolswithin the same cluster For details see the Moving images between pools section in the Block DeviceGuide for Red Hat Ceph Storage 4

Long-running RBD operations can run in the background

Long-running RBD operations such as image removal or cloned image flattening can now bescheduled to run in the background RBD operations that involve iterating over every backing RADOSobject for the image can take a long time depending on the size of the image When using the CLI toperform one of these operations the rbd CLI is blocked until the operation is complete Theseoperations can now be scheduled to run by the Ceph Manager as a background task by using the ceph rbd task add commands The progress of these tasks is visible on the Ceph dashboard as well as byusing the CLI

310 RBD MIRRORING

Support for multiple active instances of RBD mirror daemon in a single storage cluster

Red Hat Ceph Storage 4 now supports deploying multiple active instances of the RBD mirror daemon ina single storage cluster This enables multiple RBD mirror daemons to perform replication for the RBDimages or pools using an algorithm for chunking the images across the number of active mirroringdaemons

CHAPTER 3 NEW FEATURES

11

CHAPTER 4 BUG FIXESThis section describes bugs with significant impact on users that were fixed in this release of Red HatCeph Storage In addition the section includes descriptions of fixed known issues found in previousversions

41 THE CEPH-ANSIBLE UTILITY

The Ansible playbook no longer takes hours to complete the fuser command

Previously on a system running thousands of processes the fuser command in the Ansible playbookcould take several minutes or hours to complete because it iterates over all PID present in the procdirectory Because of this the handler tasks take a long time to complete checking if a Ceph process isalready running With this update instead of using the fuser command the Ansible playbook now checksthe socket file in the procnetunix directory and the handler tasks that check the Ceph socketcompletes almost instantly

(BZ1717011)

The purge-docker-clusteryml Ansible playbook no longer fails

Previously the purge-docker-clusteryml Ansible playbook could fail when trying to unmap RADOSBlock Devices (RBDs) because either the binary was absent or because the Atomic host versionprovided was too old With this update Ansible now uses the sysfs method to unmap devices if thereare any and the purge-docker-clusteryml playbook no longer fails

(BZ1766064)

42 OBJECT GATEWAY

The Expiration Days S3 Lifecycle parameter can now be set to 0

The Ceph Object Gateway did not accept the value of 0 for the Expiration Days Lifecycleconfiguration parameter Consequently setting the expiration to 0 could not be used to triggerbackground delete operation of objects With this update Expiration Days can be set to 0 as expected

(BZ1493476)

Archive zone fetches the current version of the source objects

An object is synced from multiple source zones to archive zone This behavior can lead to differentversions of the same object on the archive zone This update eliminates duplicate versions from gettingcreated by ensuring that the archive zone fetches the current version of the source object

(BZ1760862)

43 RADOS

A message to set expected_num_objects is no longer shown when using BlueStore

With this update a message that recommends setting the expected_num_obejcts parameter duringthe creation of the BlueStore pools has been removed because this message does not apply when usingthe BlueStore OSD back end

(BZ1650922)

Red Hat Ceph Storage 40 Release Notes

12

Deprecated JSON fields have been removed

This update removes deprecated fields from the JSON output of the ceph status command

(BZ1739233)

CHAPTER 4 BUG FIXES

13

CHAPTER 5 TECHNOLOGY PREVIEWSThis section provides an overview of Technology Preview features introduced or updated in this releaseof Red Hat Ceph Storage

IMPORTANT

Technology Preview features are not supported with Red Hat production service levelagreements (SLAs) might not be functionally complete and Red Hat does notrecommend to use them for production These features provide early access toupcoming product features enabling customers to test functionality and providefeedback during the development process

For more information on Red Hat Technology Preview features support scope seehttpsaccessredhatcomsupportofferingstechpreview

51 CEPH FILE SYSTEM

CephFS snapshots

Ceph File System (CephFS) supports taking snapshots as a Technology Preview Snapshots create animmutable view of the file system at the point in time they are taken

52 BLOCK DEVICES (RBD)

Mapping RBD images to NBD images

The rbd-nbd utility maps RADOS Block Device (RBD) images to Network Block Devices (NBD) andenables Ceph clients to access volumes and images in Kubernetes environments To use rbd-nbd installthe rbd-nbd package For details see the rbd-nbd(7) manual page

53 OBJECT GATEWAY

Object Gateway archive site

With this release an archive site is supported as a Technology Preview The archive site allows you tohave a history of versions of S3 objects that can only be eliminated through the gateways associatedwith the archive zone Including an archive zone in a multizone configuration allows you to have theflexibility of an S3 object history in only one zone while saving the space that the replicas of the versionsS3 objects would consume in the rest of the zones

Tiering within a cluster by disk type

This release adds the ability to tier within a cluster by disk type as a Technology Preview by using thecapabilities for mapping for example pools to placement targets and storage classes Using lifecycletransition rules it is possible to cause objects to migrate between storage pools based on policy

S3 bucket notifications

S3 bucket notifications are now supported as a Technology Preview When certain events are triggeredon an S3 bucket the notifications can be sent from the Ceph Object Gateway to HTTP AdvancedMessage Queuing Protocol (AMQP) 91 and Kafka endpoints Additionally the notifications can bestored in a ldquoPubSubrdquo zone instead of or in addition to sending them to the endpoints ldquoPubSubrdquo is apublish-subscribe model that enables recipients to pull notifications from Ceph

Red Hat Ceph Storage 40 Release Notes

14

To use the S3 notifications install the librabbitmq and librdkafka packages

CHAPTER 5 TECHNOLOGY PREVIEWS

15

CHAPTER 6 KNOWN ISSUESThis section documents known issues found in this release of Red Hat Ceph Storage

61 THE CEPH-ANSIBLE UTILITY

Red Hat Ceph Storage installation on Red Hat OpenStack Platform fails

The ceph-ansible utility becomes unresponsive when attempting to install Red Hat Ceph Storagetogether with the Red Hat OpenStack Platform 16 and it returns an error similar to the following one

Error unable to exec into ceph-mon-dcn1-computehci1-2 no container with name or ID ceph-mon-dcn1-computehci1-2 found no such container

To work around this issue update the following part of the handler_osdsyml file located in the ceph-ansiblerolesceph-handlertasks directory

- name unset noup flag command container_exec_cmd | default() ceph --cluster cluster osd unset noup delegate_to groups[mon_group_name][0] changed_when False

To

- name unset noup flag command hostvars[groups[mon_group_name][0]][container_exec_cmd] | default() ceph --cluster cluster osd unset noup delegate_to groups[mon_group_name][0] changed_when False

And start the installation process again

(BZ1792320)

Ansible does not unset the norebalance flag after it completes

The rolling-updateyml Ansible playbook does not unset the norebalance flag after it completes Towork around this issue unset the flag manually

(BZ1793564)

Ansible fails to upgrade a multisite Ceph Object Gateway when the Dashboard is enabled

When the Red Hat Ceph Storage Dashboard is enabled an attempt to use Ansible to upgrade to afurther version of Red Hat Ceph Storage fails when trying to upgrade the secondary Ceph ObjectGateway site in a multisite setup This bug does not occur on the primary site or if the Dashboard is notenabled

(BZ1794351)

62 CEPH MANAGEMENT DASHBOARD

The Dashboard does not show correct values for certain configuration options

Both the Red Hat Ceph Storage Dashboard and the underlying ceph config show command do not

Red Hat Ceph Storage 40 Release Notes

16

return the current values for certain configuration options such as fsid This is probably because certaincore options that are not meant for further modification after deploying a cluster are not updated and adefault value is used As a result the Dashboard does not show correct values for certain configurationoptions

(BZ1765493 BZ1772310)

NFS Ganesha in Dashboard

The Red Hat Ceph Storage Dashboard currently does not support managing NFS Ganesha

(BZ1772316)

Dashboard does not support email verification

The Red Hat Ceph Storage Dashboard does not support any email verification when changing a userspassword This behavior is intentional because the Dashboard supports Single Sign-On (SSO) and thisfeature can be delegated to the SSO provider

(BZ1778608)

The OSD histogram graph for read and write operations is not clear

The Red Hat Ceph Storage Dashboard does not display any numbers or description in the OSDhistogram graph for read and write operations and therefore the graph is not clear

(BZ1779170)

Dashboard returns an error when a ceph-mgr module is enabled from the ceph CLI

When enabling a Ceph Manager (ceph-mgr) module such as telemetry from the ceph CLI the Red HatCeph Storage Dashboard displays the following error message

0 - Unknown Error

In addition the Dashboard does not mark the module as enabled until the Refresh button is clicked

(BZ1785077)

The Dashboard allows modifying the LUN ID and WWN which can lead to data corruption

The Red Hat Ceph Storage Dashboard allows you to modify the LUN ID and its World Wide Name(WWN) which is not required after creating the LUN Moreover editing these parameters can bedangerous for certain initiators that do not fully support this feature by default Consequently editingthese parameters after creating them can lead to data corruption To avoid this do not modify the LUNID and WWN in the Dashboard

(BZ1786455)

The Dashboard does not provide correct Ceph iSCSI error messages

If the Ceph iSCSI returns an error for example the HTTP 400 code when trying to delete an iSCSItarget while a user is logged in the Red Hat Ceph Storage Dashboard does not forward that error codeand message to the Dashboard user using the pop-up notifications but displays a generic 500 InternalServer Error Consequently the message that the Dashboard provides is not informative and evenmisleading an expected behavior (users cannot delete a busy resource) is perceived as an operationalfailure (internal server error) To work around this issue see the Dashboard logs

(BZ1786457)

CHAPTER 6 KNOWN ISSUES

17

The Dashboard requires to disable iptables rules

The Red Hat Ceph Storage Dashboard is unable to perform any iSCSI operations such as creating agateway unless all iptables rules are manually disabled on the Ceph iSCSI node To do so use thefollowing command as root or the sudo user

iptables -F

Note that after a reboot the rules are enabled again Either disable them again or delete thempermanently

(BZ1792818)

63 PACKAGES

Current version of Grafana causes certain bugs in the Dashboard

Red Hat Ceph Storage 4 uses the Grafana version 524 This version causes the following bugs in theRed Hat Ceph Storage Dashboard

When navigating to Pools gt Overall Performance Grafana returns the following error

TypeError lc[ttype] is undefinedtrue

When viewing a poolrsquos performance details (Pools gt select a pool from the list gt PerformanceDetails) the Grafana bar is displayed along with other graphs and values but it should not bethere

These bugs will be fixed after rebasing to a newer Grafana version in a future release of Red HatCeph Storage

(BZ1786107 BZ1762197 BZ1765536)

64 RED HAT ENTERPRISE LINUX

Ansible cannot start NFS Ganesha if SELinux is in enforcing mode

When using SELinux in enforcing mode on Red Hat Enterprise Linux 81 the ceph-ansible utility fails tostart the NFS Ganesha service because SELinux policy currently does not allow creating a directoryrequired for NFS Ganesha

(BZ1794027 BZ1796160)

Red Hat Ceph Storage 40 Release Notes

18

CHAPTER 7 DEPRECATED FUNCTIONALITYThis section provides an overview of functionality that has been deprecated in all minor releases up tothis release of Red Hat Ceph Storage

Ubuntu is no longer supported

Installing a Red Hat Ceph Storage 4 cluster on Ubuntu is no longer supported Use Red HatEnterprise Linux as the underlying operating system

71 THE CEPH-ANSIBLE UTILITY

Configuring iSCSI gateway using ceph-ansible is no longer supported

Configuring the Ceph iSCSI gateway by using the ceph-ansible utility is no longer supported Use ceph-ansible to install the gateway and then use either the gwcli utility of the Red Hat Ceph StorageDashboard to configure the gateway For details see the Using the Ceph iSCSI Gateway chapter in theBlock Device Guide for Red Hat Ceph Storage 4

72 THE CEPH-DISK UTILITY

ceph-disk is deprecated

With this release the ceph-disk utility is no longer supported The ceph-volume utility is used insteadFor details see the Why does ceph-volume replace `ceph-disk` section in the Administration Guide forRed Hat Ceph Storage 4

73 RADOS

FileStore is no longer supported in production

The FileStore OSD back end is now deprecated because the new BlueStore back end is now fullysupported in production For details see the How to migrate the object store from FileStore toBlueStore section in the Red Hat Ceph Storage 4 Installation Guide

Ceph configuration is now deprecated

The Ceph configuration file (cephconf) is now deprecated in favor of new centralized configurationstored in Monitors For details see the Configuration is now stored in Monitors accessible by using `cephconfig` release note

CHAPTER 7 DEPRECATED FUNCTIONALITY

19

CHAPTER 8 SOURCESThe updated Red Hat Ceph Storage source code packages are available athttpftpredhatcomredhatlinuxenterprise7ServerenRHCEPHSRPMS

Red Hat Ceph Storage 40 Release Notes

20

  • Table of Contents
  • CHAPTER 1 INTRODUCTION
  • CHAPTER 2 ACKNOWLEDGMENTS
  • CHAPTER 3 NEW FEATURES
    • 31 THE CEPH-ANSIBLE UTILITY
    • 32 CEPH MANAGEMENT DASHBOARD
    • 33 CEPH FILE SYSTEM
    • 34 CEPH MEDIC
    • 35 ISCSI GATEWAY
    • 36 OBJECT GATEWAY
    • 37 PACKAGES
    • 38 RADOS
    • 39 BLOCK DEVICES (RBD)
    • 310 RBD MIRRORING
      • CHAPTER 4 BUG FIXES
        • 41 THE CEPH-ANSIBLE UTILITY
        • 42 OBJECT GATEWAY
        • 43 RADOS
          • CHAPTER 5 TECHNOLOGY PREVIEWS
            • 51 CEPH FILE SYSTEM
            • 52 BLOCK DEVICES (RBD)
            • 53 OBJECT GATEWAY
              • CHAPTER 6 KNOWN ISSUES
                • 61 THE CEPH-ANSIBLE UTILITY
                • 62 CEPH MANAGEMENT DASHBOARD
                • 63 PACKAGES
                • 64 RED HAT ENTERPRISE LINUX
                  • CHAPTER 7 DEPRECATED FUNCTIONALITY
                    • 71 THE CEPH-ANSIBLE UTILITY
                    • 72 THE CEPH-DISK UTILITY
                    • 73 RADOS
                      • CHAPTER 8 SOURCES
Page 9: Red Hat Ceph Storage 4...May 20, 2020  · With this release, the Ceph dashboard installation code was merged into the Ceph Ansible playbooks. Ceph Ansible does a containerized deployment

with an external Ceph cluster

Ceph container can now write logs to a respective daemon file

The previous way of logging for containerized Ceph environments did not allow limiting the journalctloutput when looking at log data by using the sosreport collection With this release logging can beenabled or disabled for a particular Ceph daemon with the following command

ceph config set daemonid log_to_file true

Where daemon is the type of the daemon and id is its ID For example to enable logging for the Monitordaemon with ID mon0

ceph config set monmon0 log_to_file true

This new feature makes debugging easier

Ability to configure Ceph Object Gateway to use TLS encryption

This release of Red Hat Ceph Storage provides the ability to configure the Ceph Object Gatewaylistener with an SSL certificate for TLS encryption by using the radosgw_frontend_ssl_certificatevariable to secure the Transmission Control Protocol (TCP) traffic

Ansible playbook for migrating OSDs from FileStore to BlueStore

A new Ansible playbook has been added to migrate OSDs from FileStore to BlueStore The object storemigration is not done as part of the upgrade process to Red Hat Ceph Storage 4 Do the migration afterthe upgrade completes For details see the How to migrate the object store from FileStore to BlueStoresection in the Red Hat Ceph Storage Administration Guide

32 CEPH MANAGEMENT DASHBOARD

Improvements to information for pool usage

With this enhancement valuable information was added to the pools table The following columns wereadded usage read bytes write bytes read operations and write operations Also the PlacementGroups column was renamed to Pg Status

Red Hat Ceph Storage Dashboard alerts

Red Hat Ceph Storage Dashboard supports alerts based on Ceph metrics and configured thresholdsThe Prometheus AlertManager configures gathers and triggers the alerts The alerts are displayed inthe Dashboard as a pop-up notifications in the upper-right corner You can view details of recent alertsin Cluster gt Alerts You can configure the alerts only in Prometheus but you can temporarily mute themfrom the Dashboard by creating Alert Silences in Cluster gt Silences

Displaying and hiding Ceph components in Dashboard

In the Red Hat Ceph Storage Dashboard you can display or hide Ceph components such as Ceph iSCSIRBD mirroring Ceph Block Devices Ceph File System or Ceph Object Gateway This feature allows youto hide components that are not configured

Ceph dashboard has been added to the Ceph Ansible playbooks

With this release the Ceph dashboard installation code was merged into the Ceph Ansible playbooksCeph Ansible does a containerized deployment of the Ceph dashboard regardless of the Red HatCeph Storage deployment type bare metal or containers These four new roles were added ceph-

Red Hat Ceph Storage 40 Release Notes

6

grafana ceph-dashboard ceph-prometheus and ceph-node-exporter

Red Hat Ceph Storage Dashboard alerts

Red Hat Ceph Storage Dashboard supports alerts based on Ceph metrics and configured thresholdsThe Prometheus AlertManager configures gathers and triggers the alerts

Viewing the cluster hierarchy from the Dashboard

Red Hat Ceph Storage Dashboard provides the ability to view the cluster hierarchy For details see theViewing the CRUSH map section in the Dashboard Guide for Red Hat Ceph Storage 4

33 CEPH FILE SYSTEM

ceph -w now shows information about CephFS scrubs

Previously it was not possible to check the ongoing Ceph File System (CephFS) scrubs status asidefrom checking the Metadata server (MDS) logs With this update the ceph -w command showsinformation about active CephFS scrubs to better understand the status

ceph-mgr volumes module for managing CephFS exports

This release provides Ceph Manager (ceph-mgr) volumes module to manage Ceph File System(CephFS) exports The volumes module implements the following file system export abstractions

FS volumes an abstraction for CephFS file systems

FS subvolumes an abstraction for independent CephFS directory trees

FS subvolume groups an abstraction for a directory higher than FS subvolumes to effectpolicies such as file layouts across a set of subvolumes

In addition these new commands are now supported

fs subvolume ls for listing subvolumes

fs subvolumegroup ls for listing subvolumes groups

fs subvolume snapshot ls for listing subvolume snapshots

fs subvolumegroup snapshot ls for listing subvolume group snapshots

fs subvolume rm for removing snapshots

34 CEPH MEDIC

ceph-medic can check the health of Ceph running in containers

With this release the ceph-medic utility can now check the health of a Red Hat Ceph Storage clusterrunning within a container

35 ISCSI GATEWAY

Non-administrators can now be used for the ceph-iscsi service

As of Red Hat Ceph Storage 4 non-administrative Ceph users may be used for the ceph-iscsi service

CHAPTER 3 NEW FEATURES

7

As of Red Hat Ceph Storage 4 non-administrative Ceph users may be used for the ceph-iscsi serviceby setting the cluster_client_name in the etccephiscsi-gatewaycfg on all iSCSI Gateways Thisallows resources to be restricted based on users

Running Ceph iSCSI Gateways can now be removed

As of Red Hat Ceph Storage 4 running iSCSI Gateways can now be removed from a Ceph iSCSI clusterfor maintenance or to reallocate resources The gatewayrsquos iSCSI target and its portals will be stoppedand all iSCSI target objects for that gateway will be removed from the kernel and the gatewayconfiguration Removing gateways that are down is not yet supported

36 OBJECT GATEWAY

The Beast HTTP front end

In Red Hat Ceph Storage 4 the default HTTP front end for the Ceph Object Gateway is Beast TheBeast front end uses the BoostBeast library for HTTP parsing and the BoostAsio library forasynchronous IO For details see the Using the Beast front end in the Object Gateway Configurationand Administration Guide for Red Hat Ceph Storage 4

Support for S3 MFA-Delete

With this release the Ceph Object Gateway supports S3 MFA-Delete using Time-Based One-TimePassword (TOTP) one-time passwords as an authentication factor This feature adds security againstinappropriate data removal You can configure buckets to require a TOTP one-time token in addition tostandard S3 authentication to delete data

Addition of storage classes and lifecycle transitions to the Ceph Object Gateway

The Ceph Object Gateway now provides support for storage classes an S3 compatible representationof the Ceph Object Gatewayrsquos underlying placement targets and for lifecycle transitions a mechanismto migrate objects between classes Storage classes and lifecycle transitions provide a higher level ofcontrol over data placement for the applications that need it They have many potential uses includingperformance tuning for specific workloads as well as automatic migration of inactive data to coldstorage

Users can now create new IAM policies and roles using REST APIs

With the release of Red Hat Ceph Storage 4 REST APIs for IAM roles and user policies are now availablein the same namespace as S3 APIs and can be accessed using the same endpoint as S3 APIs in theCeph Object Gateway This allows end users to create new IAM policies and roles using REST APIs

37 PACKAGES

Ability to install a Ceph cluster using a web-based interface

With this release the Cockpit web-based interface is supported Cockpit allows you to install a Red HatCeph Storage 4 cluster and other components such as Metadata Servers Ceph clients or the CephObject Gateway on base-metal or in containers For details see the Installing Red Hat Ceph Storageusing the Cockpit Web User Interface chapter in the Red Hat Ceph Storage 4 Installation Guide Notethat minimal experience with Red Hat Ceph Storage is required

38 RADOS

Ceph on-wire encryption

Starting with Red Hat Ceph Storage 4 you can enable encryption for all Ceph traffic over the network

Red Hat Ceph Storage 40 Release Notes

8

with the introduction of the messenger version 2 protocol For details see the Ceph on-wire encryptionchapter in the Architecture Guide and Encryption in transit section in the Data Security and HardeningGuide for Red Hat Ceph Storage 4

OSD BlueStore is now fully supported

BlueStore is a new back end for the OSD daemons that allows for storing objects directly on the blockdevices Because BlueStore does not need any file system interface it improves performance of Cephstorage clusters To learn more about the BlueStore OSD back end see the OSD BlueStore chapter inthe Administration Guide for Red Hat Ceph Storage 4

Red Hat Enterprise Linux in FIPS mode

With this release you can install Red Hat Ceph Storage on Red Hat Enterprise Linux where the FIPSmode is enabled

Changes the ceph df output and a new ceph osd df command

The output of the ceph df command has been improved Notably the RAW USED and RAW USEDvalues now show the preallocated space for the db and wal BlueStore partitions The ceph osd dfcommand shows the OSD utilization stats such as the amount of written data

Asynchronous recovery for non-acting OSD sets

Previously recovery with Ceph was a synchronous process by blocking write operations to objects untilthose objects were recovered In this release the recovery process is now asynchronous by not blockingwrite operations to objects only in the non-acting set of OSDs This new feature requires having morethan the minimum number of replicas as to have enough OSDs in the non-acting set

The new configuration option osd_async_recovery_min_cost controls how much asynchronousrecovery to do The default value for this option is 100 A higher value means asynchronous recovery willbe less whereas a lower value means asynchronous recovery will be more

Configuration is now stored in Monitors accessible by using ceph config

In this release Red Hat Ceph Storage centralizes configuration in Monitors instead of using the Cephconfiguration file (cephconf) Previously changing configuration included manually updating cephconf distributing it to appropriate nodes and restarting all affected daemons Now the Monitorsmanage a configuration database that has the same semantic structure as cephconf The database isaccessible by the ceph config command Any changes to the configuration are applied to daemons orclients in the system immediately and restarting them is no longer needed Use the ceph config -hcommand for details on the available set of commands Note that a Ceph configuration file is stillrequired to identify the Monitors nodes

Placement groups can now be auto-scaled

Red Hat Ceph Storage 4 introduces the ability to auto-scale placement groups (PGs) The number ofplacement groups (PGs) in a pool plays a significant role in how a cluster peers distributes data andrebalances Auto-scaling the number of PGs can make managing the cluster easier The new pg-autoscaling command provides recommendations for scaling PGs or automatically scales PGs basedon how the cluster is being used For more details about auto-scaling PGs see the Auto-scalingplacement groups section in the Storage Strategies Guide for Red Hat Ceph Storage 4

Introduction of diskprediction module

The Red Hat Ceph Storage diskprediction module gathers metrics to predict disk failures before theyhappen The module has two modes cloud and local With this release only the local mode is supportedThe local mode does not require any external server for data analysis It uses an internal predictormodule for disk prediction service and then returns the disk prediction result to the Ceph system

CHAPTER 3 NEW FEATURES

9

To enable the diskprediction module

ceph mgr module enable diskprediction_local

To set the prediction mode

ceph config set global device_failure_prediction_mode local

To disable the diskprediction module

ceph config set global device_failure_prediction_mode none

New configurable option mon_memory_target

Red Hat Ceph Storage 4 introduces a new configurable option, mon_memory_target, used to set the target amount of bytes for Monitor memory usage. It specifies the amount of memory to allocate and manage using the priority cache tuner for the associated Monitor daemon caches. The default value of mon_memory_target is 2 GiB, and you can change it during runtime with:

ceph config set global mon_memory_target <size>

Prior to this release, as a cluster scaled, the Monitor-specific RSS usage exceeded the limits that were set using the mon_osd_cache_size option, which led to issues. This enhancement allows for improved management of the memory allocated to the Monitor caches and keeps the usage within the specified limits.
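
For example, to raise the target to 4 GiB at runtime (the value is illustrative only):

ceph config set global mon_memory_target 4294967296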

3.9. BLOCK DEVICES (RBD)

Erasure coding for Ceph Block Device

Erasure coding for Ceph Block Device (RBD) is now fully supported. This feature allows RBD images to store their data in an erasure-coded pool. For details, see the Erasure Coding with Overwrites section in the Storage Strategies guide for Red Hat Ceph Storage 4.
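
A minimal sketch of such a setup follows; the pool and image names are hypothetical, and the erasure-coded pool holds the image data while a replicated pool (rbd) keeps the image metadata:

ceph osd pool create rbd_data 64 64 erasure
ceph osd pool set rbd_data allow_ec_overwrites true
ceph osd pool application enable rbd_data rbd
rbd create image1 --size 10G --pool rbd --data-pool rbd_data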

RBD performance monitoring and metrics gathering tools

Red Hat Ceph Storage 4 now incorporates new Ceph Block Device performance monitoring utilities for aggregated RBD image metrics for IOPS, throughput, and latency. Per-image RBD metrics are now available using the Ceph Manager Prometheus module, the Ceph Dashboard, and the rbd CLI using the rbd perf image iostat or rbd perf image iotop commands.
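
For example, assuming a pool named rbd_pool, the per-image counters can be inspected from the command line:

rbd perf image iostat rbd_pool
rbd perf image iotop rbd_pool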

Cloned images can be created from non-primary images

Creating cloned child RBD images from mirrored non-primary parent images is now supported. Previously, cloning of mirrored images was only supported for primary images. When cloning golden images for virtual machines, this restriction prevented the creation of new cloned images from the golden non-primary image. This update removes this restriction, and cloned images can be created from non-primary mirrored images.

Segregating RBD images within isolated namespaces within the same pool

RBD images can now be segregated within isolated namespaces within the same pool. When using Ceph Block Devices directly, without a higher-level system such as OpenStack or OpenShift Container Storage, it was not possible to restrict user access to specific RBD images. When combined with CephX capabilities, users can be restricted to specific pool namespaces to restrict access to RBD images.
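
A hedged sketch of how a namespace might be used; the pool, namespace, and client names are hypothetical:

rbd namespace create --pool vms --namespace tenant-a
rbd create disk0 --size 10G --pool vms --namespace tenant-a
ceph auth get-or-create client.tenant-a mon 'profile rbd' osd 'profile rbd pool=vms namespace=tenant-a'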

Moving RBD images between different pools within the same cluster

This version of Red Hat Ceph Storage adds the ability to move RBD images between different pools within the same cluster. For details, see the Moving images between pools section in the Block Device Guide for Red Hat Ceph Storage 4.
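
Moving an image is typically done with the rbd migration commands; a sketch with hypothetical pool and image names:

rbd migration prepare sourcepool/image1 targetpool/image1
rbd migration execute targetpool/image1
rbd migration commit targetpool/image1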

Long-running RBD operations can run in the background

Long-running RBD operations, such as image removal or cloned image flattening, can now be scheduled to run in the background. RBD operations that involve iterating over every backing RADOS object for the image can take a long time, depending on the size of the image. When using the CLI to perform one of these operations, the rbd CLI is blocked until the operation is complete. These operations can now be scheduled to run by the Ceph Manager as a background task by using the ceph rbd task add commands. The progress of these tasks is visible on the Ceph dashboard as well as by using the CLI.
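
For example, a removal and a flatten can be queued as Manager background tasks and then listed; the pool and image names are hypothetical:

ceph rbd task add remove mypool/old_image
ceph rbd task add flatten mypool/cloned_image
ceph rbd task list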

3.10. RBD MIRRORING

Support for multiple active instances of RBD mirror daemon in a single storage cluster

Red Hat Ceph Storage 4 now supports deploying multiple active instances of the RBD mirror daemon in a single storage cluster. This enables multiple RBD mirror daemons to perform replication for the RBD images or pools, using an algorithm for chunking the images across the number of active mirroring daemons.

CHAPTER 4 BUG FIXES

This section describes bugs with significant impact on users that were fixed in this release of Red Hat Ceph Storage. In addition, the section includes descriptions of fixed known issues found in previous versions.

4.1. THE CEPH-ANSIBLE UTILITY

The Ansible playbook no longer takes hours to complete the fuser command

Previously, on a system running thousands of processes, the fuser command in the Ansible playbook could take several minutes or hours to complete, because it iterates over all PIDs present in the /proc directory. Because of this, the handler tasks took a long time to complete checking if a Ceph process is already running. With this update, instead of using the fuser command, the Ansible playbook now checks the socket file in the /proc/net/unix directory, and the handler tasks that check the Ceph socket complete almost instantly.

(BZ1717011)

The purge-docker-cluster.yml Ansible playbook no longer fails

Previously, the purge-docker-cluster.yml Ansible playbook could fail when trying to unmap RADOS Block Devices (RBDs), because either the binary was absent or because the Atomic host version provided was too old. With this update, Ansible now uses the sysfs method to unmap devices if there are any, and the purge-docker-cluster.yml playbook no longer fails.

(BZ1766064)

4.2. OBJECT GATEWAY

The Expiration Days S3 Lifecycle parameter can now be set to 0

The Ceph Object Gateway did not accept the value of 0 for the Expiration Days Lifecycle configuration parameter. Consequently, setting the expiration to 0 could not be used to trigger the background delete operation of objects. With this update, Expiration Days can be set to 0 as expected.

(BZ1493476)

Archive zone fetches the current version of the source objects

An object is synced from multiple source zones to the archive zone. This behavior can lead to different versions of the same object existing on the archive zone. This update eliminates duplicate versions from getting created by ensuring that the archive zone fetches the current version of the source object.

(BZ1760862)

4.3. RADOS

A message to set expected_num_objects is no longer shown when using BlueStore

With this update, a message that recommends setting the expected_num_objects parameter during the creation of BlueStore pools has been removed, because this message does not apply when using the BlueStore OSD back end.

(BZ1650922)

Deprecated JSON fields have been removed

This update removes deprecated fields from the JSON output of the ceph status command.

(BZ1739233)

CHAPTER 5 TECHNOLOGY PREVIEWS

This section provides an overview of Technology Preview features introduced or updated in this release of Red Hat Ceph Storage.

IMPORTANT

Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information on the Red Hat Technology Preview features support scope, see https://access.redhat.com/support/offerings/techpreview/.

5.1. CEPH FILE SYSTEM

CephFS snapshots

Ceph File System (CephFS) supports taking snapshots as a Technology Preview. Snapshots create an immutable view of the file system at the point in time they are taken.

5.2. BLOCK DEVICES (RBD)

Mapping RBD images to NBD images

The rbd-nbd utility maps RADOS Block Device (RBD) images to Network Block Devices (NBD) and enables Ceph clients to access volumes and images in Kubernetes environments. To use rbd-nbd, install the rbd-nbd package. For details, see the rbd-nbd(7) manual page.
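
A short sketch of mapping and unmapping an image; the pool and image names are hypothetical, and the device path returned by the map command may differ:

rbd-nbd map mypool/myimage
rbd-nbd list-mapped
rbd-nbd unmap /dev/nbd0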

5.3. OBJECT GATEWAY

Object Gateway archive site

With this release, an archive site is supported as a Technology Preview. The archive site allows you to have a history of versions of S3 objects that can only be eliminated through the gateways associated with the archive zone. Including an archive zone in a multizone configuration allows you to have the flexibility of an S3 object history in only one zone, while saving the space that the replicas of the versioned S3 objects would consume in the rest of the zones.

Tiering within a cluster by disk type

This release adds the ability to tier within a cluster by disk type as a Technology Preview, by using the capabilities for mapping, for example, pools to placement targets and storage classes. Using lifecycle transition rules, it is possible to cause objects to migrate between storage pools based on policy.

S3 bucket notifications

S3 bucket notifications are now supported as a Technology Preview. When certain events are triggered on an S3 bucket, the notifications can be sent from the Ceph Object Gateway to HTTP, Advanced Message Queuing Protocol (AMQP) 0.9.1, and Kafka endpoints. Additionally, the notifications can be stored in a "PubSub" zone instead of, or in addition to, sending them to the endpoints. "PubSub" is a publish-subscribe model that enables recipients to pull notifications from Ceph.

To use the S3 notifications, install the librabbitmq and librdkafka packages.

CHAPTER 6 KNOWN ISSUES

This section documents known issues found in this release of Red Hat Ceph Storage.

6.1. THE CEPH-ANSIBLE UTILITY

Red Hat Ceph Storage installation on Red Hat OpenStack Platform fails

The ceph-ansible utility becomes unresponsive when attempting to install Red Hat Ceph Storage together with the Red Hat OpenStack Platform 16, and it returns an error similar to the following one:

Error: unable to exec into ceph-mon-dcn1-computehci1-2: no container with name or ID ceph-mon-dcn1-computehci1-2 found: no such container

To work around this issue, update the following part of the handler_osds.yml file, located in the ceph-ansible/roles/ceph-handler/tasks/ directory:

- name: unset noup flag
  command: "{{ container_exec_cmd | default('') }} ceph --cluster {{ cluster }} osd unset noup"
  delegate_to: "{{ groups[mon_group_name][0] }}"
  changed_when: False

To

- name: unset noup flag
  command: "{{ hostvars[groups[mon_group_name][0]]['container_exec_cmd'] | default('') }} ceph --cluster {{ cluster }} osd unset noup"
  delegate_to: "{{ groups[mon_group_name][0] }}"
  changed_when: False

Then start the installation process again.

(BZ1792320)

Ansible does not unset the norebalance flag after it completes

The rolling-update.yml Ansible playbook does not unset the norebalance flag after it completes. To work around this issue, unset the flag manually.
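
For example, the flag can be cleared manually from any node with a Monitor keyring:

ceph osd unset norebalance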

(BZ1793564)

Ansible fails to upgrade a multisite Ceph Object Gateway when the Dashboard is enabled

When the Red Hat Ceph Storage Dashboard is enabled, an attempt to use Ansible to upgrade to a further version of Red Hat Ceph Storage fails when trying to upgrade the secondary Ceph Object Gateway site in a multisite setup. This bug does not occur on the primary site, or if the Dashboard is not enabled.

(BZ1794351)

6.2. CEPH MANAGEMENT DASHBOARD

The Dashboard does not show correct values for certain configuration options

Both the Red Hat Ceph Storage Dashboard and the underlying ceph config show command do not return the current values for certain configuration options, such as fsid. This is probably because certain core options that are not meant for further modification after deploying a cluster are not updated, and a default value is used. As a result, the Dashboard does not show correct values for certain configuration options.

(BZ1765493 BZ1772310)

NFS Ganesha in Dashboard

The Red Hat Ceph Storage Dashboard currently does not support managing NFS Ganesha

(BZ1772316)

Dashboard does not support email verification

The Red Hat Ceph Storage Dashboard does not support any email verification when changing a user's password. This behavior is intentional, because the Dashboard supports Single Sign-On (SSO), and this feature can be delegated to the SSO provider.

(BZ1778608)

The OSD histogram graph for read and write operations is not clear

The Red Hat Ceph Storage Dashboard does not display any numbers or descriptions in the OSD histogram graph for read and write operations, and therefore the graph is not clear.

(BZ1779170)

Dashboard returns an error when a ceph-mgr module is enabled from the ceph CLI

When enabling a Ceph Manager (ceph-mgr) module, such as telemetry, from the ceph CLI, the Red Hat Ceph Storage Dashboard displays the following error message:

0 - Unknown Error

In addition, the Dashboard does not mark the module as enabled until the Refresh button is clicked.

(BZ1785077)

The Dashboard allows modifying the LUN ID and WWN which can lead to data corruption

The Red Hat Ceph Storage Dashboard allows you to modify the LUN ID and its World Wide Name (WWN), which is not required after creating the LUN. Moreover, editing these parameters can be dangerous for certain initiators that do not fully support this feature by default. Consequently, editing these parameters after creating them can lead to data corruption. To avoid this, do not modify the LUN ID and WWN in the Dashboard.

(BZ1786455)

The Dashboard does not provide correct Ceph iSCSI error messages

If Ceph iSCSI returns an error, for example the HTTP 400 code when trying to delete an iSCSI target while a user is logged in, the Red Hat Ceph Storage Dashboard does not forward that error code and message to the Dashboard user using the pop-up notifications, but displays a generic 500 Internal Server Error. Consequently, the message that the Dashboard provides is not informative and even misleading; an expected behavior (users cannot delete a busy resource) is perceived as an operational failure (internal server error). To work around this issue, see the Dashboard logs.

(BZ1786457)

The Dashboard requires iptables rules to be disabled

The Red Hat Ceph Storage Dashboard is unable to perform any iSCSI operations, such as creating a gateway, unless all iptables rules are manually disabled on the Ceph iSCSI node. To do so, use the following command as root or the sudo user:

iptables -F

Note that after a reboot, the rules are enabled again. Either disable them again, or delete them permanently.

(BZ1792818)

6.3. PACKAGES

Current version of Grafana causes certain bugs in the Dashboard

Red Hat Ceph Storage 4 uses Grafana version 5.2.4. This version causes the following bugs in the Red Hat Ceph Storage Dashboard:

When navigating to Pools > Overall Performance, Grafana returns the following error:

TypeError: l.c[t.type] is undefined true

When viewing a pool's performance details (Pools > select a pool from the list > Performance Details), the Grafana bar is displayed along with other graphs and values, but it should not be there.

These bugs will be fixed after rebasing to a newer Grafana version in a future release of Red Hat Ceph Storage.

(BZ1786107 BZ1762197 BZ1765536)

6.4. RED HAT ENTERPRISE LINUX

Ansible cannot start NFS Ganesha if SELinux is in enforcing mode

When using SELinux in enforcing mode on Red Hat Enterprise Linux 8.1, the ceph-ansible utility fails to start the NFS Ganesha service, because the SELinux policy currently does not allow creating a directory required for NFS Ganesha.

(BZ1794027 BZ1796160)

CHAPTER 7 DEPRECATED FUNCTIONALITY

This section provides an overview of functionality that has been deprecated in all minor releases up to this release of Red Hat Ceph Storage.

Ubuntu is no longer supported

Installing a Red Hat Ceph Storage 4 cluster on Ubuntu is no longer supported. Use Red Hat Enterprise Linux as the underlying operating system.

7.1. THE CEPH-ANSIBLE UTILITY

Configuring iSCSI gateway using ceph-ansible is no longer supported

Configuring the Ceph iSCSI gateway by using the ceph-ansible utility is no longer supported. Use ceph-ansible to install the gateway, and then use either the gwcli utility or the Red Hat Ceph Storage Dashboard to configure the gateway. For details, see the Using the Ceph iSCSI Gateway chapter in the Block Device Guide for Red Hat Ceph Storage 4.

7.2. THE CEPH-DISK UTILITY

ceph-disk is deprecated

With this release, the ceph-disk utility is no longer supported. The ceph-volume utility is used instead. For details, see the Why does ceph-volume replace `ceph-disk` section in the Administration Guide for Red Hat Ceph Storage 4.

7.3. RADOS

FileStore is no longer supported in production

The FileStore OSD back end is now deprecated because the new BlueStore back end is now fully supported in production. For details, see the How to migrate the object store from FileStore to BlueStore section in the Red Hat Ceph Storage 4 Installation Guide.

The Ceph configuration file is now deprecated

The Ceph configuration file (ceph.conf) is now deprecated in favor of the new centralized configuration stored in Monitors. For details, see the Configuration is now stored in Monitors accessible by using `ceph config` release note.

CHAPTER 8 SOURCES

The updated Red Hat Ceph Storage source code packages are available at http://ftp.redhat.com/redhat/linux/enterprise/7Server/en/RHCEPH/SRPMS/.

Red Hat Ceph Storage 40 Release Notes

20

  • Table of Contents
  • CHAPTER 1 INTRODUCTION
  • CHAPTER 2 ACKNOWLEDGMENTS
  • CHAPTER 3 NEW FEATURES
    • 31 THE CEPH-ANSIBLE UTILITY
    • 32 CEPH MANAGEMENT DASHBOARD
    • 33 CEPH FILE SYSTEM
    • 34 CEPH MEDIC
    • 35 ISCSI GATEWAY
    • 36 OBJECT GATEWAY
    • 37 PACKAGES
    • 38 RADOS
    • 39 BLOCK DEVICES (RBD)
    • 310 RBD MIRRORING
      • CHAPTER 4 BUG FIXES
        • 41 THE CEPH-ANSIBLE UTILITY
        • 42 OBJECT GATEWAY
        • 43 RADOS
          • CHAPTER 5 TECHNOLOGY PREVIEWS
            • 51 CEPH FILE SYSTEM
            • 52 BLOCK DEVICES (RBD)
            • 53 OBJECT GATEWAY
              • CHAPTER 6 KNOWN ISSUES
                • 61 THE CEPH-ANSIBLE UTILITY
                • 62 CEPH MANAGEMENT DASHBOARD
                • 63 PACKAGES
                • 64 RED HAT ENTERPRISE LINUX
                  • CHAPTER 7 DEPRECATED FUNCTIONALITY
                    • 71 THE CEPH-ANSIBLE UTILITY
                    • 72 THE CEPH-DISK UTILITY
                    • 73 RADOS
                      • CHAPTER 8 SOURCES
Page 10: Red Hat Ceph Storage 4...May 20, 2020  · With this release, the Ceph dashboard installation code was merged into the Ceph Ansible playbooks. Ceph Ansible does a containerized deployment

grafana ceph-dashboard ceph-prometheus and ceph-node-exporter

Red Hat Ceph Storage Dashboard alerts

Red Hat Ceph Storage Dashboard supports alerts based on Ceph metrics and configured thresholdsThe Prometheus AlertManager configures gathers and triggers the alerts

Viewing the cluster hierarchy from the Dashboard

Red Hat Ceph Storage Dashboard provides the ability to view the cluster hierarchy For details see theViewing the CRUSH map section in the Dashboard Guide for Red Hat Ceph Storage 4

33 CEPH FILE SYSTEM

ceph -w now shows information about CephFS scrubs

Previously it was not possible to check the ongoing Ceph File System (CephFS) scrubs status asidefrom checking the Metadata server (MDS) logs With this update the ceph -w command showsinformation about active CephFS scrubs to better understand the status

ceph-mgr volumes module for managing CephFS exports

This release provides Ceph Manager (ceph-mgr) volumes module to manage Ceph File System(CephFS) exports The volumes module implements the following file system export abstractions

FS volumes an abstraction for CephFS file systems

FS subvolumes an abstraction for independent CephFS directory trees

FS subvolume groups an abstraction for a directory higher than FS subvolumes to effectpolicies such as file layouts across a set of subvolumes

In addition these new commands are now supported

fs subvolume ls for listing subvolumes

fs subvolumegroup ls for listing subvolumes groups

fs subvolume snapshot ls for listing subvolume snapshots

fs subvolumegroup snapshot ls for listing subvolume group snapshots

fs subvolume rm for removing snapshots

34 CEPH MEDIC

ceph-medic can check the health of Ceph running in containers

With this release the ceph-medic utility can now check the health of a Red Hat Ceph Storage clusterrunning within a container

35 ISCSI GATEWAY

Non-administrators can now be used for the ceph-iscsi service

As of Red Hat Ceph Storage 4 non-administrative Ceph users may be used for the ceph-iscsi service

CHAPTER 3 NEW FEATURES

7

As of Red Hat Ceph Storage 4 non-administrative Ceph users may be used for the ceph-iscsi serviceby setting the cluster_client_name in the etccephiscsi-gatewaycfg on all iSCSI Gateways Thisallows resources to be restricted based on users

Running Ceph iSCSI Gateways can now be removed

As of Red Hat Ceph Storage 4 running iSCSI Gateways can now be removed from a Ceph iSCSI clusterfor maintenance or to reallocate resources The gatewayrsquos iSCSI target and its portals will be stoppedand all iSCSI target objects for that gateway will be removed from the kernel and the gatewayconfiguration Removing gateways that are down is not yet supported

36 OBJECT GATEWAY

The Beast HTTP front end

In Red Hat Ceph Storage 4 the default HTTP front end for the Ceph Object Gateway is Beast TheBeast front end uses the BoostBeast library for HTTP parsing and the BoostAsio library forasynchronous IO For details see the Using the Beast front end in the Object Gateway Configurationand Administration Guide for Red Hat Ceph Storage 4

Support for S3 MFA-Delete

With this release the Ceph Object Gateway supports S3 MFA-Delete using Time-Based One-TimePassword (TOTP) one-time passwords as an authentication factor This feature adds security againstinappropriate data removal You can configure buckets to require a TOTP one-time token in addition tostandard S3 authentication to delete data

Addition of storage classes and lifecycle transitions to the Ceph Object Gateway

The Ceph Object Gateway now provides support for storage classes an S3 compatible representationof the Ceph Object Gatewayrsquos underlying placement targets and for lifecycle transitions a mechanismto migrate objects between classes Storage classes and lifecycle transitions provide a higher level ofcontrol over data placement for the applications that need it They have many potential uses includingperformance tuning for specific workloads as well as automatic migration of inactive data to coldstorage

Users can now create new IAM policies and roles using REST APIs

With the release of Red Hat Ceph Storage 4 REST APIs for IAM roles and user policies are now availablein the same namespace as S3 APIs and can be accessed using the same endpoint as S3 APIs in theCeph Object Gateway This allows end users to create new IAM policies and roles using REST APIs

37 PACKAGES

Ability to install a Ceph cluster using a web-based interface

With this release the Cockpit web-based interface is supported Cockpit allows you to install a Red HatCeph Storage 4 cluster and other components such as Metadata Servers Ceph clients or the CephObject Gateway on base-metal or in containers For details see the Installing Red Hat Ceph Storageusing the Cockpit Web User Interface chapter in the Red Hat Ceph Storage 4 Installation Guide Notethat minimal experience with Red Hat Ceph Storage is required

38 RADOS

Ceph on-wire encryption

Starting with Red Hat Ceph Storage 4 you can enable encryption for all Ceph traffic over the network

Red Hat Ceph Storage 40 Release Notes

8

with the introduction of the messenger version 2 protocol For details see the Ceph on-wire encryptionchapter in the Architecture Guide and Encryption in transit section in the Data Security and HardeningGuide for Red Hat Ceph Storage 4

OSD BlueStore is now fully supported

BlueStore is a new back end for the OSD daemons that allows for storing objects directly on the blockdevices Because BlueStore does not need any file system interface it improves performance of Cephstorage clusters To learn more about the BlueStore OSD back end see the OSD BlueStore chapter inthe Administration Guide for Red Hat Ceph Storage 4

Red Hat Enterprise Linux in FIPS mode

With this release you can install Red Hat Ceph Storage on Red Hat Enterprise Linux where the FIPSmode is enabled

Changes the ceph df output and a new ceph osd df command

The output of the ceph df command has been improved Notably the RAW USED and RAW USEDvalues now show the preallocated space for the db and wal BlueStore partitions The ceph osd dfcommand shows the OSD utilization stats such as the amount of written data

Asynchronous recovery for non-acting OSD sets

Previously recovery with Ceph was a synchronous process by blocking write operations to objects untilthose objects were recovered In this release the recovery process is now asynchronous by not blockingwrite operations to objects only in the non-acting set of OSDs This new feature requires having morethan the minimum number of replicas as to have enough OSDs in the non-acting set

The new configuration option osd_async_recovery_min_cost controls how much asynchronousrecovery to do The default value for this option is 100 A higher value means asynchronous recovery willbe less whereas a lower value means asynchronous recovery will be more

Configuration is now stored in Monitors accessible by using ceph config

In this release Red Hat Ceph Storage centralizes configuration in Monitors instead of using the Cephconfiguration file (cephconf) Previously changing configuration included manually updating cephconf distributing it to appropriate nodes and restarting all affected daemons Now the Monitorsmanage a configuration database that has the same semantic structure as cephconf The database isaccessible by the ceph config command Any changes to the configuration are applied to daemons orclients in the system immediately and restarting them is no longer needed Use the ceph config -hcommand for details on the available set of commands Note that a Ceph configuration file is stillrequired to identify the Monitors nodes

Placement groups can now be auto-scaled

Red Hat Ceph Storage 4 introduces the ability to auto-scale placement groups (PGs) The number ofplacement groups (PGs) in a pool plays a significant role in how a cluster peers distributes data andrebalances Auto-scaling the number of PGs can make managing the cluster easier The new pg-autoscaling command provides recommendations for scaling PGs or automatically scales PGs basedon how the cluster is being used For more details about auto-scaling PGs see the Auto-scalingplacement groups section in the Storage Strategies Guide for Red Hat Ceph Storage 4

Introduction of diskprediction module

The Red Hat Ceph Storage diskprediction module gathers metrics to predict disk failures before theyhappen The module has two modes cloud and local With this release only the local mode is supportedThe local mode does not require any external server for data analysis It uses an internal predictormodule for disk prediction service and then returns the disk prediction result to the Ceph system

CHAPTER 3 NEW FEATURES

9

To enable the diskprediction module

ceph mgr module enable diskprediction_local

To set the prediction mode

ceph config set global device_failure_prediction_mode local

To disable the diskprediction module

ceph config set global device_failure_prediction_mode none

New configurable option mon_memory_target

Red Hat Ceph Storage 4 introduces a new configurable option mon_memory_target used to set thetarget amount of bytes for Monitor memory usage It specifies the amount of memory to allocate andmanage using the priority cache tuner for the associated Monitor daemon caches The default value of mon_memory_target is set to 2 GiB and you can change it during runtime with

ceph config set global mon_memory_target size

Prior to this release as a cluster scaled the Monitor specific RSS usage exceeded the limits that wereset using the mon_osd_cache_size option which led to issues This enhancement allows for improvedmanagement of memory allocated to the monitor caches and keeps the usage within specified limits

39 BLOCK DEVICES (RBD)

Erasure coding for Ceph Block Device

Erasure coding for Ceph Block Device (RBD) is now fully supported This feature allows RBD to storetheir data in an erasure coded pool For details see the Erasure Coding with Overwrites section in theStorage Strategies for Red Hat Ceph Storage 4

RBD performance monitoring and metrics gathering tools

Red Hat Ceph Storage 4 now incorporates new Ceph Block Device performance monitoring utilities foraggregated RBD image metrics for IOPS throughput and latency Per-image RBD metrics are nowavailable using the Ceph Manager Prometheus module the Ceph Dashboard and the rbd CLI using the rbd perf image iostat or rbd perf image iotop commands

Cloned images can be created from non-primary images

Creating cloned child RBD images from mirrored non-primary parent image is now supportedPreviously cloning of mirrored images was only supported for primary images When cloning goldenimages for virtual machines this restriction prevented the creation of new cloned images from thegolden non-primary image This update removes this restriction and cloned images can be created fromnon-primary mirrored images

Segregating RBD images within isolated namespaces within the same pool

RBD images can now be segregated within isolated namespaces within the same pool When using CephBlock Devices directly without a higher-level system such as OpenStack or OpenShift ContainerStorage it was not possible to restrict user access to specific RBD images When combined with CephXcapabilities users can be restricted to specific pool namespaces to restrict access to RBD images

Red Hat Ceph Storage 40 Release Notes

10

Moving RBD images between different pools within the same cluster

This version of Red Hat Ceph Storage adds the ability to move RBD images between different poolswithin the same cluster For details see the Moving images between pools section in the Block DeviceGuide for Red Hat Ceph Storage 4

Long-running RBD operations can run in the background

Long-running RBD operations such as image removal or cloned image flattening can now bescheduled to run in the background RBD operations that involve iterating over every backing RADOSobject for the image can take a long time depending on the size of the image When using the CLI toperform one of these operations the rbd CLI is blocked until the operation is complete Theseoperations can now be scheduled to run by the Ceph Manager as a background task by using the ceph rbd task add commands The progress of these tasks is visible on the Ceph dashboard as well as byusing the CLI

310 RBD MIRRORING

Support for multiple active instances of RBD mirror daemon in a single storage cluster

Red Hat Ceph Storage 4 now supports deploying multiple active instances of the RBD mirror daemon ina single storage cluster This enables multiple RBD mirror daemons to perform replication for the RBDimages or pools using an algorithm for chunking the images across the number of active mirroringdaemons

CHAPTER 3 NEW FEATURES

11

CHAPTER 4 BUG FIXESThis section describes bugs with significant impact on users that were fixed in this release of Red HatCeph Storage In addition the section includes descriptions of fixed known issues found in previousversions

41 THE CEPH-ANSIBLE UTILITY

The Ansible playbook no longer takes hours to complete the fuser command

Previously on a system running thousands of processes the fuser command in the Ansible playbookcould take several minutes or hours to complete because it iterates over all PID present in the procdirectory Because of this the handler tasks take a long time to complete checking if a Ceph process isalready running With this update instead of using the fuser command the Ansible playbook now checksthe socket file in the procnetunix directory and the handler tasks that check the Ceph socketcompletes almost instantly

(BZ1717011)

The purge-docker-clusteryml Ansible playbook no longer fails

Previously the purge-docker-clusteryml Ansible playbook could fail when trying to unmap RADOSBlock Devices (RBDs) because either the binary was absent or because the Atomic host versionprovided was too old With this update Ansible now uses the sysfs method to unmap devices if thereare any and the purge-docker-clusteryml playbook no longer fails

(BZ1766064)

42 OBJECT GATEWAY

The Expiration Days S3 Lifecycle parameter can now be set to 0

The Ceph Object Gateway did not accept the value of 0 for the Expiration Days Lifecycleconfiguration parameter Consequently setting the expiration to 0 could not be used to triggerbackground delete operation of objects With this update Expiration Days can be set to 0 as expected

(BZ1493476)

Archive zone fetches the current version of the source objects

An object is synced from multiple source zones to archive zone This behavior can lead to differentversions of the same object on the archive zone This update eliminates duplicate versions from gettingcreated by ensuring that the archive zone fetches the current version of the source object

(BZ1760862)

43 RADOS

A message to set expected_num_objects is no longer shown when using BlueStore

With this update a message that recommends setting the expected_num_obejcts parameter duringthe creation of the BlueStore pools has been removed because this message does not apply when usingthe BlueStore OSD back end

(BZ1650922)

Red Hat Ceph Storage 40 Release Notes

12

Deprecated JSON fields have been removed

This update removes deprecated fields from the JSON output of the ceph status command

(BZ1739233)

CHAPTER 4 BUG FIXES

13

CHAPTER 5 TECHNOLOGY PREVIEWSThis section provides an overview of Technology Preview features introduced or updated in this releaseof Red Hat Ceph Storage

IMPORTANT

Technology Preview features are not supported with Red Hat production service levelagreements (SLAs) might not be functionally complete and Red Hat does notrecommend to use them for production These features provide early access toupcoming product features enabling customers to test functionality and providefeedback during the development process

For more information on Red Hat Technology Preview features support scope seehttpsaccessredhatcomsupportofferingstechpreview

51 CEPH FILE SYSTEM

CephFS snapshots

Ceph File System (CephFS) supports taking snapshots as a Technology Preview Snapshots create animmutable view of the file system at the point in time they are taken

52 BLOCK DEVICES (RBD)

Mapping RBD images to NBD images

The rbd-nbd utility maps RADOS Block Device (RBD) images to Network Block Devices (NBD) andenables Ceph clients to access volumes and images in Kubernetes environments To use rbd-nbd installthe rbd-nbd package For details see the rbd-nbd(7) manual page

53 OBJECT GATEWAY

Object Gateway archive site

With this release an archive site is supported as a Technology Preview The archive site allows you tohave a history of versions of S3 objects that can only be eliminated through the gateways associatedwith the archive zone Including an archive zone in a multizone configuration allows you to have theflexibility of an S3 object history in only one zone while saving the space that the replicas of the versionsS3 objects would consume in the rest of the zones

Tiering within a cluster by disk type

This release adds the ability to tier within a cluster by disk type as a Technology Preview by using thecapabilities for mapping for example pools to placement targets and storage classes Using lifecycletransition rules it is possible to cause objects to migrate between storage pools based on policy

S3 bucket notifications

S3 bucket notifications are now supported as a Technology Preview When certain events are triggeredon an S3 bucket the notifications can be sent from the Ceph Object Gateway to HTTP AdvancedMessage Queuing Protocol (AMQP) 91 and Kafka endpoints Additionally the notifications can bestored in a ldquoPubSubrdquo zone instead of or in addition to sending them to the endpoints ldquoPubSubrdquo is apublish-subscribe model that enables recipients to pull notifications from Ceph

Red Hat Ceph Storage 40 Release Notes

14

To use the S3 notifications install the librabbitmq and librdkafka packages

CHAPTER 5 TECHNOLOGY PREVIEWS

15

CHAPTER 6 KNOWN ISSUESThis section documents known issues found in this release of Red Hat Ceph Storage

61 THE CEPH-ANSIBLE UTILITY

Red Hat Ceph Storage installation on Red Hat OpenStack Platform fails

The ceph-ansible utility becomes unresponsive when attempting to install Red Hat Ceph Storagetogether with the Red Hat OpenStack Platform 16 and it returns an error similar to the following one

Error unable to exec into ceph-mon-dcn1-computehci1-2 no container with name or ID ceph-mon-dcn1-computehci1-2 found no such container

To work around this issue update the following part of the handler_osdsyml file located in the ceph-ansiblerolesceph-handlertasks directory

- name unset noup flag command container_exec_cmd | default() ceph --cluster cluster osd unset noup delegate_to groups[mon_group_name][0] changed_when False

To

- name unset noup flag command hostvars[groups[mon_group_name][0]][container_exec_cmd] | default() ceph --cluster cluster osd unset noup delegate_to groups[mon_group_name][0] changed_when False

And start the installation process again

(BZ1792320)

Ansible does not unset the norebalance flag after it completes

The rolling-updateyml Ansible playbook does not unset the norebalance flag after it completes Towork around this issue unset the flag manually

(BZ1793564)

Ansible fails to upgrade a multisite Ceph Object Gateway when the Dashboard is enabled

When the Red Hat Ceph Storage Dashboard is enabled an attempt to use Ansible to upgrade to afurther version of Red Hat Ceph Storage fails when trying to upgrade the secondary Ceph ObjectGateway site in a multisite setup This bug does not occur on the primary site or if the Dashboard is notenabled

(BZ1794351)

62 CEPH MANAGEMENT DASHBOARD

The Dashboard does not show correct values for certain configuration options

Both the Red Hat Ceph Storage Dashboard and the underlying ceph config show command do not

Red Hat Ceph Storage 40 Release Notes

16

return the current values for certain configuration options such as fsid This is probably because certaincore options that are not meant for further modification after deploying a cluster are not updated and adefault value is used As a result the Dashboard does not show correct values for certain configurationoptions

(BZ1765493 BZ1772310)

NFS Ganesha in Dashboard

The Red Hat Ceph Storage Dashboard currently does not support managing NFS Ganesha

(BZ1772316)

Dashboard does not support email verification

The Red Hat Ceph Storage Dashboard does not support any email verification when changing a userspassword This behavior is intentional because the Dashboard supports Single Sign-On (SSO) and thisfeature can be delegated to the SSO provider

(BZ1778608)

The OSD histogram graph for read and write operations is not clear

The Red Hat Ceph Storage Dashboard does not display any numbers or description in the OSDhistogram graph for read and write operations and therefore the graph is not clear

(BZ1779170)

Dashboard returns an error when a ceph-mgr module is enabled from the ceph CLI

When enabling a Ceph Manager (ceph-mgr) module such as telemetry from the ceph CLI the Red HatCeph Storage Dashboard displays the following error message

0 - Unknown Error

In addition the Dashboard does not mark the module as enabled until the Refresh button is clicked

(BZ1785077)

The Dashboard allows modifying the LUN ID and WWN which can lead to data corruption

The Red Hat Ceph Storage Dashboard allows you to modify the LUN ID and its World Wide Name(WWN) which is not required after creating the LUN Moreover editing these parameters can bedangerous for certain initiators that do not fully support this feature by default Consequently editingthese parameters after creating them can lead to data corruption To avoid this do not modify the LUNID and WWN in the Dashboard

(BZ1786455)

The Dashboard does not provide correct Ceph iSCSI error messages

If the Ceph iSCSI returns an error for example the HTTP 400 code when trying to delete an iSCSItarget while a user is logged in the Red Hat Ceph Storage Dashboard does not forward that error codeand message to the Dashboard user using the pop-up notifications but displays a generic 500 InternalServer Error Consequently the message that the Dashboard provides is not informative and evenmisleading an expected behavior (users cannot delete a busy resource) is perceived as an operationalfailure (internal server error) To work around this issue see the Dashboard logs

(BZ1786457)

CHAPTER 6 KNOWN ISSUES

17

The Dashboard requires to disable iptables rules

The Red Hat Ceph Storage Dashboard is unable to perform any iSCSI operations such as creating agateway unless all iptables rules are manually disabled on the Ceph iSCSI node To do so use thefollowing command as root or the sudo user

iptables -F

Note that after a reboot the rules are enabled again Either disable them again or delete thempermanently

(BZ1792818)

63 PACKAGES

Current version of Grafana causes certain bugs in the Dashboard

Red Hat Ceph Storage 4 uses the Grafana version 524 This version causes the following bugs in theRed Hat Ceph Storage Dashboard

When navigating to Pools gt Overall Performance Grafana returns the following error

TypeError lc[ttype] is undefinedtrue

When viewing a poolrsquos performance details (Pools gt select a pool from the list gt PerformanceDetails) the Grafana bar is displayed along with other graphs and values but it should not bethere

These bugs will be fixed after rebasing to a newer Grafana version in a future release of Red HatCeph Storage

(BZ1786107 BZ1762197 BZ1765536)

64 RED HAT ENTERPRISE LINUX

Ansible cannot start NFS Ganesha if SELinux is in enforcing mode

When using SELinux in enforcing mode on Red Hat Enterprise Linux 81 the ceph-ansible utility fails tostart the NFS Ganesha service because SELinux policy currently does not allow creating a directoryrequired for NFS Ganesha

(BZ1794027 BZ1796160)

Red Hat Ceph Storage 40 Release Notes

18

CHAPTER 7 DEPRECATED FUNCTIONALITYThis section provides an overview of functionality that has been deprecated in all minor releases up tothis release of Red Hat Ceph Storage

Ubuntu is no longer supported

Installing a Red Hat Ceph Storage 4 cluster on Ubuntu is no longer supported Use Red HatEnterprise Linux as the underlying operating system

71 THE CEPH-ANSIBLE UTILITY

Configuring iSCSI gateway using ceph-ansible is no longer supported

Configuring the Ceph iSCSI gateway by using the ceph-ansible utility is no longer supported Use ceph-ansible to install the gateway and then use either the gwcli utility of the Red Hat Ceph StorageDashboard to configure the gateway For details see the Using the Ceph iSCSI Gateway chapter in theBlock Device Guide for Red Hat Ceph Storage 4

72 THE CEPH-DISK UTILITY

ceph-disk is deprecated

With this release the ceph-disk utility is no longer supported The ceph-volume utility is used insteadFor details see the Why does ceph-volume replace `ceph-disk` section in the Administration Guide forRed Hat Ceph Storage 4

73 RADOS

FileStore is no longer supported in production

The FileStore OSD back end is now deprecated because the new BlueStore back end is now fullysupported in production For details see the How to migrate the object store from FileStore toBlueStore section in the Red Hat Ceph Storage 4 Installation Guide

Ceph configuration is now deprecated

The Ceph configuration file (cephconf) is now deprecated in favor of new centralized configurationstored in Monitors For details see the Configuration is now stored in Monitors accessible by using `cephconfig` release note

CHAPTER 7 DEPRECATED FUNCTIONALITY

19

CHAPTER 8 SOURCESThe updated Red Hat Ceph Storage source code packages are available athttpftpredhatcomredhatlinuxenterprise7ServerenRHCEPHSRPMS

Red Hat Ceph Storage 40 Release Notes

20

  • Table of Contents
  • CHAPTER 1 INTRODUCTION
  • CHAPTER 2 ACKNOWLEDGMENTS
  • CHAPTER 3 NEW FEATURES
    • 31 THE CEPH-ANSIBLE UTILITY
    • 32 CEPH MANAGEMENT DASHBOARD
    • 33 CEPH FILE SYSTEM
    • 34 CEPH MEDIC
    • 35 ISCSI GATEWAY
    • 36 OBJECT GATEWAY
    • 37 PACKAGES
    • 38 RADOS
    • 39 BLOCK DEVICES (RBD)
    • 310 RBD MIRRORING
      • CHAPTER 4 BUG FIXES
        • 41 THE CEPH-ANSIBLE UTILITY
        • 42 OBJECT GATEWAY
        • 43 RADOS
          • CHAPTER 5 TECHNOLOGY PREVIEWS
            • 51 CEPH FILE SYSTEM
            • 52 BLOCK DEVICES (RBD)
            • 53 OBJECT GATEWAY
              • CHAPTER 6 KNOWN ISSUES
                • 61 THE CEPH-ANSIBLE UTILITY
                • 62 CEPH MANAGEMENT DASHBOARD
                • 63 PACKAGES
                • 64 RED HAT ENTERPRISE LINUX
                  • CHAPTER 7 DEPRECATED FUNCTIONALITY
                    • 71 THE CEPH-ANSIBLE UTILITY
                    • 72 THE CEPH-DISK UTILITY
                    • 73 RADOS
                      • CHAPTER 8 SOURCES
Page 11: Red Hat Ceph Storage 4...May 20, 2020  · With this release, the Ceph dashboard installation code was merged into the Ceph Ansible playbooks. Ceph Ansible does a containerized deployment

As of Red Hat Ceph Storage 4 non-administrative Ceph users may be used for the ceph-iscsi serviceby setting the cluster_client_name in the etccephiscsi-gatewaycfg on all iSCSI Gateways Thisallows resources to be restricted based on users

Running Ceph iSCSI Gateways can now be removed

As of Red Hat Ceph Storage 4 running iSCSI Gateways can now be removed from a Ceph iSCSI clusterfor maintenance or to reallocate resources The gatewayrsquos iSCSI target and its portals will be stoppedand all iSCSI target objects for that gateway will be removed from the kernel and the gatewayconfiguration Removing gateways that are down is not yet supported

36 OBJECT GATEWAY

The Beast HTTP front end

In Red Hat Ceph Storage 4 the default HTTP front end for the Ceph Object Gateway is Beast TheBeast front end uses the BoostBeast library for HTTP parsing and the BoostAsio library forasynchronous IO For details see the Using the Beast front end in the Object Gateway Configurationand Administration Guide for Red Hat Ceph Storage 4

Support for S3 MFA-Delete

With this release the Ceph Object Gateway supports S3 MFA-Delete using Time-Based One-TimePassword (TOTP) one-time passwords as an authentication factor This feature adds security againstinappropriate data removal You can configure buckets to require a TOTP one-time token in addition tostandard S3 authentication to delete data

Addition of storage classes and lifecycle transitions to the Ceph Object Gateway

The Ceph Object Gateway now provides support for storage classes an S3 compatible representationof the Ceph Object Gatewayrsquos underlying placement targets and for lifecycle transitions a mechanismto migrate objects between classes Storage classes and lifecycle transitions provide a higher level ofcontrol over data placement for the applications that need it They have many potential uses includingperformance tuning for specific workloads as well as automatic migration of inactive data to coldstorage

Users can now create new IAM policies and roles using REST APIs

With the release of Red Hat Ceph Storage 4 REST APIs for IAM roles and user policies are now availablein the same namespace as S3 APIs and can be accessed using the same endpoint as S3 APIs in theCeph Object Gateway This allows end users to create new IAM policies and roles using REST APIs

37 PACKAGES

Ability to install a Ceph cluster using a web-based interface

With this release the Cockpit web-based interface is supported Cockpit allows you to install a Red HatCeph Storage 4 cluster and other components such as Metadata Servers Ceph clients or the CephObject Gateway on base-metal or in containers For details see the Installing Red Hat Ceph Storageusing the Cockpit Web User Interface chapter in the Red Hat Ceph Storage 4 Installation Guide Notethat minimal experience with Red Hat Ceph Storage is required

38 RADOS

Ceph on-wire encryption

Starting with Red Hat Ceph Storage 4 you can enable encryption for all Ceph traffic over the network

Red Hat Ceph Storage 40 Release Notes

8

with the introduction of the messenger version 2 protocol For details see the Ceph on-wire encryptionchapter in the Architecture Guide and Encryption in transit section in the Data Security and HardeningGuide for Red Hat Ceph Storage 4

OSD BlueStore is now fully supported

BlueStore is a new back end for the OSD daemons that allows for storing objects directly on the blockdevices Because BlueStore does not need any file system interface it improves performance of Cephstorage clusters To learn more about the BlueStore OSD back end see the OSD BlueStore chapter inthe Administration Guide for Red Hat Ceph Storage 4

Red Hat Enterprise Linux in FIPS mode

With this release you can install Red Hat Ceph Storage on Red Hat Enterprise Linux where the FIPSmode is enabled

Changes the ceph df output and a new ceph osd df command

The output of the ceph df command has been improved Notably the RAW USED and RAW USEDvalues now show the preallocated space for the db and wal BlueStore partitions The ceph osd dfcommand shows the OSD utilization stats such as the amount of written data

Asynchronous recovery for non-acting OSD sets

Previously recovery with Ceph was a synchronous process by blocking write operations to objects untilthose objects were recovered In this release the recovery process is now asynchronous by not blockingwrite operations to objects only in the non-acting set of OSDs This new feature requires having morethan the minimum number of replicas as to have enough OSDs in the non-acting set

The new configuration option osd_async_recovery_min_cost controls how much asynchronousrecovery to do The default value for this option is 100 A higher value means asynchronous recovery willbe less whereas a lower value means asynchronous recovery will be more

Configuration is now stored in Monitors accessible by using ceph config

In this release Red Hat Ceph Storage centralizes configuration in Monitors instead of using the Cephconfiguration file (cephconf) Previously changing configuration included manually updating cephconf distributing it to appropriate nodes and restarting all affected daemons Now the Monitorsmanage a configuration database that has the same semantic structure as cephconf The database isaccessible by the ceph config command Any changes to the configuration are applied to daemons orclients in the system immediately and restarting them is no longer needed Use the ceph config -hcommand for details on the available set of commands Note that a Ceph configuration file is stillrequired to identify the Monitors nodes

Placement groups can now be auto-scaled

Red Hat Ceph Storage 4 introduces the ability to auto-scale placement groups (PGs) The number ofplacement groups (PGs) in a pool plays a significant role in how a cluster peers distributes data andrebalances Auto-scaling the number of PGs can make managing the cluster easier The new pg-autoscaling command provides recommendations for scaling PGs or automatically scales PGs basedon how the cluster is being used For more details about auto-scaling PGs see the Auto-scalingplacement groups section in the Storage Strategies Guide for Red Hat Ceph Storage 4

Introduction of diskprediction module

The Red Hat Ceph Storage diskprediction module gathers metrics to predict disk failures before theyhappen The module has two modes cloud and local With this release only the local mode is supportedThe local mode does not require any external server for data analysis It uses an internal predictormodule for disk prediction service and then returns the disk prediction result to the Ceph system

CHAPTER 3 NEW FEATURES

9

To enable the diskprediction module

ceph mgr module enable diskprediction_local

To set the prediction mode

ceph config set global device_failure_prediction_mode local

To disable the diskprediction module

ceph config set global device_failure_prediction_mode none

New configurable option mon_memory_target

Red Hat Ceph Storage 4 introduces a new configurable option mon_memory_target used to set thetarget amount of bytes for Monitor memory usage It specifies the amount of memory to allocate andmanage using the priority cache tuner for the associated Monitor daemon caches The default value of mon_memory_target is set to 2 GiB and you can change it during runtime with

ceph config set global mon_memory_target size

Prior to this release as a cluster scaled the Monitor specific RSS usage exceeded the limits that wereset using the mon_osd_cache_size option which led to issues This enhancement allows for improvedmanagement of memory allocated to the monitor caches and keeps the usage within specified limits

39 BLOCK DEVICES (RBD)

Erasure coding for Ceph Block Device

Erasure coding for Ceph Block Device (RBD) is now fully supported This feature allows RBD to storetheir data in an erasure coded pool For details see the Erasure Coding with Overwrites section in theStorage Strategies for Red Hat Ceph Storage 4

RBD performance monitoring and metrics gathering tools

Red Hat Ceph Storage 4 now incorporates new Ceph Block Device performance monitoring utilities foraggregated RBD image metrics for IOPS throughput and latency Per-image RBD metrics are nowavailable using the Ceph Manager Prometheus module the Ceph Dashboard and the rbd CLI using the rbd perf image iostat or rbd perf image iotop commands

Cloned images can be created from non-primary images

Creating cloned child RBD images from mirrored non-primary parent image is now supportedPreviously cloning of mirrored images was only supported for primary images When cloning goldenimages for virtual machines this restriction prevented the creation of new cloned images from thegolden non-primary image This update removes this restriction and cloned images can be created fromnon-primary mirrored images

Segregating RBD images within isolated namespaces within the same pool

RBD images can now be segregated within isolated namespaces within the same pool When using CephBlock Devices directly without a higher-level system such as OpenStack or OpenShift ContainerStorage it was not possible to restrict user access to specific RBD images When combined with CephXcapabilities users can be restricted to specific pool namespaces to restrict access to RBD images

Red Hat Ceph Storage 40 Release Notes

10

Moving RBD images between different pools within the same cluster

This version of Red Hat Ceph Storage adds the ability to move RBD images between different poolswithin the same cluster For details see the Moving images between pools section in the Block DeviceGuide for Red Hat Ceph Storage 4

Long-running RBD operations can run in the background

Long-running RBD operations such as image removal or cloned image flattening can now bescheduled to run in the background RBD operations that involve iterating over every backing RADOSobject for the image can take a long time depending on the size of the image When using the CLI toperform one of these operations the rbd CLI is blocked until the operation is complete Theseoperations can now be scheduled to run by the Ceph Manager as a background task by using the ceph rbd task add commands The progress of these tasks is visible on the Ceph dashboard as well as byusing the CLI

310 RBD MIRRORING

Support for multiple active instances of RBD mirror daemon in a single storage cluster

Red Hat Ceph Storage 4 now supports deploying multiple active instances of the RBD mirror daemon ina single storage cluster This enables multiple RBD mirror daemons to perform replication for the RBDimages or pools using an algorithm for chunking the images across the number of active mirroringdaemons

CHAPTER 3 NEW FEATURES

11

CHAPTER 4 BUG FIXESThis section describes bugs with significant impact on users that were fixed in this release of Red HatCeph Storage In addition the section includes descriptions of fixed known issues found in previousversions

41 THE CEPH-ANSIBLE UTILITY

The Ansible playbook no longer takes hours to complete the fuser command

Previously on a system running thousands of processes the fuser command in the Ansible playbookcould take several minutes or hours to complete because it iterates over all PID present in the procdirectory Because of this the handler tasks take a long time to complete checking if a Ceph process isalready running With this update instead of using the fuser command the Ansible playbook now checksthe socket file in the procnetunix directory and the handler tasks that check the Ceph socketcompletes almost instantly

(BZ1717011)

The purge-docker-clusteryml Ansible playbook no longer fails

Previously the purge-docker-clusteryml Ansible playbook could fail when trying to unmap RADOSBlock Devices (RBDs) because either the binary was absent or because the Atomic host versionprovided was too old With this update Ansible now uses the sysfs method to unmap devices if thereare any and the purge-docker-clusteryml playbook no longer fails

(BZ1766064)

42 OBJECT GATEWAY

The Expiration Days S3 Lifecycle parameter can now be set to 0

The Ceph Object Gateway did not accept the value of 0 for the Expiration Days Lifecycle configuration parameter. Consequently, setting the expiration to 0 could not be used to trigger the background delete operation of objects. With this update, Expiration Days can be set to 0 as expected.

(BZ1493476)
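
As an illustrative sketch only, such a rule could be applied with a generic S3 client such as the AWS CLI; the endpoint and bucket names are placeholders, and whether a zero value is passed through depends on the client's own validation:

aws --endpoint-url http://rgw.example.com:8080 s3api put-bucket-lifecycle-configuration --bucket mybucket --lifecycle-configuration '{"Rules":[{"ID":"expire-now","Status":"Enabled","Filter":{"Prefix":""},"Expiration":{"Days":0}}]}'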

Archive zone fetches the current version of the source objects

An object is synced from multiple source zones to the archive zone. This behavior can lead to different versions of the same object on the archive zone. This update eliminates duplicate versions from getting created by ensuring that the archive zone fetches the current version of the source object.

(BZ1760862)

4.3. RADOS

A message to set expected_num_objects is no longer shown when using BlueStore

With this update, a message that recommends setting the expected_num_objects parameter during the creation of BlueStore pools has been removed, because this message does not apply when using the BlueStore OSD back end.

(BZ1650922)

Deprecated JSON fields have been removed

This update removes deprecated fields from the JSON output of the ceph status command.

(BZ1739233)

CHAPTER 5. TECHNOLOGY PREVIEWS

This section provides an overview of Technology Preview features introduced or updated in this release of Red Hat Ceph Storage.

IMPORTANT

Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information on Red Hat Technology Preview features support scope, see https://access.redhat.com/support/offerings/techpreview/.

5.1. CEPH FILE SYSTEM

CephFS snapshots

Ceph File System (CephFS) supports taking snapshots as a Technology Preview. Snapshots create an immutable view of the file system at the point in time they are taken.
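
As a brief illustration (the mount path and snapshot name are examples, and snapshots must be enabled on the file system), a snapshot of a mounted CephFS directory is created and removed through its hidden .snap directory:

mkdir /mnt/cephfs/.snap/mysnap
rmdir /mnt/cephfs/.snap/mysnap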

5.2. BLOCK DEVICES (RBD)

Mapping RBD images to NBD images

The rbd-nbd utility maps RADOS Block Device (RBD) images to Network Block Devices (NBD) and enables Ceph clients to access volumes and images in Kubernetes environments. To use rbd-nbd, install the rbd-nbd package. For details, see the rbd-nbd(7) manual page.
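
As a short sketch (pool, image, and device names are placeholders), rbd-nbd map prints the NBD device it attaches, for example /dev/nbd0, which is later passed to unmap:

rbd-nbd map mypool/image1
rbd-nbd list-mapped
rbd-nbd unmap /dev/nbd0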

5.3. OBJECT GATEWAY

Object Gateway archive site

With this release, an archive site is supported as a Technology Preview. The archive site allows you to have a history of versions of S3 objects that can only be eliminated through the gateways associated with the archive zone. Including an archive zone in a multizone configuration allows you to have the flexibility of an S3 object history in only one zone, while saving the space that the replicas of the versioned S3 objects would consume in the rest of the zones.

Tiering within a cluster by disk type

This release adds the ability to tier within a cluster by disk type as a Technology Preview, by using the capabilities for mapping, for example, pools to placement targets and storage classes. Using lifecycle transition rules, it is possible to cause objects to migrate between storage pools based on policy.
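
A minimal sketch of the zone configuration side, assuming a default zone and zonegroup and an example storage class and data pool (all names are illustrative); the migration itself is then expressed as an S3 lifecycle rule whose Transition action names the storage class:

radosgw-admin zonegroup placement add --rgw-zonegroup default --placement-id default-placement --storage-class COLD
radosgw-admin zone placement add --rgw-zone default --placement-id default-placement --storage-class COLD --data-pool cold.rgw.buckets.data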

S3 bucket notifications

S3 bucket notifications are now supported as a Technology Preview. When certain events are triggered on an S3 bucket, the notifications can be sent from the Ceph Object Gateway to HTTP, Advanced Message Queuing Protocol (AMQP) 0.9.1, and Kafka endpoints. Additionally, the notifications can be stored in a "PubSub" zone instead of, or in addition to, sending them to the endpoints. "PubSub" is a publish-subscribe model that enables recipients to pull notifications from Ceph.

To use the S3 notifications, install the librabbitmq and librdkafka packages.
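
One possible shape of the calls, sketched with a generic AWS CLI client against the gateway's SNS-compatible and S3 APIs; the endpoint, broker, topic, and bucket names are placeholders and not taken from this document:

# Create a topic that pushes events to an AMQP 0.9.1 broker
aws --endpoint-url http://rgw.example.com:8080 sns create-topic --name mytopic --attributes push-endpoint=amqp://user:pass@broker.example.com:5672,amqp-exchange=ex1,amqp-ack-level=broker
# Send object-created events on a bucket to that topic
aws --endpoint-url http://rgw.example.com:8080 s3api put-bucket-notification-configuration --bucket mybucket --notification-configuration '{"TopicConfigurations":[{"Id":"notif1","TopicArn":"arn:aws:sns:default::mytopic","Events":["s3:ObjectCreated:*"]}]}'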

CHAPTER 6. KNOWN ISSUES

This section documents known issues found in this release of Red Hat Ceph Storage.

6.1. THE CEPH-ANSIBLE UTILITY

Red Hat Ceph Storage installation on Red Hat OpenStack Platform fails

The ceph-ansible utility becomes unresponsive when attempting to install Red Hat Ceph Storage together with Red Hat OpenStack Platform 16, and it returns an error similar to the following one:

Error: unable to exec into ceph-mon-dcn1-computehci1-2: no container with name or ID ceph-mon-dcn1-computehci1-2 found: no such container

To work around this issue, update the following part of the handler_osds.yml file, located in the ceph-ansible/roles/ceph-handler/tasks/ directory:

- name: unset noup flag
  command: "{{ container_exec_cmd | default('') }} ceph --cluster {{ cluster }} osd unset noup"
  delegate_to: "{{ groups[mon_group_name][0] }}"
  changed_when: False

To

- name: unset noup flag
  command: "{{ hostvars[groups[mon_group_name][0]]['container_exec_cmd'] | default('') }} ceph --cluster {{ cluster }} osd unset noup"
  delegate_to: "{{ groups[mon_group_name][0] }}"
  changed_when: False

Then start the installation process again.

(BZ1792320)

Ansible does not unset the norebalance flag after it completes

The rolling-update.yml Ansible playbook does not unset the norebalance flag after it completes. To work around this issue, unset the flag manually.
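
For example, the flag can be cleared with the following command:

ceph osd unset norebalance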

(BZ1793564)

Ansible fails to upgrade a multisite Ceph Object Gateway when the Dashboard is enabled

When the Red Hat Ceph Storage Dashboard is enabled, an attempt to use Ansible to upgrade to a further version of Red Hat Ceph Storage fails when trying to upgrade the secondary Ceph Object Gateway site in a multisite setup. This bug does not occur on the primary site or if the Dashboard is not enabled.

(BZ1794351)

6.2. CEPH MANAGEMENT DASHBOARD

The Dashboard does not show correct values for certain configuration options

Both the Red Hat Ceph Storage Dashboard and the underlying ceph config show command do not return the current values for certain configuration options, such as fsid. This is probably because certain core options that are not meant for further modification after deploying a cluster are not updated, and a default value is used. As a result, the Dashboard does not show correct values for certain configuration options.

(BZ1765493 BZ1772310)

NFS Ganesha in Dashboard

The Red Hat Ceph Storage Dashboard currently does not support managing NFS Ganesha.

(BZ1772316)

Dashboard does not support email verification

The Red Hat Ceph Storage Dashboard does not support any email verification when changing a user's password. This behavior is intentional, because the Dashboard supports Single Sign-On (SSO) and this feature can be delegated to the SSO provider.

(BZ1778608)

The OSD histogram graph for read and write operations is not clear

The Red Hat Ceph Storage Dashboard does not display any numbers or description in the OSD histogram graph for read and write operations, and therefore the graph is not clear.

(BZ1779170)

Dashboard returns an error when a ceph-mgr module is enabled from the ceph CLI

When enabling a Ceph Manager (ceph-mgr) module, such as telemetry, from the ceph CLI, the Red Hat Ceph Storage Dashboard displays the following error message:

0 - Unknown Error

In addition, the Dashboard does not mark the module as enabled until the Refresh button is clicked.

(BZ1785077)

The Dashboard allows modifying the LUN ID and WWN which can lead to data corruption

The Red Hat Ceph Storage Dashboard allows you to modify the LUN ID and its World Wide Name (WWN), which is not required after creating the LUN. Moreover, editing these parameters can be dangerous for certain initiators that do not fully support this feature by default. Consequently, editing these parameters after creating them can lead to data corruption. To avoid this, do not modify the LUN ID and WWN in the Dashboard.

(BZ1786455)

The Dashboard does not provide correct Ceph iSCSI error messages

If Ceph iSCSI returns an error, for example the HTTP 400 code when trying to delete an iSCSI target while a user is logged in, the Red Hat Ceph Storage Dashboard does not forward that error code and message to the Dashboard user using the pop-up notifications, but displays a generic 500 Internal Server Error. Consequently, the message that the Dashboard provides is not informative and even misleading; an expected behavior (users cannot delete a busy resource) is perceived as an operational failure (internal server error). To work around this issue, see the Dashboard logs.

(BZ1786457)

The Dashboard requires iptables rules to be disabled

The Red Hat Ceph Storage Dashboard is unable to perform any iSCSI operations, such as creating a gateway, unless all iptables rules are manually disabled on the Ceph iSCSI node. To do so, use the following command as root or the sudo user:

iptables -F

Note that after a reboot, the rules are enabled again. Either disable them again, or delete them permanently.

(BZ1792818)

6.3. PACKAGES

Current version of Grafana causes certain bugs in the Dashboard

Red Hat Ceph Storage 4 uses Grafana version 5.2.4. This version causes the following bugs in the Red Hat Ceph Storage Dashboard:

When navigating to Pools > Overall Performance, Grafana returns the following error:

TypeError: l.c[t.type] is undefined
true

When viewing a pool's performance details (Pools > select a pool from the list > Performance Details), the Grafana bar is displayed along with other graphs and values, but it should not be there.

These bugs will be fixed after rebasing to a newer Grafana version in a future release of Red Hat Ceph Storage.

(BZ1786107 BZ1762197 BZ1765536)

6.4. RED HAT ENTERPRISE LINUX

Ansible cannot start NFS Ganesha if SELinux is in enforcing mode

When using SELinux in enforcing mode on Red Hat Enterprise Linux 8.1, the ceph-ansible utility fails to start the NFS Ganesha service, because the SELinux policy currently does not allow creating a directory required for NFS Ganesha.

(BZ1794027 BZ1796160)

CHAPTER 7. DEPRECATED FUNCTIONALITY

This section provides an overview of functionality that has been deprecated in all minor releases up to this release of Red Hat Ceph Storage.

Ubuntu is no longer supported

Installing a Red Hat Ceph Storage 4 cluster on Ubuntu is no longer supported. Use Red Hat Enterprise Linux as the underlying operating system.

7.1. THE CEPH-ANSIBLE UTILITY

Configuring iSCSI gateway using ceph-ansible is no longer supported

Configuring the Ceph iSCSI gateway by using the ceph-ansible utility is no longer supported. Use ceph-ansible to install the gateway, and then use either the gwcli utility or the Red Hat Ceph Storage Dashboard to configure the gateway. For details, see the Using the Ceph iSCSI Gateway chapter in the Block Device Guide for Red Hat Ceph Storage 4.

7.2. THE CEPH-DISK UTILITY

ceph-disk is deprecated

With this release, the ceph-disk utility is no longer supported. The ceph-volume utility is used instead. For details, see the Why does `ceph-volume` replace `ceph-disk` section in the Administration Guide for Red Hat Ceph Storage 4.

7.3. RADOS

FileStore is no longer supported in production

The FileStore OSD back end is now deprecated because the new BlueStore back end is now fully supported in production. For details, see the How to migrate the object store from FileStore to BlueStore section in the Red Hat Ceph Storage 4 Installation Guide.

Ceph configuration is now deprecated

The Ceph configuration file (ceph.conf) is now deprecated in favor of the new centralized configuration stored in Monitors. For details, see the Configuration is now stored in Monitors accessible by using `ceph config` release note.

CHAPTER 8. SOURCES

The updated Red Hat Ceph Storage source code packages are available at http://ftp.redhat.com/redhat/linux/enterprise/7Server/en/RHCEPH/SRPMS/.

Creating cloned child RBD images from mirrored non-primary parent image is now supportedPreviously cloning of mirrored images was only supported for primary images When cloning goldenimages for virtual machines this restriction prevented the creation of new cloned images from thegolden non-primary image This update removes this restriction and cloned images can be created fromnon-primary mirrored images

Segregating RBD images within isolated namespaces within the same pool

RBD images can now be segregated within isolated namespaces within the same pool When using CephBlock Devices directly without a higher-level system such as OpenStack or OpenShift ContainerStorage it was not possible to restrict user access to specific RBD images When combined with CephXcapabilities users can be restricted to specific pool namespaces to restrict access to RBD images

Red Hat Ceph Storage 40 Release Notes

10

Moving RBD images between different pools within the same cluster

This version of Red Hat Ceph Storage adds the ability to move RBD images between different poolswithin the same cluster For details see the Moving images between pools section in the Block DeviceGuide for Red Hat Ceph Storage 4

Long-running RBD operations can run in the background

Long-running RBD operations such as image removal or cloned image flattening can now bescheduled to run in the background RBD operations that involve iterating over every backing RADOSobject for the image can take a long time depending on the size of the image When using the CLI toperform one of these operations the rbd CLI is blocked until the operation is complete Theseoperations can now be scheduled to run by the Ceph Manager as a background task by using the ceph rbd task add commands The progress of these tasks is visible on the Ceph dashboard as well as byusing the CLI

310 RBD MIRRORING

Support for multiple active instances of RBD mirror daemon in a single storage cluster

Red Hat Ceph Storage 4 now supports deploying multiple active instances of the RBD mirror daemon ina single storage cluster This enables multiple RBD mirror daemons to perform replication for the RBDimages or pools using an algorithm for chunking the images across the number of active mirroringdaemons

CHAPTER 3 NEW FEATURES

11

CHAPTER 4 BUG FIXESThis section describes bugs with significant impact on users that were fixed in this release of Red HatCeph Storage In addition the section includes descriptions of fixed known issues found in previousversions

41 THE CEPH-ANSIBLE UTILITY

The Ansible playbook no longer takes hours to complete the fuser command

Previously on a system running thousands of processes the fuser command in the Ansible playbookcould take several minutes or hours to complete because it iterates over all PID present in the procdirectory Because of this the handler tasks take a long time to complete checking if a Ceph process isalready running With this update instead of using the fuser command the Ansible playbook now checksthe socket file in the procnetunix directory and the handler tasks that check the Ceph socketcompletes almost instantly

(BZ1717011)

The purge-docker-clusteryml Ansible playbook no longer fails

Previously the purge-docker-clusteryml Ansible playbook could fail when trying to unmap RADOSBlock Devices (RBDs) because either the binary was absent or because the Atomic host versionprovided was too old With this update Ansible now uses the sysfs method to unmap devices if thereare any and the purge-docker-clusteryml playbook no longer fails

(BZ1766064)

42 OBJECT GATEWAY

The Expiration Days S3 Lifecycle parameter can now be set to 0

The Ceph Object Gateway did not accept the value of 0 for the Expiration Days Lifecycleconfiguration parameter Consequently setting the expiration to 0 could not be used to triggerbackground delete operation of objects With this update Expiration Days can be set to 0 as expected

(BZ1493476)

Archive zone fetches the current version of the source objects

An object is synced from multiple source zones to archive zone This behavior can lead to differentversions of the same object on the archive zone This update eliminates duplicate versions from gettingcreated by ensuring that the archive zone fetches the current version of the source object

(BZ1760862)

43 RADOS

A message to set expected_num_objects is no longer shown when using BlueStore

With this update a message that recommends setting the expected_num_obejcts parameter duringthe creation of the BlueStore pools has been removed because this message does not apply when usingthe BlueStore OSD back end

(BZ1650922)

Red Hat Ceph Storage 40 Release Notes

12

Deprecated JSON fields have been removed

This update removes deprecated fields from the JSON output of the ceph status command

(BZ1739233)

CHAPTER 4 BUG FIXES

13

CHAPTER 5 TECHNOLOGY PREVIEWSThis section provides an overview of Technology Preview features introduced or updated in this releaseof Red Hat Ceph Storage

IMPORTANT

Technology Preview features are not supported with Red Hat production service levelagreements (SLAs) might not be functionally complete and Red Hat does notrecommend to use them for production These features provide early access toupcoming product features enabling customers to test functionality and providefeedback during the development process

For more information on Red Hat Technology Preview features support scope seehttpsaccessredhatcomsupportofferingstechpreview

51 CEPH FILE SYSTEM

CephFS snapshots

Ceph File System (CephFS) supports taking snapshots as a Technology Preview Snapshots create animmutable view of the file system at the point in time they are taken

52 BLOCK DEVICES (RBD)

Mapping RBD images to NBD images

The rbd-nbd utility maps RADOS Block Device (RBD) images to Network Block Devices (NBD) andenables Ceph clients to access volumes and images in Kubernetes environments To use rbd-nbd installthe rbd-nbd package For details see the rbd-nbd(7) manual page

53 OBJECT GATEWAY

Object Gateway archive site

With this release an archive site is supported as a Technology Preview The archive site allows you tohave a history of versions of S3 objects that can only be eliminated through the gateways associatedwith the archive zone Including an archive zone in a multizone configuration allows you to have theflexibility of an S3 object history in only one zone while saving the space that the replicas of the versionsS3 objects would consume in the rest of the zones

Tiering within a cluster by disk type

This release adds the ability to tier within a cluster by disk type as a Technology Preview by using thecapabilities for mapping for example pools to placement targets and storage classes Using lifecycletransition rules it is possible to cause objects to migrate between storage pools based on policy

S3 bucket notifications

S3 bucket notifications are now supported as a Technology Preview When certain events are triggeredon an S3 bucket the notifications can be sent from the Ceph Object Gateway to HTTP AdvancedMessage Queuing Protocol (AMQP) 91 and Kafka endpoints Additionally the notifications can bestored in a ldquoPubSubrdquo zone instead of or in addition to sending them to the endpoints ldquoPubSubrdquo is apublish-subscribe model that enables recipients to pull notifications from Ceph

Red Hat Ceph Storage 40 Release Notes

14

To use the S3 notifications install the librabbitmq and librdkafka packages

CHAPTER 5 TECHNOLOGY PREVIEWS

15

CHAPTER 6 KNOWN ISSUESThis section documents known issues found in this release of Red Hat Ceph Storage

61 THE CEPH-ANSIBLE UTILITY

Red Hat Ceph Storage installation on Red Hat OpenStack Platform fails

The ceph-ansible utility becomes unresponsive when attempting to install Red Hat Ceph Storagetogether with the Red Hat OpenStack Platform 16 and it returns an error similar to the following one

Error unable to exec into ceph-mon-dcn1-computehci1-2 no container with name or ID ceph-mon-dcn1-computehci1-2 found no such container

To work around this issue update the following part of the handler_osdsyml file located in the ceph-ansiblerolesceph-handlertasks directory

- name unset noup flag command container_exec_cmd | default() ceph --cluster cluster osd unset noup delegate_to groups[mon_group_name][0] changed_when False

To

- name unset noup flag command hostvars[groups[mon_group_name][0]][container_exec_cmd] | default() ceph --cluster cluster osd unset noup delegate_to groups[mon_group_name][0] changed_when False

And start the installation process again

(BZ1792320)

Ansible does not unset the norebalance flag after it completes

The rolling-updateyml Ansible playbook does not unset the norebalance flag after it completes Towork around this issue unset the flag manually

(BZ1793564)

Ansible fails to upgrade a multisite Ceph Object Gateway when the Dashboard is enabled

When the Red Hat Ceph Storage Dashboard is enabled an attempt to use Ansible to upgrade to afurther version of Red Hat Ceph Storage fails when trying to upgrade the secondary Ceph ObjectGateway site in a multisite setup This bug does not occur on the primary site or if the Dashboard is notenabled

(BZ1794351)

62 CEPH MANAGEMENT DASHBOARD

The Dashboard does not show correct values for certain configuration options

Both the Red Hat Ceph Storage Dashboard and the underlying ceph config show command do not

Red Hat Ceph Storage 40 Release Notes

16

return the current values for certain configuration options such as fsid This is probably because certaincore options that are not meant for further modification after deploying a cluster are not updated and adefault value is used As a result the Dashboard does not show correct values for certain configurationoptions

(BZ1765493 BZ1772310)

NFS Ganesha in Dashboard

The Red Hat Ceph Storage Dashboard currently does not support managing NFS Ganesha

(BZ1772316)

Dashboard does not support email verification

The Red Hat Ceph Storage Dashboard does not support any email verification when changing a userspassword This behavior is intentional because the Dashboard supports Single Sign-On (SSO) and thisfeature can be delegated to the SSO provider

(BZ1778608)

The OSD histogram graph for read and write operations is not clear

The Red Hat Ceph Storage Dashboard does not display any numbers or description in the OSDhistogram graph for read and write operations and therefore the graph is not clear

(BZ1779170)

Dashboard returns an error when a ceph-mgr module is enabled from the ceph CLI

When enabling a Ceph Manager (ceph-mgr) module such as telemetry from the ceph CLI the Red HatCeph Storage Dashboard displays the following error message

0 - Unknown Error

In addition the Dashboard does not mark the module as enabled until the Refresh button is clicked

(BZ1785077)

The Dashboard allows modifying the LUN ID and WWN which can lead to data corruption

The Red Hat Ceph Storage Dashboard allows you to modify the LUN ID and its World Wide Name(WWN) which is not required after creating the LUN Moreover editing these parameters can bedangerous for certain initiators that do not fully support this feature by default Consequently editingthese parameters after creating them can lead to data corruption To avoid this do not modify the LUNID and WWN in the Dashboard

(BZ1786455)

The Dashboard does not provide correct Ceph iSCSI error messages

If the Ceph iSCSI returns an error for example the HTTP 400 code when trying to delete an iSCSItarget while a user is logged in the Red Hat Ceph Storage Dashboard does not forward that error codeand message to the Dashboard user using the pop-up notifications but displays a generic 500 InternalServer Error Consequently the message that the Dashboard provides is not informative and evenmisleading an expected behavior (users cannot delete a busy resource) is perceived as an operationalfailure (internal server error) To work around this issue see the Dashboard logs

(BZ1786457)

CHAPTER 6 KNOWN ISSUES

17

The Dashboard requires to disable iptables rules

The Red Hat Ceph Storage Dashboard is unable to perform any iSCSI operations such as creating agateway unless all iptables rules are manually disabled on the Ceph iSCSI node To do so use thefollowing command as root or the sudo user

iptables -F

Note that after a reboot the rules are enabled again Either disable them again or delete thempermanently

(BZ1792818)

63 PACKAGES

Current version of Grafana causes certain bugs in the Dashboard

Red Hat Ceph Storage 4 uses the Grafana version 524 This version causes the following bugs in theRed Hat Ceph Storage Dashboard

When navigating to Pools gt Overall Performance Grafana returns the following error

TypeError lc[ttype] is undefinedtrue

When viewing a poolrsquos performance details (Pools gt select a pool from the list gt PerformanceDetails) the Grafana bar is displayed along with other graphs and values but it should not bethere

These bugs will be fixed after rebasing to a newer Grafana version in a future release of Red HatCeph Storage

(BZ1786107 BZ1762197 BZ1765536)

64 RED HAT ENTERPRISE LINUX

Ansible cannot start NFS Ganesha if SELinux is in enforcing mode

When using SELinux in enforcing mode on Red Hat Enterprise Linux 81 the ceph-ansible utility fails tostart the NFS Ganesha service because SELinux policy currently does not allow creating a directoryrequired for NFS Ganesha

(BZ1794027 BZ1796160)

Red Hat Ceph Storage 40 Release Notes

18

CHAPTER 7 DEPRECATED FUNCTIONALITYThis section provides an overview of functionality that has been deprecated in all minor releases up tothis release of Red Hat Ceph Storage

Ubuntu is no longer supported

Installing a Red Hat Ceph Storage 4 cluster on Ubuntu is no longer supported Use Red HatEnterprise Linux as the underlying operating system

71 THE CEPH-ANSIBLE UTILITY

Configuring iSCSI gateway using ceph-ansible is no longer supported

Configuring the Ceph iSCSI gateway by using the ceph-ansible utility is no longer supported Use ceph-ansible to install the gateway and then use either the gwcli utility of the Red Hat Ceph StorageDashboard to configure the gateway For details see the Using the Ceph iSCSI Gateway chapter in theBlock Device Guide for Red Hat Ceph Storage 4

72 THE CEPH-DISK UTILITY

ceph-disk is deprecated

With this release the ceph-disk utility is no longer supported The ceph-volume utility is used insteadFor details see the Why does ceph-volume replace `ceph-disk` section in the Administration Guide forRed Hat Ceph Storage 4

73 RADOS

FileStore is no longer supported in production

The FileStore OSD back end is now deprecated because the new BlueStore back end is now fullysupported in production For details see the How to migrate the object store from FileStore toBlueStore section in the Red Hat Ceph Storage 4 Installation Guide

Ceph configuration is now deprecated

The Ceph configuration file (cephconf) is now deprecated in favor of new centralized configurationstored in Monitors For details see the Configuration is now stored in Monitors accessible by using `cephconfig` release note

CHAPTER 7 DEPRECATED FUNCTIONALITY

19

CHAPTER 8 SOURCESThe updated Red Hat Ceph Storage source code packages are available athttpftpredhatcomredhatlinuxenterprise7ServerenRHCEPHSRPMS

Red Hat Ceph Storage 40 Release Notes

20

  • Table of Contents
  • CHAPTER 1 INTRODUCTION
  • CHAPTER 2 ACKNOWLEDGMENTS
  • CHAPTER 3 NEW FEATURES
    • 31 THE CEPH-ANSIBLE UTILITY
    • 32 CEPH MANAGEMENT DASHBOARD
    • 33 CEPH FILE SYSTEM
    • 34 CEPH MEDIC
    • 35 ISCSI GATEWAY
    • 36 OBJECT GATEWAY
    • 37 PACKAGES
    • 38 RADOS
    • 39 BLOCK DEVICES (RBD)
    • 310 RBD MIRRORING
      • CHAPTER 4 BUG FIXES
        • 41 THE CEPH-ANSIBLE UTILITY
        • 42 OBJECT GATEWAY
        • 43 RADOS
          • CHAPTER 5 TECHNOLOGY PREVIEWS
            • 51 CEPH FILE SYSTEM
            • 52 BLOCK DEVICES (RBD)
            • 53 OBJECT GATEWAY
              • CHAPTER 6 KNOWN ISSUES
                • 61 THE CEPH-ANSIBLE UTILITY
                • 62 CEPH MANAGEMENT DASHBOARD
                • 63 PACKAGES
                • 64 RED HAT ENTERPRISE LINUX
                  • CHAPTER 7 DEPRECATED FUNCTIONALITY
                    • 71 THE CEPH-ANSIBLE UTILITY
                    • 72 THE CEPH-DISK UTILITY
                    • 73 RADOS
                      • CHAPTER 8 SOURCES
Page 13: Red Hat Ceph Storage 4...May 20, 2020  · With this release, the Ceph dashboard installation code was merged into the Ceph Ansible playbooks. Ceph Ansible does a containerized deployment

To enable the diskprediction module

ceph mgr module enable diskprediction_local

To set the prediction mode

ceph config set global device_failure_prediction_mode local

To disable the diskprediction module

ceph config set global device_failure_prediction_mode none

New configurable option mon_memory_target

Red Hat Ceph Storage 4 introduces a new configurable option mon_memory_target used to set thetarget amount of bytes for Monitor memory usage It specifies the amount of memory to allocate andmanage using the priority cache tuner for the associated Monitor daemon caches The default value of mon_memory_target is set to 2 GiB and you can change it during runtime with

ceph config set global mon_memory_target size

Prior to this release as a cluster scaled the Monitor specific RSS usage exceeded the limits that wereset using the mon_osd_cache_size option which led to issues This enhancement allows for improvedmanagement of memory allocated to the monitor caches and keeps the usage within specified limits

39 BLOCK DEVICES (RBD)

Erasure coding for Ceph Block Device

Erasure coding for Ceph Block Device (RBD) is now fully supported This feature allows RBD to storetheir data in an erasure coded pool For details see the Erasure Coding with Overwrites section in theStorage Strategies for Red Hat Ceph Storage 4

RBD performance monitoring and metrics gathering tools

Red Hat Ceph Storage 4 now incorporates new Ceph Block Device performance monitoring utilities foraggregated RBD image metrics for IOPS throughput and latency Per-image RBD metrics are nowavailable using the Ceph Manager Prometheus module the Ceph Dashboard and the rbd CLI using the rbd perf image iostat or rbd perf image iotop commands

Cloned images can be created from non-primary images

Creating cloned child RBD images from mirrored non-primary parent image is now supportedPreviously cloning of mirrored images was only supported for primary images When cloning goldenimages for virtual machines this restriction prevented the creation of new cloned images from thegolden non-primary image This update removes this restriction and cloned images can be created fromnon-primary mirrored images

Segregating RBD images within isolated namespaces within the same pool

RBD images can now be segregated within isolated namespaces within the same pool When using CephBlock Devices directly without a higher-level system such as OpenStack or OpenShift ContainerStorage it was not possible to restrict user access to specific RBD images When combined with CephXcapabilities users can be restricted to specific pool namespaces to restrict access to RBD images

Red Hat Ceph Storage 40 Release Notes

10

Moving RBD images between different pools within the same cluster

This version of Red Hat Ceph Storage adds the ability to move RBD images between different poolswithin the same cluster For details see the Moving images between pools section in the Block DeviceGuide for Red Hat Ceph Storage 4

Long-running RBD operations can run in the background

Long-running RBD operations such as image removal or cloned image flattening can now bescheduled to run in the background RBD operations that involve iterating over every backing RADOSobject for the image can take a long time depending on the size of the image When using the CLI toperform one of these operations the rbd CLI is blocked until the operation is complete Theseoperations can now be scheduled to run by the Ceph Manager as a background task by using the ceph rbd task add commands The progress of these tasks is visible on the Ceph dashboard as well as byusing the CLI

310 RBD MIRRORING

Support for multiple active instances of RBD mirror daemon in a single storage cluster

Red Hat Ceph Storage 4 now supports deploying multiple active instances of the RBD mirror daemon ina single storage cluster This enables multiple RBD mirror daemons to perform replication for the RBDimages or pools using an algorithm for chunking the images across the number of active mirroringdaemons

CHAPTER 3 NEW FEATURES

11

CHAPTER 4 BUG FIXESThis section describes bugs with significant impact on users that were fixed in this release of Red HatCeph Storage In addition the section includes descriptions of fixed known issues found in previousversions

41 THE CEPH-ANSIBLE UTILITY

The Ansible playbook no longer takes hours to complete the fuser command

Previously on a system running thousands of processes the fuser command in the Ansible playbookcould take several minutes or hours to complete because it iterates over all PID present in the procdirectory Because of this the handler tasks take a long time to complete checking if a Ceph process isalready running With this update instead of using the fuser command the Ansible playbook now checksthe socket file in the procnetunix directory and the handler tasks that check the Ceph socketcompletes almost instantly

(BZ1717011)

The purge-docker-clusteryml Ansible playbook no longer fails

Previously the purge-docker-clusteryml Ansible playbook could fail when trying to unmap RADOSBlock Devices (RBDs) because either the binary was absent or because the Atomic host versionprovided was too old With this update Ansible now uses the sysfs method to unmap devices if thereare any and the purge-docker-clusteryml playbook no longer fails

(BZ1766064)

42 OBJECT GATEWAY

The Expiration Days S3 Lifecycle parameter can now be set to 0

The Ceph Object Gateway did not accept the value of 0 for the Expiration Days Lifecycleconfiguration parameter Consequently setting the expiration to 0 could not be used to triggerbackground delete operation of objects With this update Expiration Days can be set to 0 as expected

(BZ1493476)

Archive zone fetches the current version of the source objects

An object is synced from multiple source zones to archive zone This behavior can lead to differentversions of the same object on the archive zone This update eliminates duplicate versions from gettingcreated by ensuring that the archive zone fetches the current version of the source object

(BZ1760862)

43 RADOS

A message to set expected_num_objects is no longer shown when using BlueStore

With this update a message that recommends setting the expected_num_obejcts parameter duringthe creation of the BlueStore pools has been removed because this message does not apply when usingthe BlueStore OSD back end

(BZ1650922)

Red Hat Ceph Storage 40 Release Notes

12

Deprecated JSON fields have been removed

This update removes deprecated fields from the JSON output of the ceph status command

(BZ1739233)

CHAPTER 4 BUG FIXES

13

CHAPTER 5 TECHNOLOGY PREVIEWSThis section provides an overview of Technology Preview features introduced or updated in this releaseof Red Hat Ceph Storage

IMPORTANT

Technology Preview features are not supported with Red Hat production service levelagreements (SLAs) might not be functionally complete and Red Hat does notrecommend to use them for production These features provide early access toupcoming product features enabling customers to test functionality and providefeedback during the development process

For more information on Red Hat Technology Preview features support scope seehttpsaccessredhatcomsupportofferingstechpreview

51 CEPH FILE SYSTEM

CephFS snapshots

Ceph File System (CephFS) supports taking snapshots as a Technology Preview Snapshots create animmutable view of the file system at the point in time they are taken

52 BLOCK DEVICES (RBD)

Mapping RBD images to NBD images

The rbd-nbd utility maps RADOS Block Device (RBD) images to Network Block Devices (NBD) andenables Ceph clients to access volumes and images in Kubernetes environments To use rbd-nbd installthe rbd-nbd package For details see the rbd-nbd(7) manual page

53 OBJECT GATEWAY

Object Gateway archive site

With this release an archive site is supported as a Technology Preview The archive site allows you tohave a history of versions of S3 objects that can only be eliminated through the gateways associatedwith the archive zone Including an archive zone in a multizone configuration allows you to have theflexibility of an S3 object history in only one zone while saving the space that the replicas of the versionsS3 objects would consume in the rest of the zones

Tiering within a cluster by disk type

This release adds the ability to tier within a cluster by disk type as a Technology Preview by using thecapabilities for mapping for example pools to placement targets and storage classes Using lifecycletransition rules it is possible to cause objects to migrate between storage pools based on policy

S3 bucket notifications

S3 bucket notifications are now supported as a Technology Preview When certain events are triggeredon an S3 bucket the notifications can be sent from the Ceph Object Gateway to HTTP AdvancedMessage Queuing Protocol (AMQP) 91 and Kafka endpoints Additionally the notifications can bestored in a ldquoPubSubrdquo zone instead of or in addition to sending them to the endpoints ldquoPubSubrdquo is apublish-subscribe model that enables recipients to pull notifications from Ceph

Red Hat Ceph Storage 40 Release Notes

14

To use the S3 notifications install the librabbitmq and librdkafka packages

CHAPTER 5 TECHNOLOGY PREVIEWS

15

CHAPTER 6 KNOWN ISSUESThis section documents known issues found in this release of Red Hat Ceph Storage

61 THE CEPH-ANSIBLE UTILITY

Red Hat Ceph Storage installation on Red Hat OpenStack Platform fails

The ceph-ansible utility becomes unresponsive when attempting to install Red Hat Ceph Storagetogether with the Red Hat OpenStack Platform 16 and it returns an error similar to the following one

Error unable to exec into ceph-mon-dcn1-computehci1-2 no container with name or ID ceph-mon-dcn1-computehci1-2 found no such container

To work around this issue update the following part of the handler_osdsyml file located in the ceph-ansiblerolesceph-handlertasks directory

- name unset noup flag command container_exec_cmd | default() ceph --cluster cluster osd unset noup delegate_to groups[mon_group_name][0] changed_when False

To

- name unset noup flag command hostvars[groups[mon_group_name][0]][container_exec_cmd] | default() ceph --cluster cluster osd unset noup delegate_to groups[mon_group_name][0] changed_when False

And start the installation process again

(BZ1792320)

Ansible does not unset the norebalance flag after it completes

The rolling-updateyml Ansible playbook does not unset the norebalance flag after it completes Towork around this issue unset the flag manually

(BZ1793564)

Ansible fails to upgrade a multisite Ceph Object Gateway when the Dashboard is enabled

When the Red Hat Ceph Storage Dashboard is enabled an attempt to use Ansible to upgrade to afurther version of Red Hat Ceph Storage fails when trying to upgrade the secondary Ceph ObjectGateway site in a multisite setup This bug does not occur on the primary site or if the Dashboard is notenabled

(BZ1794351)

62 CEPH MANAGEMENT DASHBOARD

The Dashboard does not show correct values for certain configuration options

Both the Red Hat Ceph Storage Dashboard and the underlying ceph config show command do not

Red Hat Ceph Storage 40 Release Notes

16

return the current values for certain configuration options such as fsid This is probably because certaincore options that are not meant for further modification after deploying a cluster are not updated and adefault value is used As a result the Dashboard does not show correct values for certain configurationoptions

(BZ1765493 BZ1772310)

NFS Ganesha in Dashboard

The Red Hat Ceph Storage Dashboard currently does not support managing NFS Ganesha

(BZ1772316)

Dashboard does not support email verification

The Red Hat Ceph Storage Dashboard does not support any email verification when changing a userspassword This behavior is intentional because the Dashboard supports Single Sign-On (SSO) and thisfeature can be delegated to the SSO provider

(BZ1778608)

The OSD histogram graph for read and write operations is not clear

The Red Hat Ceph Storage Dashboard does not display any numbers or description in the OSDhistogram graph for read and write operations and therefore the graph is not clear

(BZ1779170)

Dashboard returns an error when a ceph-mgr module is enabled from the ceph CLI

When enabling a Ceph Manager (ceph-mgr) module such as telemetry from the ceph CLI the Red HatCeph Storage Dashboard displays the following error message

0 - Unknown Error

In addition the Dashboard does not mark the module as enabled until the Refresh button is clicked

(BZ1785077)

The Dashboard allows modifying the LUN ID and WWN which can lead to data corruption

The Red Hat Ceph Storage Dashboard allows you to modify the LUN ID and its World Wide Name(WWN) which is not required after creating the LUN Moreover editing these parameters can bedangerous for certain initiators that do not fully support this feature by default Consequently editingthese parameters after creating them can lead to data corruption To avoid this do not modify the LUNID and WWN in the Dashboard

(BZ1786455)

The Dashboard does not provide correct Ceph iSCSI error messages

If the Ceph iSCSI returns an error for example the HTTP 400 code when trying to delete an iSCSItarget while a user is logged in the Red Hat Ceph Storage Dashboard does not forward that error codeand message to the Dashboard user using the pop-up notifications but displays a generic 500 InternalServer Error Consequently the message that the Dashboard provides is not informative and evenmisleading an expected behavior (users cannot delete a busy resource) is perceived as an operationalfailure (internal server error) To work around this issue see the Dashboard logs

(BZ1786457)

CHAPTER 6 KNOWN ISSUES

17

The Dashboard requires to disable iptables rules

The Red Hat Ceph Storage Dashboard is unable to perform any iSCSI operations such as creating agateway unless all iptables rules are manually disabled on the Ceph iSCSI node To do so use thefollowing command as root or the sudo user

iptables -F

Note that after a reboot the rules are enabled again Either disable them again or delete thempermanently

(BZ1792818)

63 PACKAGES

Current version of Grafana causes certain bugs in the Dashboard

Red Hat Ceph Storage 4 uses the Grafana version 524 This version causes the following bugs in theRed Hat Ceph Storage Dashboard

When navigating to Pools gt Overall Performance Grafana returns the following error

TypeError lc[ttype] is undefinedtrue

When viewing a poolrsquos performance details (Pools gt select a pool from the list gt PerformanceDetails) the Grafana bar is displayed along with other graphs and values but it should not bethere

These bugs will be fixed after rebasing to a newer Grafana version in a future release of Red HatCeph Storage

(BZ1786107 BZ1762197 BZ1765536)

64 RED HAT ENTERPRISE LINUX

Ansible cannot start NFS Ganesha if SELinux is in enforcing mode

When using SELinux in enforcing mode on Red Hat Enterprise Linux 81 the ceph-ansible utility fails tostart the NFS Ganesha service because SELinux policy currently does not allow creating a directoryrequired for NFS Ganesha

(BZ1794027 BZ1796160)

Red Hat Ceph Storage 40 Release Notes

18

CHAPTER 7 DEPRECATED FUNCTIONALITYThis section provides an overview of functionality that has been deprecated in all minor releases up tothis release of Red Hat Ceph Storage

Ubuntu is no longer supported

Installing a Red Hat Ceph Storage 4 cluster on Ubuntu is no longer supported Use Red HatEnterprise Linux as the underlying operating system

71 THE CEPH-ANSIBLE UTILITY

Configuring iSCSI gateway using ceph-ansible is no longer supported

Configuring the Ceph iSCSI gateway by using the ceph-ansible utility is no longer supported Use ceph-ansible to install the gateway and then use either the gwcli utility of the Red Hat Ceph StorageDashboard to configure the gateway For details see the Using the Ceph iSCSI Gateway chapter in theBlock Device Guide for Red Hat Ceph Storage 4

72 THE CEPH-DISK UTILITY

ceph-disk is deprecated

With this release the ceph-disk utility is no longer supported The ceph-volume utility is used insteadFor details see the Why does ceph-volume replace `ceph-disk` section in the Administration Guide forRed Hat Ceph Storage 4

73 RADOS

FileStore is no longer supported in production

The FileStore OSD back end is now deprecated because the new BlueStore back end is now fullysupported in production For details see the How to migrate the object store from FileStore toBlueStore section in the Red Hat Ceph Storage 4 Installation Guide

Ceph configuration is now deprecated

The Ceph configuration file (cephconf) is now deprecated in favor of new centralized configurationstored in Monitors For details see the Configuration is now stored in Monitors accessible by using `cephconfig` release note

CHAPTER 7 DEPRECATED FUNCTIONALITY

19

CHAPTER 8 SOURCESThe updated Red Hat Ceph Storage source code packages are available athttpftpredhatcomredhatlinuxenterprise7ServerenRHCEPHSRPMS

Red Hat Ceph Storage 40 Release Notes

20

  • Table of Contents
  • CHAPTER 1 INTRODUCTION
  • CHAPTER 2 ACKNOWLEDGMENTS
  • CHAPTER 3 NEW FEATURES
    • 31 THE CEPH-ANSIBLE UTILITY
    • 32 CEPH MANAGEMENT DASHBOARD
    • 33 CEPH FILE SYSTEM
    • 34 CEPH MEDIC
    • 35 ISCSI GATEWAY
    • 36 OBJECT GATEWAY
    • 37 PACKAGES
    • 38 RADOS
    • 39 BLOCK DEVICES (RBD)
    • 310 RBD MIRRORING
      • CHAPTER 4 BUG FIXES
        • 41 THE CEPH-ANSIBLE UTILITY
        • 42 OBJECT GATEWAY
        • 43 RADOS
          • CHAPTER 5 TECHNOLOGY PREVIEWS
            • 51 CEPH FILE SYSTEM
            • 52 BLOCK DEVICES (RBD)
            • 53 OBJECT GATEWAY
              • CHAPTER 6 KNOWN ISSUES
                • 61 THE CEPH-ANSIBLE UTILITY
                • 62 CEPH MANAGEMENT DASHBOARD
                • 63 PACKAGES
                • 64 RED HAT ENTERPRISE LINUX
                  • CHAPTER 7 DEPRECATED FUNCTIONALITY
                    • 71 THE CEPH-ANSIBLE UTILITY
                    • 72 THE CEPH-DISK UTILITY
                    • 73 RADOS
                      • CHAPTER 8 SOURCES
Page 14: Red Hat Ceph Storage 4...May 20, 2020  · With this release, the Ceph dashboard installation code was merged into the Ceph Ansible playbooks. Ceph Ansible does a containerized deployment

Moving RBD images between different pools within the same cluster

This version of Red Hat Ceph Storage adds the ability to move RBD images between different poolswithin the same cluster For details see the Moving images between pools section in the Block DeviceGuide for Red Hat Ceph Storage 4

Long-running RBD operations can run in the background

Long-running RBD operations such as image removal or cloned image flattening can now bescheduled to run in the background RBD operations that involve iterating over every backing RADOSobject for the image can take a long time depending on the size of the image When using the CLI toperform one of these operations the rbd CLI is blocked until the operation is complete Theseoperations can now be scheduled to run by the Ceph Manager as a background task by using the ceph rbd task add commands The progress of these tasks is visible on the Ceph dashboard as well as byusing the CLI

310 RBD MIRRORING

Support for multiple active instances of RBD mirror daemon in a single storage cluster

Red Hat Ceph Storage 4 now supports deploying multiple active instances of the RBD mirror daemon ina single storage cluster This enables multiple RBD mirror daemons to perform replication for the RBDimages or pools using an algorithm for chunking the images across the number of active mirroringdaemons

CHAPTER 3 NEW FEATURES

11

CHAPTER 4 BUG FIXESThis section describes bugs with significant impact on users that were fixed in this release of Red HatCeph Storage In addition the section includes descriptions of fixed known issues found in previousversions

41 THE CEPH-ANSIBLE UTILITY

The Ansible playbook no longer takes hours to complete the fuser command

Previously on a system running thousands of processes the fuser command in the Ansible playbookcould take several minutes or hours to complete because it iterates over all PID present in the procdirectory Because of this the handler tasks take a long time to complete checking if a Ceph process isalready running With this update instead of using the fuser command the Ansible playbook now checksthe socket file in the procnetunix directory and the handler tasks that check the Ceph socketcompletes almost instantly

(BZ1717011)

The purge-docker-clusteryml Ansible playbook no longer fails

Previously the purge-docker-clusteryml Ansible playbook could fail when trying to unmap RADOSBlock Devices (RBDs) because either the binary was absent or because the Atomic host versionprovided was too old With this update Ansible now uses the sysfs method to unmap devices if thereare any and the purge-docker-clusteryml playbook no longer fails

(BZ1766064)

42 OBJECT GATEWAY

The Expiration Days S3 Lifecycle parameter can now be set to 0

The Ceph Object Gateway did not accept the value of 0 for the Expiration Days Lifecycleconfiguration parameter Consequently setting the expiration to 0 could not be used to triggerbackground delete operation of objects With this update Expiration Days can be set to 0 as expected

(BZ1493476)

Archive zone fetches the current version of the source objects

An object is synced from multiple source zones to archive zone This behavior can lead to differentversions of the same object on the archive zone This update eliminates duplicate versions from gettingcreated by ensuring that the archive zone fetches the current version of the source object

(BZ1760862)

43 RADOS

A message to set expected_num_objects is no longer shown when using BlueStore

With this update, a message that recommends setting the expected_num_objects parameter during the creation of BlueStore pools has been removed, because this message does not apply when using the BlueStore OSD back end.

(BZ1650922)


Deprecated JSON fields have been removed

This update removes deprecated fields from the JSON output of the ceph status command.
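
For example, the JSON output can be inspected with:

ceph status --format json-pretty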

(BZ1739233)


CHAPTER 5. TECHNOLOGY PREVIEWS

This section provides an overview of Technology Preview features introduced or updated in this release of Red Hat Ceph Storage.

IMPORTANT

Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information on the Red Hat Technology Preview features support scope, see https://access.redhat.com/support/offerings/techpreview.

5.1. CEPH FILE SYSTEM

CephFS snapshots

Ceph File System (CephFS) supports taking snapshots as a Technology Preview. Snapshots create an immutable view of the file system at the point in time they are taken.
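
A snapshot is created by making a directory inside the hidden .snap directory of the directory to snapshot. A minimal sketch, assuming a CephFS file system named cephfs mounted at /mnt/cephfs (both names are illustrative):

# Snapshot creation may first need to be enabled on the file system
ceph fs set cephfs allow_new_snaps true
# Create a snapshot of /mnt/cephfs/mydir
mkdir /mnt/cephfs/mydir/.snap/my-snapshot
# Remove the snapshot again
rmdir /mnt/cephfs/mydir/.snap/my-snapshot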

5.2. BLOCK DEVICES (RBD)

Mapping RBD images to NBD images

The rbd-nbd utility maps RADOS Block Device (RBD) images to Network Block Devices (NBD) and enables Ceph clients to access volumes and images in Kubernetes environments. To use rbd-nbd, install the rbd-nbd package. For details, see the rbd-nbd(7) manual page.
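
A brief usage sketch, assuming a pool named mypool and an image named image1 (illustrative names):

# Map the RBD image to the first free /dev/nbdX device
rbd-nbd map mypool/image1
# List the current rbd-nbd mappings
rbd-nbd list-mapped
# Unmap the device when finished
rbd-nbd unmap /dev/nbd0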

5.3. OBJECT GATEWAY

Object Gateway archive site

With this release, an archive site is supported as a Technology Preview. The archive site allows you to have a history of versions of S3 objects that can only be eliminated through the gateways associated with the archive zone. Including an archive zone in a multizone configuration allows you to have the flexibility of an S3 object history in only one zone, while saving the space that the replicas of the versioned S3 objects would consume in the rest of the zones.

Tiering within a cluster by disk type

This release adds the ability to tier within a cluster by disk type as a Technology Preview, by using the capabilities for mapping, for example, pools to placement targets and storage classes. Using lifecycle transition rules, it is possible to cause objects to migrate between storage pools based on policy.
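
As an illustrative sketch, a lifecycle transition rule moves objects to another storage class after a number of days; the endpoint, bucket name, and the COLD storage class below are hypothetical, and the storage class must already be defined in the zonegroup placement:

aws --endpoint-url http://rgw.example.com:8080 s3api put-bucket-lifecycle-configuration \
    --bucket mybucket \
    --lifecycle-configuration '{"Rules":[{"ID":"to-cold","Status":"Enabled","Filter":{"Prefix":""},"Transitions":[{"Days":30,"StorageClass":"COLD"}]}]}'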

S3 bucket notifications

S3 bucket notifications are now supported as a Technology Preview. When certain events are triggered on an S3 bucket, the notifications can be sent from the Ceph Object Gateway to HTTP, Advanced Message Queuing Protocol (AMQP) 0.9.1, and Kafka endpoints. Additionally, the notifications can be stored in a "PubSub" zone instead of, or in addition to, sending them to the endpoints. "PubSub" is a publish-subscribe model that enables recipients to pull notifications from Ceph.


To use the S3 notifications, install the librabbitmq and librdkafka packages.


CHAPTER 6. KNOWN ISSUES

This section documents known issues found in this release of Red Hat Ceph Storage.

6.1. THE CEPH-ANSIBLE UTILITY

Red Hat Ceph Storage installation on Red Hat OpenStack Platform fails

The ceph-ansible utility becomes unresponsive when attempting to install Red Hat Ceph Storage together with Red Hat OpenStack Platform 16, and it returns an error similar to the following one:

Error: unable to exec into ceph-mon-dcn1-computehci1-2: no container with name or ID ceph-mon-dcn1-computehci1-2 found: no such container

To work around this issue, update the following part of the handler_osds.yml file, located in the ceph-ansible/roles/ceph-handler/tasks/ directory:

- name: unset noup flag
  command: "{{ container_exec_cmd | default('') }} ceph --cluster {{ cluster }} osd unset noup"
  delegate_to: "{{ groups[mon_group_name][0] }}"
  changed_when: False

To

- name: unset noup flag
  command: "{{ hostvars[groups[mon_group_name][0]]['container_exec_cmd'] | default('') }} ceph --cluster {{ cluster }} osd unset noup"
  delegate_to: "{{ groups[mon_group_name][0] }}"
  changed_when: False

Then start the installation process again.

(BZ1792320)

Ansible does not unset the norebalance flag after it completes

The rolling-update.yml Ansible playbook does not unset the norebalance flag after it completes. To work around this issue, unset the flag manually.
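
For example, the flag can be cleared manually after the playbook finishes:

ceph osd unset norebalance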

(BZ1793564)

Ansible fails to upgrade a multisite Ceph Object Gateway when the Dashboard is enabled

When the Red Hat Ceph Storage Dashboard is enabled, an attempt to use Ansible to upgrade to a further version of Red Hat Ceph Storage fails when trying to upgrade the secondary Ceph Object Gateway site in a multisite setup. This bug does not occur on the primary site or if the Dashboard is not enabled.

(BZ1794351)

6.2. CEPH MANAGEMENT DASHBOARD

The Dashboard does not show correct values for certain configuration options

Both the Red Hat Ceph Storage Dashboard and the underlying ceph config show command do not return the current values for certain configuration options, such as fsid. This is probably because certain core options that are not meant for further modification after deploying a cluster are not updated, and a default value is used. As a result, the Dashboard does not show correct values for certain configuration options.

(BZ1765493 BZ1772310)

NFS Ganesha in Dashboard

The Red Hat Ceph Storage Dashboard currently does not support managing NFS Ganesha.

(BZ1772316)

Dashboard does not support email verification

The Red Hat Ceph Storage Dashboard does not support any email verification when changing a user's password. This behavior is intentional because the Dashboard supports Single Sign-On (SSO), and this feature can be delegated to the SSO provider.

(BZ1778608)

The OSD histogram graph for read and write operations is not clear

The Red Hat Ceph Storage Dashboard does not display any numbers or description in the OSD histogram graph for read and write operations, and therefore the graph is not clear.

(BZ1779170)

Dashboard returns an error when a ceph-mgr module is enabled from the ceph CLI

When enabling a Ceph Manager (ceph-mgr) module, such as telemetry, from the ceph CLI, the Red Hat Ceph Storage Dashboard displays the following error message:

0 - Unknown Error

In addition, the Dashboard does not mark the module as enabled until the Refresh button is clicked.

(BZ1785077)

The Dashboard allows modifying the LUN ID and WWN which can lead to data corruption

The Red Hat Ceph Storage Dashboard allows you to modify the LUN ID and its World Wide Name (WWN), which is not required after creating the LUN. Moreover, editing these parameters can be dangerous for certain initiators that do not fully support this feature by default. Consequently, editing these parameters after creating them can lead to data corruption. To avoid this, do not modify the LUN ID and WWN in the Dashboard.

(BZ1786455)

The Dashboard does not provide correct Ceph iSCSI error messages

If Ceph iSCSI returns an error, for example the HTTP 400 code, when trying to delete an iSCSI target while a user is logged in, the Red Hat Ceph Storage Dashboard does not forward that error code and message to the Dashboard user using the pop-up notifications, but displays a generic 500 Internal Server Error. Consequently, the message that the Dashboard provides is not informative and even misleading; an expected behavior (users cannot delete a busy resource) is perceived as an operational failure (internal server error). To work around this issue, see the Dashboard logs.

(BZ1786457)


The Dashboard requires iptables rules to be disabled

The Red Hat Ceph Storage Dashboard is unable to perform any iSCSI operations, such as creating a gateway, unless all iptables rules are manually disabled on the Ceph iSCSI node. To do so, use the following command as the root or sudo user:

iptables -F

Note that after a reboot, the rules are enabled again. Either disable them again, or delete them permanently.

(BZ1792818)

6.3. PACKAGES

Current version of Grafana causes certain bugs in the Dashboard

Red Hat Ceph Storage 4 uses Grafana version 5.2.4. This version causes the following bugs in the Red Hat Ceph Storage Dashboard:

When navigating to Pools > Overall Performance, Grafana returns the following error:

TypeError lc[ttype] is undefinedtrue

When viewing a pool's performance details (Pools > select a pool from the list > Performance Details), the Grafana bar is displayed along with other graphs and values, but it should not be there.

These bugs will be fixed after rebasing to a newer Grafana version in a future release of Red Hat Ceph Storage.

(BZ1786107 BZ1762197 BZ1765536)

6.4. RED HAT ENTERPRISE LINUX

Ansible cannot start NFS Ganesha if SELinux is in enforcing mode

When using SELinux in enforcing mode on Red Hat Enterprise Linux 8.1, the ceph-ansible utility fails to start the NFS Ganesha service because the SELinux policy currently does not allow creating a directory required for NFS Ganesha.

(BZ1794027 BZ1796160)


CHAPTER 7. DEPRECATED FUNCTIONALITY

This section provides an overview of functionality that has been deprecated in all minor releases up to this release of Red Hat Ceph Storage.

Ubuntu is no longer supported

Installing a Red Hat Ceph Storage 4 cluster on Ubuntu is no longer supported. Use Red Hat Enterprise Linux as the underlying operating system.

7.1. THE CEPH-ANSIBLE UTILITY

Configuring iSCSI gateway using ceph-ansible is no longer supported

Configuring the Ceph iSCSI gateway by using the ceph-ansible utility is no longer supported. Use ceph-ansible to install the gateway, and then use either the gwcli utility or the Red Hat Ceph Storage Dashboard to configure the gateway. For details, see the Using the Ceph iSCSI Gateway chapter in the Block Device Guide for Red Hat Ceph Storage 4.

7.2. THE CEPH-DISK UTILITY

ceph-disk is deprecated

With this release, the ceph-disk utility is no longer supported. The ceph-volume utility is used instead. For details, see the Why does ceph-volume replace `ceph-disk` section in the Administration Guide for Red Hat Ceph Storage 4.

7.3. RADOS

FileStore is no longer supported in production

The FileStore OSD back end is now deprecated because the new BlueStore back end is now fully supported in production. For details, see the How to migrate the object store from FileStore to BlueStore section in the Red Hat Ceph Storage 4 Installation Guide.

Ceph configuration is now deprecated

The Ceph configuration file (ceph.conf) is now deprecated in favor of a new centralized configuration stored in the Monitors. For details, see the Configuration is now stored in Monitors accessible by using `ceph config` release note.
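
For illustration, typical commands for working with the centralized configuration; the option name and value below are examples only:

# Show the configuration currently stored in the Monitors
ceph config dump
# Set an option centrally instead of editing ceph.conf
ceph config set osd osd_memory_target 4294967296
# Read the option back for the osd daemon type
ceph config get osd osd_memory_target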


CHAPTER 8. SOURCES

The updated Red Hat Ceph Storage source code packages are available at http://ftp.redhat.com/redhat/linux/enterprise/7Server/en/RHCEPH/SRPMS/.

Red Hat Ceph Storage 40 Release Notes

20

  • Table of Contents
  • CHAPTER 1 INTRODUCTION
  • CHAPTER 2 ACKNOWLEDGMENTS
  • CHAPTER 3 NEW FEATURES
    • 31 THE CEPH-ANSIBLE UTILITY
    • 32 CEPH MANAGEMENT DASHBOARD
    • 33 CEPH FILE SYSTEM
    • 34 CEPH MEDIC
    • 35 ISCSI GATEWAY
    • 36 OBJECT GATEWAY
    • 37 PACKAGES
    • 38 RADOS
    • 39 BLOCK DEVICES (RBD)
    • 310 RBD MIRRORING
      • CHAPTER 4 BUG FIXES
        • 41 THE CEPH-ANSIBLE UTILITY
        • 42 OBJECT GATEWAY
        • 43 RADOS
          • CHAPTER 5 TECHNOLOGY PREVIEWS
            • 51 CEPH FILE SYSTEM
            • 52 BLOCK DEVICES (RBD)
            • 53 OBJECT GATEWAY
              • CHAPTER 6 KNOWN ISSUES
                • 61 THE CEPH-ANSIBLE UTILITY
                • 62 CEPH MANAGEMENT DASHBOARD
                • 63 PACKAGES
                • 64 RED HAT ENTERPRISE LINUX
                  • CHAPTER 7 DEPRECATED FUNCTIONALITY
                    • 71 THE CEPH-ANSIBLE UTILITY
                    • 72 THE CEPH-DISK UTILITY
                    • 73 RADOS
                      • CHAPTER 8 SOURCES
Page 15: Red Hat Ceph Storage 4...May 20, 2020  · With this release, the Ceph dashboard installation code was merged into the Ceph Ansible playbooks. Ceph Ansible does a containerized deployment

CHAPTER 4 BUG FIXESThis section describes bugs with significant impact on users that were fixed in this release of Red HatCeph Storage In addition the section includes descriptions of fixed known issues found in previousversions

41 THE CEPH-ANSIBLE UTILITY

The Ansible playbook no longer takes hours to complete the fuser command

Previously on a system running thousands of processes the fuser command in the Ansible playbookcould take several minutes or hours to complete because it iterates over all PID present in the procdirectory Because of this the handler tasks take a long time to complete checking if a Ceph process isalready running With this update instead of using the fuser command the Ansible playbook now checksthe socket file in the procnetunix directory and the handler tasks that check the Ceph socketcompletes almost instantly

(BZ1717011)

The purge-docker-clusteryml Ansible playbook no longer fails

Previously the purge-docker-clusteryml Ansible playbook could fail when trying to unmap RADOSBlock Devices (RBDs) because either the binary was absent or because the Atomic host versionprovided was too old With this update Ansible now uses the sysfs method to unmap devices if thereare any and the purge-docker-clusteryml playbook no longer fails

(BZ1766064)

42 OBJECT GATEWAY

The Expiration Days S3 Lifecycle parameter can now be set to 0

The Ceph Object Gateway did not accept the value of 0 for the Expiration Days Lifecycleconfiguration parameter Consequently setting the expiration to 0 could not be used to triggerbackground delete operation of objects With this update Expiration Days can be set to 0 as expected

(BZ1493476)

Archive zone fetches the current version of the source objects

An object is synced from multiple source zones to archive zone This behavior can lead to differentversions of the same object on the archive zone This update eliminates duplicate versions from gettingcreated by ensuring that the archive zone fetches the current version of the source object

(BZ1760862)

43 RADOS

A message to set expected_num_objects is no longer shown when using BlueStore

With this update a message that recommends setting the expected_num_obejcts parameter duringthe creation of the BlueStore pools has been removed because this message does not apply when usingthe BlueStore OSD back end

(BZ1650922)

Red Hat Ceph Storage 40 Release Notes

12

Deprecated JSON fields have been removed

This update removes deprecated fields from the JSON output of the ceph status command

(BZ1739233)

CHAPTER 4 BUG FIXES

13

CHAPTER 5 TECHNOLOGY PREVIEWSThis section provides an overview of Technology Preview features introduced or updated in this releaseof Red Hat Ceph Storage

IMPORTANT

Technology Preview features are not supported with Red Hat production service levelagreements (SLAs) might not be functionally complete and Red Hat does notrecommend to use them for production These features provide early access toupcoming product features enabling customers to test functionality and providefeedback during the development process

For more information on Red Hat Technology Preview features support scope seehttpsaccessredhatcomsupportofferingstechpreview

51 CEPH FILE SYSTEM

CephFS snapshots

Ceph File System (CephFS) supports taking snapshots as a Technology Preview Snapshots create animmutable view of the file system at the point in time they are taken

52 BLOCK DEVICES (RBD)

Mapping RBD images to NBD images

The rbd-nbd utility maps RADOS Block Device (RBD) images to Network Block Devices (NBD) andenables Ceph clients to access volumes and images in Kubernetes environments To use rbd-nbd installthe rbd-nbd package For details see the rbd-nbd(7) manual page

53 OBJECT GATEWAY

Object Gateway archive site

With this release an archive site is supported as a Technology Preview The archive site allows you tohave a history of versions of S3 objects that can only be eliminated through the gateways associatedwith the archive zone Including an archive zone in a multizone configuration allows you to have theflexibility of an S3 object history in only one zone while saving the space that the replicas of the versionsS3 objects would consume in the rest of the zones

Tiering within a cluster by disk type

This release adds the ability to tier within a cluster by disk type as a Technology Preview by using thecapabilities for mapping for example pools to placement targets and storage classes Using lifecycletransition rules it is possible to cause objects to migrate between storage pools based on policy

S3 bucket notifications

S3 bucket notifications are now supported as a Technology Preview When certain events are triggeredon an S3 bucket the notifications can be sent from the Ceph Object Gateway to HTTP AdvancedMessage Queuing Protocol (AMQP) 91 and Kafka endpoints Additionally the notifications can bestored in a ldquoPubSubrdquo zone instead of or in addition to sending them to the endpoints ldquoPubSubrdquo is apublish-subscribe model that enables recipients to pull notifications from Ceph

Red Hat Ceph Storage 40 Release Notes

14

To use the S3 notifications install the librabbitmq and librdkafka packages

CHAPTER 5 TECHNOLOGY PREVIEWS

15

CHAPTER 6 KNOWN ISSUESThis section documents known issues found in this release of Red Hat Ceph Storage

61 THE CEPH-ANSIBLE UTILITY

Red Hat Ceph Storage installation on Red Hat OpenStack Platform fails

The ceph-ansible utility becomes unresponsive when attempting to install Red Hat Ceph Storagetogether with the Red Hat OpenStack Platform 16 and it returns an error similar to the following one

Error unable to exec into ceph-mon-dcn1-computehci1-2 no container with name or ID ceph-mon-dcn1-computehci1-2 found no such container

To work around this issue update the following part of the handler_osdsyml file located in the ceph-ansiblerolesceph-handlertasks directory

- name unset noup flag command container_exec_cmd | default() ceph --cluster cluster osd unset noup delegate_to groups[mon_group_name][0] changed_when False

To

- name unset noup flag command hostvars[groups[mon_group_name][0]][container_exec_cmd] | default() ceph --cluster cluster osd unset noup delegate_to groups[mon_group_name][0] changed_when False

And start the installation process again

(BZ1792320)

Ansible does not unset the norebalance flag after it completes

The rolling-updateyml Ansible playbook does not unset the norebalance flag after it completes Towork around this issue unset the flag manually

(BZ1793564)

Ansible fails to upgrade a multisite Ceph Object Gateway when the Dashboard is enabled

When the Red Hat Ceph Storage Dashboard is enabled an attempt to use Ansible to upgrade to afurther version of Red Hat Ceph Storage fails when trying to upgrade the secondary Ceph ObjectGateway site in a multisite setup This bug does not occur on the primary site or if the Dashboard is notenabled

(BZ1794351)

62 CEPH MANAGEMENT DASHBOARD

The Dashboard does not show correct values for certain configuration options

Both the Red Hat Ceph Storage Dashboard and the underlying ceph config show command do not

Red Hat Ceph Storage 40 Release Notes

16

return the current values for certain configuration options such as fsid This is probably because certaincore options that are not meant for further modification after deploying a cluster are not updated and adefault value is used As a result the Dashboard does not show correct values for certain configurationoptions

(BZ1765493 BZ1772310)

NFS Ganesha in Dashboard

The Red Hat Ceph Storage Dashboard currently does not support managing NFS Ganesha

(BZ1772316)

Dashboard does not support email verification

The Red Hat Ceph Storage Dashboard does not support any email verification when changing a userspassword This behavior is intentional because the Dashboard supports Single Sign-On (SSO) and thisfeature can be delegated to the SSO provider

(BZ1778608)

The OSD histogram graph for read and write operations is not clear

The Red Hat Ceph Storage Dashboard does not display any numbers or description in the OSDhistogram graph for read and write operations and therefore the graph is not clear

(BZ1779170)

Dashboard returns an error when a ceph-mgr module is enabled from the ceph CLI

When enabling a Ceph Manager (ceph-mgr) module such as telemetry from the ceph CLI the Red HatCeph Storage Dashboard displays the following error message

0 - Unknown Error

In addition the Dashboard does not mark the module as enabled until the Refresh button is clicked

(BZ1785077)

The Dashboard allows modifying the LUN ID and WWN which can lead to data corruption

The Red Hat Ceph Storage Dashboard allows you to modify the LUN ID and its World Wide Name(WWN) which is not required after creating the LUN Moreover editing these parameters can bedangerous for certain initiators that do not fully support this feature by default Consequently editingthese parameters after creating them can lead to data corruption To avoid this do not modify the LUNID and WWN in the Dashboard

(BZ1786455)

The Dashboard does not provide correct Ceph iSCSI error messages

If the Ceph iSCSI returns an error for example the HTTP 400 code when trying to delete an iSCSItarget while a user is logged in the Red Hat Ceph Storage Dashboard does not forward that error codeand message to the Dashboard user using the pop-up notifications but displays a generic 500 InternalServer Error Consequently the message that the Dashboard provides is not informative and evenmisleading an expected behavior (users cannot delete a busy resource) is perceived as an operationalfailure (internal server error) To work around this issue see the Dashboard logs

(BZ1786457)

CHAPTER 6 KNOWN ISSUES

17

The Dashboard requires to disable iptables rules

The Red Hat Ceph Storage Dashboard is unable to perform any iSCSI operations such as creating agateway unless all iptables rules are manually disabled on the Ceph iSCSI node To do so use thefollowing command as root or the sudo user

iptables -F

Note that after a reboot the rules are enabled again Either disable them again or delete thempermanently

(BZ1792818)

63 PACKAGES

Current version of Grafana causes certain bugs in the Dashboard

Red Hat Ceph Storage 4 uses the Grafana version 524 This version causes the following bugs in theRed Hat Ceph Storage Dashboard

When navigating to Pools gt Overall Performance Grafana returns the following error

TypeError lc[ttype] is undefinedtrue

When viewing a poolrsquos performance details (Pools gt select a pool from the list gt PerformanceDetails) the Grafana bar is displayed along with other graphs and values but it should not bethere

These bugs will be fixed after rebasing to a newer Grafana version in a future release of Red HatCeph Storage

(BZ1786107 BZ1762197 BZ1765536)

64 RED HAT ENTERPRISE LINUX

Ansible cannot start NFS Ganesha if SELinux is in enforcing mode

When using SELinux in enforcing mode on Red Hat Enterprise Linux 81 the ceph-ansible utility fails tostart the NFS Ganesha service because SELinux policy currently does not allow creating a directoryrequired for NFS Ganesha

(BZ1794027 BZ1796160)

Red Hat Ceph Storage 40 Release Notes

18

CHAPTER 7 DEPRECATED FUNCTIONALITYThis section provides an overview of functionality that has been deprecated in all minor releases up tothis release of Red Hat Ceph Storage

Ubuntu is no longer supported

Installing a Red Hat Ceph Storage 4 cluster on Ubuntu is no longer supported Use Red HatEnterprise Linux as the underlying operating system

71 THE CEPH-ANSIBLE UTILITY

Configuring iSCSI gateway using ceph-ansible is no longer supported

Configuring the Ceph iSCSI gateway by using the ceph-ansible utility is no longer supported Use ceph-ansible to install the gateway and then use either the gwcli utility of the Red Hat Ceph StorageDashboard to configure the gateway For details see the Using the Ceph iSCSI Gateway chapter in theBlock Device Guide for Red Hat Ceph Storage 4

72 THE CEPH-DISK UTILITY

ceph-disk is deprecated

With this release the ceph-disk utility is no longer supported The ceph-volume utility is used insteadFor details see the Why does ceph-volume replace `ceph-disk` section in the Administration Guide forRed Hat Ceph Storage 4

73 RADOS

FileStore is no longer supported in production

The FileStore OSD back end is now deprecated because the new BlueStore back end is now fullysupported in production For details see the How to migrate the object store from FileStore toBlueStore section in the Red Hat Ceph Storage 4 Installation Guide

Ceph configuration is now deprecated

The Ceph configuration file (cephconf) is now deprecated in favor of new centralized configurationstored in Monitors For details see the Configuration is now stored in Monitors accessible by using `cephconfig` release note

CHAPTER 7 DEPRECATED FUNCTIONALITY

19

CHAPTER 8 SOURCESThe updated Red Hat Ceph Storage source code packages are available athttpftpredhatcomredhatlinuxenterprise7ServerenRHCEPHSRPMS

Red Hat Ceph Storage 40 Release Notes

20

  • Table of Contents
  • CHAPTER 1 INTRODUCTION
  • CHAPTER 2 ACKNOWLEDGMENTS
  • CHAPTER 3 NEW FEATURES
    • 31 THE CEPH-ANSIBLE UTILITY
    • 32 CEPH MANAGEMENT DASHBOARD
    • 33 CEPH FILE SYSTEM
    • 34 CEPH MEDIC
    • 35 ISCSI GATEWAY
    • 36 OBJECT GATEWAY
    • 37 PACKAGES
    • 38 RADOS
    • 39 BLOCK DEVICES (RBD)
    • 310 RBD MIRRORING
      • CHAPTER 4 BUG FIXES
        • 41 THE CEPH-ANSIBLE UTILITY
        • 42 OBJECT GATEWAY
        • 43 RADOS
          • CHAPTER 5 TECHNOLOGY PREVIEWS
            • 51 CEPH FILE SYSTEM
            • 52 BLOCK DEVICES (RBD)
            • 53 OBJECT GATEWAY
              • CHAPTER 6 KNOWN ISSUES
                • 61 THE CEPH-ANSIBLE UTILITY
                • 62 CEPH MANAGEMENT DASHBOARD
                • 63 PACKAGES
                • 64 RED HAT ENTERPRISE LINUX
                  • CHAPTER 7 DEPRECATED FUNCTIONALITY
                    • 71 THE CEPH-ANSIBLE UTILITY
                    • 72 THE CEPH-DISK UTILITY
                    • 73 RADOS
                      • CHAPTER 8 SOURCES
Page 16: Red Hat Ceph Storage 4...May 20, 2020  · With this release, the Ceph dashboard installation code was merged into the Ceph Ansible playbooks. Ceph Ansible does a containerized deployment

Deprecated JSON fields have been removed

This update removes deprecated fields from the JSON output of the ceph status command

(BZ1739233)

CHAPTER 4 BUG FIXES

13

CHAPTER 5 TECHNOLOGY PREVIEWSThis section provides an overview of Technology Preview features introduced or updated in this releaseof Red Hat Ceph Storage

IMPORTANT

Technology Preview features are not supported with Red Hat production service levelagreements (SLAs) might not be functionally complete and Red Hat does notrecommend to use them for production These features provide early access toupcoming product features enabling customers to test functionality and providefeedback during the development process

For more information on Red Hat Technology Preview features support scope seehttpsaccessredhatcomsupportofferingstechpreview

51 CEPH FILE SYSTEM

CephFS snapshots

Ceph File System (CephFS) supports taking snapshots as a Technology Preview Snapshots create animmutable view of the file system at the point in time they are taken

52 BLOCK DEVICES (RBD)

Mapping RBD images to NBD images

The rbd-nbd utility maps RADOS Block Device (RBD) images to Network Block Devices (NBD) andenables Ceph clients to access volumes and images in Kubernetes environments To use rbd-nbd installthe rbd-nbd package For details see the rbd-nbd(7) manual page

53 OBJECT GATEWAY

Object Gateway archive site

With this release an archive site is supported as a Technology Preview The archive site allows you tohave a history of versions of S3 objects that can only be eliminated through the gateways associatedwith the archive zone Including an archive zone in a multizone configuration allows you to have theflexibility of an S3 object history in only one zone while saving the space that the replicas of the versionsS3 objects would consume in the rest of the zones

Tiering within a cluster by disk type

This release adds the ability to tier within a cluster by disk type as a Technology Preview by using thecapabilities for mapping for example pools to placement targets and storage classes Using lifecycletransition rules it is possible to cause objects to migrate between storage pools based on policy

S3 bucket notifications

S3 bucket notifications are now supported as a Technology Preview When certain events are triggeredon an S3 bucket the notifications can be sent from the Ceph Object Gateway to HTTP AdvancedMessage Queuing Protocol (AMQP) 91 and Kafka endpoints Additionally the notifications can bestored in a ldquoPubSubrdquo zone instead of or in addition to sending them to the endpoints ldquoPubSubrdquo is apublish-subscribe model that enables recipients to pull notifications from Ceph

Red Hat Ceph Storage 40 Release Notes

14

To use the S3 notifications install the librabbitmq and librdkafka packages

CHAPTER 5 TECHNOLOGY PREVIEWS

15

CHAPTER 6 KNOWN ISSUESThis section documents known issues found in this release of Red Hat Ceph Storage

61 THE CEPH-ANSIBLE UTILITY

Red Hat Ceph Storage installation on Red Hat OpenStack Platform fails

The ceph-ansible utility becomes unresponsive when attempting to install Red Hat Ceph Storagetogether with the Red Hat OpenStack Platform 16 and it returns an error similar to the following one

Error unable to exec into ceph-mon-dcn1-computehci1-2 no container with name or ID ceph-mon-dcn1-computehci1-2 found no such container

To work around this issue update the following part of the handler_osdsyml file located in the ceph-ansiblerolesceph-handlertasks directory

- name unset noup flag command container_exec_cmd | default() ceph --cluster cluster osd unset noup delegate_to groups[mon_group_name][0] changed_when False

To

- name unset noup flag command hostvars[groups[mon_group_name][0]][container_exec_cmd] | default() ceph --cluster cluster osd unset noup delegate_to groups[mon_group_name][0] changed_when False

And start the installation process again

(BZ1792320)

Ansible does not unset the norebalance flag after it completes

The rolling-updateyml Ansible playbook does not unset the norebalance flag after it completes Towork around this issue unset the flag manually

(BZ1793564)

Ansible fails to upgrade a multisite Ceph Object Gateway when the Dashboard is enabled

When the Red Hat Ceph Storage Dashboard is enabled an attempt to use Ansible to upgrade to afurther version of Red Hat Ceph Storage fails when trying to upgrade the secondary Ceph ObjectGateway site in a multisite setup This bug does not occur on the primary site or if the Dashboard is notenabled

(BZ1794351)

62 CEPH MANAGEMENT DASHBOARD

The Dashboard does not show correct values for certain configuration options

Both the Red Hat Ceph Storage Dashboard and the underlying ceph config show command do not

Red Hat Ceph Storage 40 Release Notes

16

return the current values for certain configuration options such as fsid This is probably because certaincore options that are not meant for further modification after deploying a cluster are not updated and adefault value is used As a result the Dashboard does not show correct values for certain configurationoptions

(BZ1765493 BZ1772310)

NFS Ganesha in Dashboard

The Red Hat Ceph Storage Dashboard currently does not support managing NFS Ganesha

(BZ1772316)

Dashboard does not support email verification

The Red Hat Ceph Storage Dashboard does not support any email verification when changing a userspassword This behavior is intentional because the Dashboard supports Single Sign-On (SSO) and thisfeature can be delegated to the SSO provider

(BZ1778608)

The OSD histogram graph for read and write operations is not clear

The Red Hat Ceph Storage Dashboard does not display any numbers or description in the OSDhistogram graph for read and write operations and therefore the graph is not clear

(BZ1779170)

Dashboard returns an error when a ceph-mgr module is enabled from the ceph CLI

When enabling a Ceph Manager (ceph-mgr) module such as telemetry from the ceph CLI the Red HatCeph Storage Dashboard displays the following error message

0 - Unknown Error

In addition the Dashboard does not mark the module as enabled until the Refresh button is clicked

(BZ1785077)

The Dashboard allows modifying the LUN ID and WWN which can lead to data corruption

The Red Hat Ceph Storage Dashboard allows you to modify the LUN ID and its World Wide Name(WWN) which is not required after creating the LUN Moreover editing these parameters can bedangerous for certain initiators that do not fully support this feature by default Consequently editingthese parameters after creating them can lead to data corruption To avoid this do not modify the LUNID and WWN in the Dashboard

(BZ1786455)

The Dashboard does not provide correct Ceph iSCSI error messages

If the Ceph iSCSI returns an error for example the HTTP 400 code when trying to delete an iSCSItarget while a user is logged in the Red Hat Ceph Storage Dashboard does not forward that error codeand message to the Dashboard user using the pop-up notifications but displays a generic 500 InternalServer Error Consequently the message that the Dashboard provides is not informative and evenmisleading an expected behavior (users cannot delete a busy resource) is perceived as an operationalfailure (internal server error) To work around this issue see the Dashboard logs

(BZ1786457)

CHAPTER 6 KNOWN ISSUES

17

The Dashboard requires to disable iptables rules

The Red Hat Ceph Storage Dashboard is unable to perform any iSCSI operations such as creating agateway unless all iptables rules are manually disabled on the Ceph iSCSI node To do so use thefollowing command as root or the sudo user

iptables -F

Note that after a reboot the rules are enabled again Either disable them again or delete thempermanently

(BZ1792818)

63 PACKAGES

Current version of Grafana causes certain bugs in the Dashboard

Red Hat Ceph Storage 4 uses the Grafana version 524 This version causes the following bugs in theRed Hat Ceph Storage Dashboard

When navigating to Pools gt Overall Performance Grafana returns the following error

TypeError lc[ttype] is undefinedtrue

When viewing a poolrsquos performance details (Pools gt select a pool from the list gt PerformanceDetails) the Grafana bar is displayed along with other graphs and values but it should not bethere

These bugs will be fixed after rebasing to a newer Grafana version in a future release of Red HatCeph Storage

(BZ1786107 BZ1762197 BZ1765536)

64 RED HAT ENTERPRISE LINUX

Ansible cannot start NFS Ganesha if SELinux is in enforcing mode

When using SELinux in enforcing mode on Red Hat Enterprise Linux 81 the ceph-ansible utility fails tostart the NFS Ganesha service because SELinux policy currently does not allow creating a directoryrequired for NFS Ganesha

(BZ1794027 BZ1796160)

Red Hat Ceph Storage 40 Release Notes

18

CHAPTER 7 DEPRECATED FUNCTIONALITYThis section provides an overview of functionality that has been deprecated in all minor releases up tothis release of Red Hat Ceph Storage

Ubuntu is no longer supported

Installing a Red Hat Ceph Storage 4 cluster on Ubuntu is no longer supported Use Red HatEnterprise Linux as the underlying operating system

71 THE CEPH-ANSIBLE UTILITY

Configuring iSCSI gateway using ceph-ansible is no longer supported

Configuring the Ceph iSCSI gateway by using the ceph-ansible utility is no longer supported Use ceph-ansible to install the gateway and then use either the gwcli utility of the Red Hat Ceph StorageDashboard to configure the gateway For details see the Using the Ceph iSCSI Gateway chapter in theBlock Device Guide for Red Hat Ceph Storage 4

72 THE CEPH-DISK UTILITY

ceph-disk is deprecated

With this release the ceph-disk utility is no longer supported The ceph-volume utility is used insteadFor details see the Why does ceph-volume replace `ceph-disk` section in the Administration Guide forRed Hat Ceph Storage 4

73 RADOS

FileStore is no longer supported in production

The FileStore OSD back end is now deprecated because the new BlueStore back end is now fullysupported in production For details see the How to migrate the object store from FileStore toBlueStore section in the Red Hat Ceph Storage 4 Installation Guide

Ceph configuration is now deprecated

The Ceph configuration file (cephconf) is now deprecated in favor of new centralized configurationstored in Monitors For details see the Configuration is now stored in Monitors accessible by using `cephconfig` release note

CHAPTER 7 DEPRECATED FUNCTIONALITY

19

CHAPTER 8 SOURCESThe updated Red Hat Ceph Storage source code packages are available athttpftpredhatcomredhatlinuxenterprise7ServerenRHCEPHSRPMS

Red Hat Ceph Storage 40 Release Notes

20

  • Table of Contents
  • CHAPTER 1 INTRODUCTION
  • CHAPTER 2 ACKNOWLEDGMENTS
  • CHAPTER 3 NEW FEATURES
    • 31 THE CEPH-ANSIBLE UTILITY
    • 32 CEPH MANAGEMENT DASHBOARD
    • 33 CEPH FILE SYSTEM
    • 34 CEPH MEDIC
    • 35 ISCSI GATEWAY
    • 36 OBJECT GATEWAY
    • 37 PACKAGES
    • 38 RADOS
    • 39 BLOCK DEVICES (RBD)
    • 310 RBD MIRRORING
      • CHAPTER 4 BUG FIXES
        • 41 THE CEPH-ANSIBLE UTILITY
        • 42 OBJECT GATEWAY
        • 43 RADOS
          • CHAPTER 5 TECHNOLOGY PREVIEWS
            • 51 CEPH FILE SYSTEM
            • 52 BLOCK DEVICES (RBD)
            • 53 OBJECT GATEWAY
              • CHAPTER 6 KNOWN ISSUES
                • 61 THE CEPH-ANSIBLE UTILITY
                • 62 CEPH MANAGEMENT DASHBOARD
                • 63 PACKAGES
                • 64 RED HAT ENTERPRISE LINUX
                  • CHAPTER 7 DEPRECATED FUNCTIONALITY
                    • 71 THE CEPH-ANSIBLE UTILITY
                    • 72 THE CEPH-DISK UTILITY
                    • 73 RADOS
                      • CHAPTER 8 SOURCES
Page 17: Red Hat Ceph Storage 4...May 20, 2020  · With this release, the Ceph dashboard installation code was merged into the Ceph Ansible playbooks. Ceph Ansible does a containerized deployment

CHAPTER 5 TECHNOLOGY PREVIEWSThis section provides an overview of Technology Preview features introduced or updated in this releaseof Red Hat Ceph Storage

IMPORTANT

Technology Preview features are not supported with Red Hat production service levelagreements (SLAs) might not be functionally complete and Red Hat does notrecommend to use them for production These features provide early access toupcoming product features enabling customers to test functionality and providefeedback during the development process

For more information on Red Hat Technology Preview features support scope seehttpsaccessredhatcomsupportofferingstechpreview

51 CEPH FILE SYSTEM

CephFS snapshots

Ceph File System (CephFS) supports taking snapshots as a Technology Preview Snapshots create animmutable view of the file system at the point in time they are taken

52 BLOCK DEVICES (RBD)

Mapping RBD images to NBD images

The rbd-nbd utility maps RADOS Block Device (RBD) images to Network Block Devices (NBD) andenables Ceph clients to access volumes and images in Kubernetes environments To use rbd-nbd installthe rbd-nbd package For details see the rbd-nbd(7) manual page

53 OBJECT GATEWAY

Object Gateway archive site

With this release an archive site is supported as a Technology Preview The archive site allows you tohave a history of versions of S3 objects that can only be eliminated through the gateways associatedwith the archive zone Including an archive zone in a multizone configuration allows you to have theflexibility of an S3 object history in only one zone while saving the space that the replicas of the versionsS3 objects would consume in the rest of the zones

Tiering within a cluster by disk type

This release adds the ability to tier within a cluster by disk type as a Technology Preview by using thecapabilities for mapping for example pools to placement targets and storage classes Using lifecycletransition rules it is possible to cause objects to migrate between storage pools based on policy

S3 bucket notifications

S3 bucket notifications are now supported as a Technology Preview When certain events are triggeredon an S3 bucket the notifications can be sent from the Ceph Object Gateway to HTTP AdvancedMessage Queuing Protocol (AMQP) 91 and Kafka endpoints Additionally the notifications can bestored in a ldquoPubSubrdquo zone instead of or in addition to sending them to the endpoints ldquoPubSubrdquo is apublish-subscribe model that enables recipients to pull notifications from Ceph

Red Hat Ceph Storage 40 Release Notes

14

To use the S3 notifications install the librabbitmq and librdkafka packages

CHAPTER 5 TECHNOLOGY PREVIEWS

15

CHAPTER 6 KNOWN ISSUESThis section documents known issues found in this release of Red Hat Ceph Storage

61 THE CEPH-ANSIBLE UTILITY

Red Hat Ceph Storage installation on Red Hat OpenStack Platform fails

The ceph-ansible utility becomes unresponsive when attempting to install Red Hat Ceph Storagetogether with the Red Hat OpenStack Platform 16 and it returns an error similar to the following one

Error unable to exec into ceph-mon-dcn1-computehci1-2 no container with name or ID ceph-mon-dcn1-computehci1-2 found no such container

To work around this issue update the following part of the handler_osdsyml file located in the ceph-ansiblerolesceph-handlertasks directory

- name unset noup flag command container_exec_cmd | default() ceph --cluster cluster osd unset noup delegate_to groups[mon_group_name][0] changed_when False

To

- name unset noup flag command hostvars[groups[mon_group_name][0]][container_exec_cmd] | default() ceph --cluster cluster osd unset noup delegate_to groups[mon_group_name][0] changed_when False

And start the installation process again

(BZ1792320)

Ansible does not unset the norebalance flag after it completes

The rolling-updateyml Ansible playbook does not unset the norebalance flag after it completes Towork around this issue unset the flag manually

(BZ1793564)

Ansible fails to upgrade a multisite Ceph Object Gateway when the Dashboard is enabled

When the Red Hat Ceph Storage Dashboard is enabled an attempt to use Ansible to upgrade to afurther version of Red Hat Ceph Storage fails when trying to upgrade the secondary Ceph ObjectGateway site in a multisite setup This bug does not occur on the primary site or if the Dashboard is notenabled

(BZ1794351)

62 CEPH MANAGEMENT DASHBOARD

The Dashboard does not show correct values for certain configuration options

Both the Red Hat Ceph Storage Dashboard and the underlying ceph config show command do not

Red Hat Ceph Storage 40 Release Notes

16

return the current values for certain configuration options such as fsid This is probably because certaincore options that are not meant for further modification after deploying a cluster are not updated and adefault value is used As a result the Dashboard does not show correct values for certain configurationoptions

(BZ1765493 BZ1772310)

NFS Ganesha in Dashboard

The Red Hat Ceph Storage Dashboard currently does not support managing NFS Ganesha

(BZ1772316)

Dashboard does not support email verification

The Red Hat Ceph Storage Dashboard does not support any email verification when changing a userspassword This behavior is intentional because the Dashboard supports Single Sign-On (SSO) and thisfeature can be delegated to the SSO provider

(BZ1778608)

The OSD histogram graph for read and write operations is not clear

The Red Hat Ceph Storage Dashboard does not display any numbers or description in the OSDhistogram graph for read and write operations and therefore the graph is not clear

(BZ1779170)

Dashboard returns an error when a ceph-mgr module is enabled from the ceph CLI

When enabling a Ceph Manager (ceph-mgr) module such as telemetry from the ceph CLI the Red HatCeph Storage Dashboard displays the following error message

0 - Unknown Error

In addition the Dashboard does not mark the module as enabled until the Refresh button is clicked

(BZ1785077)

The Dashboard allows modifying the LUN ID and WWN which can lead to data corruption

The Red Hat Ceph Storage Dashboard allows you to modify the LUN ID and its World Wide Name(WWN) which is not required after creating the LUN Moreover editing these parameters can bedangerous for certain initiators that do not fully support this feature by default Consequently editingthese parameters after creating them can lead to data corruption To avoid this do not modify the LUNID and WWN in the Dashboard

(BZ1786455)

The Dashboard does not provide correct Ceph iSCSI error messages

If the Ceph iSCSI returns an error for example the HTTP 400 code when trying to delete an iSCSItarget while a user is logged in the Red Hat Ceph Storage Dashboard does not forward that error codeand message to the Dashboard user using the pop-up notifications but displays a generic 500 InternalServer Error Consequently the message that the Dashboard provides is not informative and evenmisleading an expected behavior (users cannot delete a busy resource) is perceived as an operationalfailure (internal server error) To work around this issue see the Dashboard logs

(BZ1786457)

CHAPTER 6 KNOWN ISSUES

17

The Dashboard requires to disable iptables rules

The Red Hat Ceph Storage Dashboard is unable to perform any iSCSI operations such as creating agateway unless all iptables rules are manually disabled on the Ceph iSCSI node To do so use thefollowing command as root or the sudo user

iptables -F

Note that after a reboot the rules are enabled again Either disable them again or delete thempermanently

(BZ1792818)

63 PACKAGES

Current version of Grafana causes certain bugs in the Dashboard

Red Hat Ceph Storage 4 uses the Grafana version 524 This version causes the following bugs in theRed Hat Ceph Storage Dashboard

When navigating to Pools gt Overall Performance Grafana returns the following error

TypeError lc[ttype] is undefinedtrue

When viewing a poolrsquos performance details (Pools gt select a pool from the list gt PerformanceDetails) the Grafana bar is displayed along with other graphs and values but it should not bethere

These bugs will be fixed after rebasing to a newer Grafana version in a future release of Red HatCeph Storage

(BZ1786107 BZ1762197 BZ1765536)

64 RED HAT ENTERPRISE LINUX

Ansible cannot start NFS Ganesha if SELinux is in enforcing mode

When using SELinux in enforcing mode on Red Hat Enterprise Linux 81 the ceph-ansible utility fails tostart the NFS Ganesha service because SELinux policy currently does not allow creating a directoryrequired for NFS Ganesha

(BZ1794027 BZ1796160)

Red Hat Ceph Storage 40 Release Notes

18

CHAPTER 7 DEPRECATED FUNCTIONALITYThis section provides an overview of functionality that has been deprecated in all minor releases up tothis release of Red Hat Ceph Storage

Ubuntu is no longer supported

Installing a Red Hat Ceph Storage 4 cluster on Ubuntu is no longer supported Use Red HatEnterprise Linux as the underlying operating system

71 THE CEPH-ANSIBLE UTILITY

Configuring iSCSI gateway using ceph-ansible is no longer supported

Configuring the Ceph iSCSI gateway by using the ceph-ansible utility is no longer supported Use ceph-ansible to install the gateway and then use either the gwcli utility of the Red Hat Ceph StorageDashboard to configure the gateway For details see the Using the Ceph iSCSI Gateway chapter in theBlock Device Guide for Red Hat Ceph Storage 4

72 THE CEPH-DISK UTILITY

ceph-disk is deprecated

With this release the ceph-disk utility is no longer supported The ceph-volume utility is used insteadFor details see the Why does ceph-volume replace `ceph-disk` section in the Administration Guide forRed Hat Ceph Storage 4

73 RADOS

FileStore is no longer supported in production

The FileStore OSD back end is now deprecated because the new BlueStore back end is now fullysupported in production For details see the How to migrate the object store from FileStore toBlueStore section in the Red Hat Ceph Storage 4 Installation Guide

Ceph configuration is now deprecated

The Ceph configuration file (cephconf) is now deprecated in favor of new centralized configurationstored in Monitors For details see the Configuration is now stored in Monitors accessible by using `cephconfig` release note

CHAPTER 7 DEPRECATED FUNCTIONALITY

19

CHAPTER 8 SOURCESThe updated Red Hat Ceph Storage source code packages are available athttpftpredhatcomredhatlinuxenterprise7ServerenRHCEPHSRPMS

Red Hat Ceph Storage 40 Release Notes

20

  • Table of Contents
  • CHAPTER 1 INTRODUCTION
  • CHAPTER 2 ACKNOWLEDGMENTS
  • CHAPTER 3 NEW FEATURES
    • 31 THE CEPH-ANSIBLE UTILITY
    • 32 CEPH MANAGEMENT DASHBOARD
    • 33 CEPH FILE SYSTEM
    • 34 CEPH MEDIC
    • 35 ISCSI GATEWAY
    • 36 OBJECT GATEWAY
    • 37 PACKAGES
    • 38 RADOS
    • 39 BLOCK DEVICES (RBD)
    • 310 RBD MIRRORING
      • CHAPTER 4 BUG FIXES
        • 41 THE CEPH-ANSIBLE UTILITY
        • 42 OBJECT GATEWAY
        • 43 RADOS
          • CHAPTER 5 TECHNOLOGY PREVIEWS
            • 51 CEPH FILE SYSTEM
            • 52 BLOCK DEVICES (RBD)
            • 53 OBJECT GATEWAY
              • CHAPTER 6 KNOWN ISSUES
                • 61 THE CEPH-ANSIBLE UTILITY
                • 62 CEPH MANAGEMENT DASHBOARD
                • 63 PACKAGES
                • 64 RED HAT ENTERPRISE LINUX
                  • CHAPTER 7 DEPRECATED FUNCTIONALITY
                    • 71 THE CEPH-ANSIBLE UTILITY
                    • 72 THE CEPH-DISK UTILITY
                    • 73 RADOS
                      • CHAPTER 8 SOURCES
Page 18: Red Hat Ceph Storage 4...May 20, 2020  · With this release, the Ceph dashboard installation code was merged into the Ceph Ansible playbooks. Ceph Ansible does a containerized deployment

To use the S3 notifications install the librabbitmq and librdkafka packages

CHAPTER 5 TECHNOLOGY PREVIEWS

15

CHAPTER 6 KNOWN ISSUESThis section documents known issues found in this release of Red Hat Ceph Storage

61 THE CEPH-ANSIBLE UTILITY

Red Hat Ceph Storage installation on Red Hat OpenStack Platform fails

The ceph-ansible utility becomes unresponsive when attempting to install Red Hat Ceph Storagetogether with the Red Hat OpenStack Platform 16 and it returns an error similar to the following one

Error unable to exec into ceph-mon-dcn1-computehci1-2 no container with name or ID ceph-mon-dcn1-computehci1-2 found no such container

To work around this issue update the following part of the handler_osdsyml file located in the ceph-ansiblerolesceph-handlertasks directory

- name unset noup flag command container_exec_cmd | default() ceph --cluster cluster osd unset noup delegate_to groups[mon_group_name][0] changed_when False

To

- name unset noup flag command hostvars[groups[mon_group_name][0]][container_exec_cmd] | default() ceph --cluster cluster osd unset noup delegate_to groups[mon_group_name][0] changed_when False

And start the installation process again

(BZ1792320)

Ansible does not unset the norebalance flag after it completes

The rolling-updateyml Ansible playbook does not unset the norebalance flag after it completes Towork around this issue unset the flag manually

(BZ1793564)

Ansible fails to upgrade a multisite Ceph Object Gateway when the Dashboard is enabled

When the Red Hat Ceph Storage Dashboard is enabled an attempt to use Ansible to upgrade to afurther version of Red Hat Ceph Storage fails when trying to upgrade the secondary Ceph ObjectGateway site in a multisite setup This bug does not occur on the primary site or if the Dashboard is notenabled

(BZ1794351)

62 CEPH MANAGEMENT DASHBOARD

The Dashboard does not show correct values for certain configuration options

Both the Red Hat Ceph Storage Dashboard and the underlying ceph config show command do not

Red Hat Ceph Storage 40 Release Notes

16

return the current values for certain configuration options such as fsid This is probably because certaincore options that are not meant for further modification after deploying a cluster are not updated and adefault value is used As a result the Dashboard does not show correct values for certain configurationoptions

(BZ1765493 BZ1772310)

NFS Ganesha in Dashboard

The Red Hat Ceph Storage Dashboard currently does not support managing NFS Ganesha

(BZ1772316)

Dashboard does not support email verification

The Red Hat Ceph Storage Dashboard does not support any email verification when changing a userspassword This behavior is intentional because the Dashboard supports Single Sign-On (SSO) and thisfeature can be delegated to the SSO provider

(BZ1778608)

The OSD histogram graph for read and write operations is not clear

The Red Hat Ceph Storage Dashboard does not display any numbers or description in the OSDhistogram graph for read and write operations and therefore the graph is not clear

(BZ1779170)

Dashboard returns an error when a ceph-mgr module is enabled from the ceph CLI

When enabling a Ceph Manager (ceph-mgr) module such as telemetry from the ceph CLI the Red HatCeph Storage Dashboard displays the following error message

0 - Unknown Error

In addition the Dashboard does not mark the module as enabled until the Refresh button is clicked

(BZ1785077)

The Dashboard allows modifying the LUN ID and WWN which can lead to data corruption

The Red Hat Ceph Storage Dashboard allows you to modify the LUN ID and its World Wide Name(WWN) which is not required after creating the LUN Moreover editing these parameters can bedangerous for certain initiators that do not fully support this feature by default Consequently editingthese parameters after creating them can lead to data corruption To avoid this do not modify the LUNID and WWN in the Dashboard

(BZ1786455)

The Dashboard does not provide correct Ceph iSCSI error messages

If the Ceph iSCSI returns an error for example the HTTP 400 code when trying to delete an iSCSItarget while a user is logged in the Red Hat Ceph Storage Dashboard does not forward that error codeand message to the Dashboard user using the pop-up notifications but displays a generic 500 InternalServer Error Consequently the message that the Dashboard provides is not informative and evenmisleading an expected behavior (users cannot delete a busy resource) is perceived as an operationalfailure (internal server error) To work around this issue see the Dashboard logs

(BZ1786457)

CHAPTER 6 KNOWN ISSUES

17

The Dashboard requires to disable iptables rules

The Red Hat Ceph Storage Dashboard is unable to perform any iSCSI operations such as creating agateway unless all iptables rules are manually disabled on the Ceph iSCSI node To do so use thefollowing command as root or the sudo user

iptables -F

Note that after a reboot the rules are enabled again Either disable them again or delete thempermanently

(BZ1792818)

63 PACKAGES

Current version of Grafana causes certain bugs in the Dashboard

Red Hat Ceph Storage 4 uses the Grafana version 524 This version causes the following bugs in theRed Hat Ceph Storage Dashboard

When navigating to Pools gt Overall Performance Grafana returns the following error

TypeError lc[ttype] is undefinedtrue

When viewing a poolrsquos performance details (Pools gt select a pool from the list gt PerformanceDetails) the Grafana bar is displayed along with other graphs and values but it should not bethere

These bugs will be fixed after rebasing to a newer Grafana version in a future release of Red HatCeph Storage

(BZ1786107 BZ1762197 BZ1765536)

64 RED HAT ENTERPRISE LINUX

Ansible cannot start NFS Ganesha if SELinux is in enforcing mode

When using SELinux in enforcing mode on Red Hat Enterprise Linux 81 the ceph-ansible utility fails tostart the NFS Ganesha service because SELinux policy currently does not allow creating a directoryrequired for NFS Ganesha

(BZ1794027 BZ1796160)

Red Hat Ceph Storage 40 Release Notes

18

CHAPTER 7 DEPRECATED FUNCTIONALITYThis section provides an overview of functionality that has been deprecated in all minor releases up tothis release of Red Hat Ceph Storage

Ubuntu is no longer supported

Installing a Red Hat Ceph Storage 4 cluster on Ubuntu is no longer supported Use Red HatEnterprise Linux as the underlying operating system

71 THE CEPH-ANSIBLE UTILITY

Configuring iSCSI gateway using ceph-ansible is no longer supported

Configuring the Ceph iSCSI gateway by using the ceph-ansible utility is no longer supported Use ceph-ansible to install the gateway and then use either the gwcli utility of the Red Hat Ceph StorageDashboard to configure the gateway For details see the Using the Ceph iSCSI Gateway chapter in theBlock Device Guide for Red Hat Ceph Storage 4

7.2. THE CEPH-DISK UTILITY

ceph-disk is deprecated

With this release, the ceph-disk utility is no longer supported. The ceph-volume utility is used instead. For details, see the Why does `ceph-volume` replace `ceph-disk`? section in the Administration Guide for Red Hat Ceph Storage 4.
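As a brief illustration of the replacement utility (the device path below is a placeholder), ceph-volume prepares and activates OSDs using LVM:

# Create a new OSD on a device using the LVM back end
ceph-volume lvm create --data /dev/sdb

# List OSDs and the devices backing them
ceph-volume lvm list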

7.3. RADOS

FileStore is no longer supported in production

The FileStore OSD back end is now deprecated because the new BlueStore back end is now fully supported in production. For details, see the How to migrate the object store from FileStore to BlueStore section in the Red Hat Ceph Storage 4 Installation Guide.
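Before planning a migration, the back end of an existing OSD can be checked from the Monitor metadata; OSD ID 0 here is an arbitrary example:

# Report whether osd.0 uses bluestore or filestore
ceph osd metadata 0 | grep osd_objectstore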

The Ceph configuration file is now deprecated

The Ceph configuration file (ceph.conf) is now deprecated in favor of a new centralized configuration stored in Monitors. For details, see the Configuration is now stored in Monitors accessible by using `ceph config` release note.
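A short sketch of the centralized configuration workflow; the option names and values are arbitrary examples:

# Import an existing ceph.conf into the configuration store held by the Monitors
ceph config assimilate-conf -i /etc/ceph/ceph.conf

# Set and read options centrally instead of editing ceph.conf
ceph config set osd osd_max_backfills 2
ceph config get osd.0 osd_max_backfills

# Show all options currently stored in the Monitors
ceph config dump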

CHAPTER 8. SOURCES

The updated Red Hat Ceph Storage source code packages are available at http://ftp.redhat.com/redhat/linux/enterprise/7Server/en/RHCEPH/SRPMS/.
