Bug 1519853 - [Docs] Admin Guide requires clarifications regarding OSD instructions
Summary: [Docs] Admin Guide requires clarifications regarding OSD instructions
Keywords:
Status: VERIFIED
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: Documentation
Version: 3.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: unspecified
Target Milestone: rc
Target Release: 3.1
Assignee: Aron Gunn
QA Contact: Parikshith
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-12-01 14:50 UTC by khartsoe@redhat.com
Modified: 2018-08-17 11:27 UTC (History)
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:


Attachments
Notes for recommended changes (deleted)
2017-12-01 14:50 UTC, khartsoe@redhat.com

Description khartsoe@redhat.com 2017-12-01 14:50:06 UTC
Created attachment 1361628 [details]
Notes for recommended changes

Description of problem:
Some commands and examples in section "6.3.3. Adding an OSD with the Command Line Interface" of the RHCS 3.0 Administration Guide require updates and clarifications, as described below and in the attachment.

[From Randy Martinez]

I have some additional updates for RHCS 3.0 documentation:

1. The parted command on pg. 61 should be updated to reflect that Ceph by
default names data partitions "ceph data" instead of "primary":
REF: "parted <path_to_disk> mkpart primary <start> <end>"
If the partition is not named "ceph data" at creation time, this can be
corrected afterwards with: sgdisk --change-name="1:ceph data" /dev/sdb
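
For reference, a minimal sketch of both approaches, assuming /dev/sdb is the
data disk, the data partition is partition 1, and 2048s/100% are placeholder
start/end values:

    # Create the GPT data partition with the expected name up front
    parted --script /dev/sdb mkpart 'ceph data' 2048s 100%
    # Or, if the partition already exists under the wrong name, rename it
    sgdisk --change-name="1:ceph data" /dev/sdb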

2. pg. 62
<ftp://partners.redhat.com/19b58b146c4fb868841d9374c1df0c9e/docs/Red_Hat_Ceph_Storage-3-Administration_Guide-en-US.pdf>:
"1. Initialize the OSD data directory: ceph-osd -i <osd_id> --mkfs --mkkey
--osd-uuid <uuid>"... This step does not create the `journal_uuid` file, nor do
the docs mention creating it separately. It is simply a file in
/var/lib/ceph/osd/ceph-<id> containing the journal partition's PARTUUID. Why
does this matter? If you ever want `ceph-disk list` to map journals to OSDs
properly, this file must be present and contain the PARTUUID. It also poses a
potential problem for someone in operations who doesn't know better and assumes
the journal partition is no longer in use during a hardware failure scenario.
To resolve: echo "<journal_partuuid>" > /var/lib/ceph/osd/ceph-<id>/journal_uuid
and chown ceph:ceph /var/lib/ceph/osd/ceph-<id>/journal_uuid. This will
turn things back to normal.
REF: ceph-disk list - before journal_uuid is present:
 /dev/nvme1n1p2 ceph journal
/dev/sdb :
 /dev/sdb1 ceph data, active, cluster ceph, osd.319
After journal_uuid is present:
 /dev/nvme1n1p2 ceph journal, for /dev/sdb1
/dev/sdb :
 /dev/sdb1 ceph data, active, cluster ceph, osd.319, journal /dev/nvme1n1p2

*Note: the OSD itself doesn't care as long as the journal is linked, but I do.
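
A minimal sketch of the fix, reusing the osd.319 and /dev/nvme1n1p2 values from
the output above (substitute your own OSD ID and journal device); blkid is just
one way to look up the PARTUUID:

    # Look up the journal partition's PARTUUID
    JOURNAL_UUID=$(blkid -s PARTUUID -o value /dev/nvme1n1p2)
    # Record it where ceph-disk expects it and fix ownership
    echo "$JOURNAL_UUID" > /var/lib/ceph/osd/ceph-319/journal_uuid
    chown ceph:ceph /var/lib/ceph/osd/ceph-319/journal_uuid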

3. The manual key creation for an OSD on pg. 63
<ftp://partners.redhat.com/19b58b146c4fb868841d9374c1df0c9e/docs/Red_Hat_Ceph_Storage-3-Administration_Guide-en-US.pdf>
is missing the mgr permissions: `mgr 'allow profile osd'`.
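
As an illustration, and assuming the guide's existing `ceph auth add` step, the
corrected command would presumably look something like this (<osd_id> is a
placeholder):

    ceph auth add osd.<osd_id> osd 'allow *' mon 'allow profile osd' \
        mgr 'allow profile osd' -i /var/lib/ceph/osd/ceph-<osd_id>/keyring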

I've attached my notes, start to finish, on how to replace an OSD and re-use an
existing journal partition (scenario: a spinner dies; an NVMe device is the
dedicated journal for multiple spinners; re-use the old partition). This is
likely to be the #1 request from our larger customers, as rebooting is seldom
ideal, and I don't think ceph-ansible will handle it properly. Let me know if
you need anything.

Version-Release number of selected component (if applicable):
3.0


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

