[Bug 2051299] Re: nvme-cli: fguid is printed as binary data and causes MAAS to fail erasing NVME disks

Matthew Ruffell 2051299 at bugs.launchpad.net
Thu Jun 13 04:25:03 UTC 2024


Attached is a debdiff for nvme-cli on jammy which fixes this issue.

** Summary changed:

- Failed to wipe Micron 7400 MTFDKBA960TDZ during machine release
+ nvme-cli: fguid is printed as binary data and causes MAAS to fail erasing NVME disks

** Description changed:

- - Both main and secondary controllers running MAAS 3.4.0-14321-g.1027c7664 installed by Snap.
- - Host: Supermicro SYS-6019P-WT with Micron 7400 MTFDKBA960TDZ
- - Both the first and second attempts of machine releasing failed.
- - All parts of commissioning and tests passed with no errors. Ubuntu was deployed to the host with no errors.
- - Newly shipped hardware so it can't be a hardware problem.
+ [Impact]
  
- ```
- 2024-01-25T16:56:27+00:00 dp2 cloud-init[2497]: (Reading database ...)
- 2024-01-25T16:56:27+00:00 dp2 cloud-init[2497]: Preparing to unpack .../nvme-cli_1.16-3ubuntu0.1_amd64.deb ...
- 2024-01-25T16:56:27+00:00 dp2 cloud-init[2497]: Unpacking nvme-cli (1.16-3ubuntu0.1) ...
- 2024-01-25T16:56:27+00:00 dp2 cloud-init[2497]: Setting up nvme-cli (1.16-3ubuntu0.1) ...
- 2024-01-25T16:56:27+00:00 dp2 systemd[1]: Reloading.
- 2024-01-25T16:56:27+00:00 dp2 cloud-init[2497]: Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /lib/systemd/system/nvmefc-boot-connections.service.
- 2024-01-25T16:56:27+00:00 dp2 systemd[1]: Reloading.
- 2024-01-25T16:56:27+00:00 dp2 cloud-init[2497]: Created symlink /etc/systemd/system/default.target.wants/nvmf-autoconnect.service → /lib/systemd/system/nvmf-autoconnect.service.
- 2024-01-25T16:56:28+00:00 dp2 systemd[1]: Reloading.
- 2024-01-25T16:56:28+00:00 dp2 cloud-init[2497]: nvmf-connect.target is a disabled or a static unit, not starting it.
- 2024-01-25T16:56:28+00:00 dp2 systemd[1]: Condition check resulted in Auto-connect to subsystems on FC-NVME devices found during boot being skipped.
- 2024-01-25T16:56:28+00:00 dp2 systemd[1]: Starting Connect NVMe-oF subsystems automatically during boot...
- 2024-01-25T16:56:28+00:00 dp2 systemd[1]: nvmf-autoconnect.service: Deactivated successfully.
- 2024-01-25T16:56:28+00:00 dp2 systemd[1]: Finished Connect NVMe-oF subsystems automatically during boot.
- 2024-01-25T16:56:28+00:00 dp2 cloud-init[2497]: Processing triggers for man-db (2.10.2-1) ...
- 2024-01-25T16:56:28+00:00 dp2 cloud-init[2497]: NEEDRESTART-VER: 3.5
- 2024-01-25T16:56:29+00:00 dp2 cloud-init[2497]: NEEDRESTART-KCUR: 5.15.0-91-generic
- 2024-01-25T16:56:29+00:00 dp2 cloud-init[2497]: NEEDRESTART-KSTA: 0
- 2024-01-25T16:56:29+00:00 dp2 cloud-init[2497]: NEEDRESTART-UCSTA: 0
- 2024-01-25T16:56:30+00:00 dp2 cloud-init[2497]: Traceback (most recent call last):
- 2024-01-25T16:56:30+00:00 dp2 cloud-init[2497]:   File "/tmp/user_data.sh.jNE4lC/bin/maas-wipe", line 542, in <module>
- 2024-01-25T16:56:30+00:00 dp2 cloud-init[2497]:     main()
- 2024-01-25T16:56:30+00:00 dp2 cloud-init[2497]:   File "/tmp/user_data.sh.jNE4lC/bin/maas-wipe", line 522, in main
- 2024-01-25T16:56:30+00:00 dp2 cloud-init[2497]:     disk_info = get_disk_info()
- 2024-01-25T16:56:30+00:00 dp2 cloud-init[2497]:   File "/tmp/user_data.sh.jNE4lC/bin/maas-wipe", line 165, in get_disk_info
- 2024-01-25T16:56:30+00:00 dp2 cloud-init[2497]:     return {kname: get_disk_security_info(kname) for kname in list_disks()}
- 2024-01-25T16:56:30+00:00 dp2 cloud-init[2497]:   File "/tmp/user_data.sh.jNE4lC/bin/maas-wipe", line 165, in <dictcomp>
- 2024-01-25T16:56:30+00:00 dp2 cloud-init[2497]:     return {kname: get_disk_security_info(kname) for kname in list_disks()}
- 2024-01-25T16:56:30+00:00 dp2 cloud-init[2497]:   File "/tmp/user_data.sh.jNE4lC/bin/maas-wipe", line 158, in get_disk_security_info
- 2024-01-25T16:56:30+00:00 dp2 cloud-init[2497]:     return get_nvme_security_info(disk)
- 2024-01-25T16:56:30+00:00 dp2 cloud-init[2497]:   File "/tmp/user_data.sh.jNE4lC/bin/maas-wipe", line 64, in get_nvme_security_info
- 2024-01-25T16:56:30+00:00 dp2 cloud-init[2497]:     output = output.decode()
- 2024-01-25T16:56:30+00:00 dp2 cloud-init[2497]: UnicodeDecodeError: 'utf-8' codec can't decode byte 0x89 in position 385: invalid start byte
- 2024-01-25T16:56:30+00:00 dp2 cloud-init[2497]: 2024-01-25 16:56:30,929 - cc_scripts_user.py[WARNING]: Failed to run module scripts_user (scripts in /var/lib/cloud/instance/scripts)
- 2024-01-25T16:56:30+00:00 dp2 cloud-init[2497]: 2024-01-25 16:56:30,930 - util.py[WARNING]: Running module scripts_user (<module 'cloudinit.config.cc_scripts_user' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_scripts_user.py>
- 2024-01-25T16:56:31+00:00 dp2 cloud-init: #############################################################
- 2024-01-25T16:56:31+00:00 dp2 cloud-init: -----BEGIN SSH HOST KEY FINGERPRINTS-----
- 2024-01-25T16:56:31+00:00 dp2 cloud-init: 1024 SHA256:K01EHEVM4P7nsT++Sa/t2HdUJBmEtq0CzvTF80Nxl5U root at dp2 (DSA)
- 2024-01-25T16:56:31+00:00 dp2 cloud-init: 256 SHA256:esLgsNXtlXz5DTOVoo87+TEwUoV5AOaNTKow7VU1is0 root at dp2 (ECDSA)
- 2024-01-25T16:56:31+00:00 dp2 cloud-init: 256 SHA256:JyJHsc60D/M5slr2ERoYDtKaTA4y08hT4blFFR45Dn4 root at dp2 (ED25519)
- 2024-01-25T16:56:31+00:00 dp2 cloud-init: 3072 SHA256:KdILKl8DlVf2DFhxbe6x4TY806Wv42peRfXrRWH/7gw root at dp2 (RSA)
- 2024-01-25T16:56:31+00:00 dp2 cloud-init: -----END SSH HOST KEY FINGERPRINTS-----
- 2024-01-25T16:56:31+00:00 dp2 cloud-init: #############################################################
- 2024-01-25T16:56:31+00:00 dp2 cloud-init[2497]: Cloud-init v. 23.3.3-0ubuntu0~22.04.1 finished at Thu, 25 Jan 2024 16:56:31 +0000. Datasource DataSourceMAAS [http://192.168.0.2:5248/MAAS/metadata/].  Up 164.57 seconds
- 2024-01-25T16:56:31+00:00 dp2 systemd[1]: cloud-final.service: Main process exited, code=exited, status=1/FAILURE
- 2024-01-25T16:56:31+00:00 dp2 systemd[1]: cloud-final.service: Failed with result 'exit-code'.
- 2024-01-25T16:56:31+00:00 dp2 systemd[1]: cloud-final.service: Unit process 2496 (sh) remains running after unit stopped.
- 2024-01-25T16:56:31+00:00 dp2 systemd[1]: cloud-final.service: Unit process 2497 (tee) remains running after unit stopped.
- 2024-01-25T16:56:31+00:00 dp2 systemd[1]: cloud-final.service: Unit process 3187 (cloud-init) remains running after unit stopped.
- 2024-01-25T16:56:31+00:00 dp2 systemd[1]: Failed to start Execute cloud user/final scripts.
- 2024-01-25T16:56:31+00:00 dp2 systemd[1]: cloud-final.service: Consumed 13.354s CPU time.
- 2024-01-25T16:56:31+00:00 dp2 systemd[1]: Reached target Cloud-init target.
- 2024-01-25T16:56:31+00:00 dp2 systemd[1]: Startup finished in 22.611s (kernel) + 2min 22.111s (userspace) = 2min 44.723s.
- 2024-01-25T16:56:31+00:00 dp2 systemd[1]: Removed slice Slice /system/modprobe.
- 2024-01-25T16:56:31+00:00 dp2 systemd[1]: Stopped target Cloud-init target.
- ```
+ When a user tries to release a system deployed with MAAS, that has erase
+ disks on release set, erasing NVME disks fails on Jammy.
+ 
+ Traceback (most recent call last):
+ File "/tmp/user_data.sh.jNE4lC/bin/maas-wipe", line 542, in <module>
+ main()
+ File "/tmp/user_data.sh.jNE4lC/bin/maas-wipe", line 522, in main
+ disk_info = get_disk_info()
+ File "/tmp/user_data.sh.jNE4lC/bin/maas-wipe", line 165, in get_disk_info
+ return {kname: get_disk_security_info(kname) for kname in list_disks()}
+ File "/tmp/user_data.sh.jNE4lC/bin/maas-wipe", line 165, in <dictcomp>
+ return {kname: get_disk_security_info(kname) for kname in list_disks()}
+ File "/tmp/user_data.sh.jNE4lC/bin/maas-wipe", line 158, in get_disk_security_info
+ return get_nvme_security_info(disk)
+ File "/tmp/user_data.sh.jNE4lC/bin/maas-wipe", line 64, in get_nvme_security_info
+ output = output.decode()
+ UnicodeDecodeError: 'utf-8' codec can't decode byte 0x89 in position 385: invalid start byte
+ 
+ This is due to maas_wipe.py running "nvme id-ctrl <device>" and parsing
+ the results. The output should be human-readable string data, so
+ decoding it as utf-8 should be safe for MAAS.
+ 
+ Instead, the "fguid" field is being printed as binary data, and is not
+ parsable as utf-8.
+ 
+ For example, from comment #8:
+ 
+ The user sees:
+ 
+ `fguid : 2.`
+ 
+ On closer inspection, the hex is:
+ 
+ 0x32,0x89,0x82,0x2E
+ 
+ Note that it is cut off early, likely because the next byte would be
+ 0x00 and was interpreted as a string terminator.
+ 
+ Fix nvme-cli such that we print out the fguid as a correct utf-8 string,
+ so MAAS works as intended.
+ 
+ [Testcase]
+ 
+ Deploy Jammy onto a system that has an NVME device.
+ 
+ $ sudo apt install nvme-cli
+ 
+ Run the 'id-ctrl' command and look at the fguid entry:
+ 
+ $ sudo nvme id-ctrl /dev/nvme1n1 | grep fguid
+ fguid     : 
+ 
+ Due to the UUID being all zeros, the first byte was interpreted as a
+ string terminator, and nothing was printed.
+ 
+ There is a test package available in the following ppa:
+ 
+ https://launchpad.net/~mruffell/+archive/ubuntu/sf387274-test
+ 
+ If you install the test package, the fguid will be printed as a proper
+ string:
+ 
+ $ sudo nvme id-ctrl /dev/nvme1n1 | grep fguid
+ fguid     : 00000000-0000-0000-0000-000000000000
+ 
+ Also check that json output works as expected:
+ 
+ $ sudo nvme id-ctrl -o json /dev/nvme1n1 | grep fguid
+   "fguid" : "00000000-0000-0000-0000-000000000000",
+ 
+ Additionally, test that the new package allows a MAAS-deployed system to
+ be released correctly with the erase option enabled, as maas_wipe.py
+ should now complete successfully.
+ 
+ [Where problems could occur]
+ 
+ [Other info]
+ 
+ Upstream bug:
+ https://github.com/linux-nvme/nvme-cli/issues/1653
+ 
+ This was fixed in the below commit:
+ 
+ commit 78b7ad235507ddd59c75c7fcc74fc6c927811f87
+ From: Pierre Labat <plabat at micron.com>
+ Date: Fri, 26 Aug 2022 17:02:08 -0500
+ Subject: nvme-print: Print fguid as a UUID
+ Link: https://github.com/linux-nvme/nvme-cli/commit/78b7ad235507ddd59c75c7fcc74fc6c927811f87
+ 
+ The commit required a minor backport. Later versions contain a major
+ refactor that changed nvme_uuid_to_string() among numerous other
+ functions, which is not appropriate to backport. Instead, just take the
+ current implementation of nvme_uuid_to_string() and move it as the
+ patch suggests, so json output works correctly.

** Tags added: sts

** Description changed:

  [Impact]
  
  When a user tries to release a system deployed with MAAS, that has erase
  disks on release set, erasing NVME disks fails on Jammy.
  
  Traceback (most recent call last):
  File "/tmp/user_data.sh.jNE4lC/bin/maas-wipe", line 542, in <module>
  main()
  File "/tmp/user_data.sh.jNE4lC/bin/maas-wipe", line 522, in main
  disk_info = get_disk_info()
  File "/tmp/user_data.sh.jNE4lC/bin/maas-wipe", line 165, in get_disk_info
  return {kname: get_disk_security_info(kname) for kname in list_disks()}
  File "/tmp/user_data.sh.jNE4lC/bin/maas-wipe", line 165, in <dictcomp>
  return {kname: get_disk_security_info(kname) for kname in list_disks()}
  File "/tmp/user_data.sh.jNE4lC/bin/maas-wipe", line 158, in get_disk_security_info
  return get_nvme_security_info(disk)
  File "/tmp/user_data.sh.jNE4lC/bin/maas-wipe", line 64, in get_nvme_security_info
  output = output.decode()
  UnicodeDecodeError: 'utf-8' codec can't decode byte 0x89 in position 385: invalid start byte
  
  This is due to maas_wipe.py running "nvme id-ctrl <device>" and parsing
  the results. The output should be human-readable string data, so
  decoding it as utf-8 should be safe for MAAS.
  
  Instead, the "fguid" field is being printed as binary data, and is not
  parsable as utf-8.
  
  For example, from comment #8:
  
  The user sees:
  
  `fguid : 2.`
  
  On closer inspection, the hex is:

  0x32,0x89,0x82,0x2E

  Note that it is cut off early, likely because the next byte would be
  0x00 and was interpreted as a string terminator.
  
  Fix nvme-cli such that we print out the fguid as a correct utf-8 string,
  so MAAS works as intended.
  
  [Testcase]
  
  Deploy Jammy onto a system that has an NVME device.
  
  $ sudo apt install nvme-cli
  
  Run the 'id-ctrl' command and look at the fguid entry:
  
  $ sudo nvme id-ctrl /dev/nvme1n1 | grep fguid
- fguid     : 
+ fguid     :
  
  Due to the UUID being all zeros, the first byte was interpreted as a
  string terminator, and nothing was printed.
  
  There is a test package available in the following ppa:
  
  https://launchpad.net/~mruffell/+archive/ubuntu/sf387274-test
  
  If you install the test package, the fguid will be printed as a proper
  string:
  
  $ sudo nvme id-ctrl /dev/nvme1n1 | grep fguid
  fguid     : 00000000-0000-0000-0000-000000000000
  
  Also check that json output works as expected:
  
  $ sudo nvme id-ctrl -o json /dev/nvme1n1 | grep fguid
-   "fguid" : "00000000-0000-0000-0000-000000000000",
+   "fguid" : "00000000-0000-0000-0000-000000000000",
  
  Additionally, test that the new package allows a MAAS-deployed system to
  be released correctly with the erase option enabled, as maas_wipe.py
  should now complete successfully.
  
  [Where problems could occur]
+ 
+ We are changing the output of the 'id-ctrl' subcommand only; no other
+ subcommands are affected. Users who, for whatever reason, rely on the
+ broken, incomplete binary output might be impacted. Users doing a hard
+ diff of the command output will now see the actual fguid, and may need
+ to update their expected output. The fguid is now also supplied in the
+ json output of 'id-ctrl', which may affect programs parsing the json
+ object.
+ 
+ There are no workarounds, and if a regression were to occur, it would
+ only affect the 'id-ctrl' subcommand, and not change anything else.
  
  [Other info]
  
  Upstream bug:
  https://github.com/linux-nvme/nvme-cli/issues/1653
  
  This was fixed in the below commit:
  
  commit 78b7ad235507ddd59c75c7fcc74fc6c927811f87
  From: Pierre Labat <plabat at micron.com>
  Date: Fri, 26 Aug 2022 17:02:08 -0500
  Subject: nvme-print: Print fguid as a UUID
  Link: https://github.com/linux-nvme/nvme-cli/commit/78b7ad235507ddd59c75c7fcc74fc6c927811f87
  
  The commit required a minor backport. Later versions contain a major
  refactor that changed nvme_uuid_to_string() among numerous other
  functions, which is not appropriate to backport. Instead, just take
  the current implementation of nvme_uuid_to_string() and move it as
  the patch suggests, so json output works correctly.

** Also affects: nvme-cli (Ubuntu Jammy)
   Importance: Undecided
       Status: New

** Changed in: nvme-cli (Ubuntu Jammy)
       Status: New => In Progress

** Changed in: nvme-cli (Ubuntu Jammy)
   Importance: Undecided => Medium

** Changed in: nvme-cli (Ubuntu Jammy)
     Assignee: (unassigned) => Matthew Ruffell (mruffell)

** Changed in: nvme-cli (Ubuntu)
       Status: Confirmed => Fix Released

** Description changed:

  [Impact]
  
  When a user tries to release a system deployed with MAAS, that has erase
  disks on release set, erasing NVME disks fails on Jammy.
  
  Traceback (most recent call last):
  File "/tmp/user_data.sh.jNE4lC/bin/maas-wipe", line 542, in <module>
  main()
  File "/tmp/user_data.sh.jNE4lC/bin/maas-wipe", line 522, in main
  disk_info = get_disk_info()
  File "/tmp/user_data.sh.jNE4lC/bin/maas-wipe", line 165, in get_disk_info
  return {kname: get_disk_security_info(kname) for kname in list_disks()}
  File "/tmp/user_data.sh.jNE4lC/bin/maas-wipe", line 165, in <dictcomp>
  return {kname: get_disk_security_info(kname) for kname in list_disks()}
  File "/tmp/user_data.sh.jNE4lC/bin/maas-wipe", line 158, in get_disk_security_info
  return get_nvme_security_info(disk)
  File "/tmp/user_data.sh.jNE4lC/bin/maas-wipe", line 64, in get_nvme_security_info
  output = output.decode()
  UnicodeDecodeError: 'utf-8' codec can't decode byte 0x89 in position 385: invalid start byte
  
  This is due to maas_wipe.py running "nvme id-ctrl <device>" and parsing
  the results. The output should be human-readable string data, so
  decoding it as utf-8 should be safe for MAAS.
  
  Instead, the "fguid" field is being printed as binary data, and is not
  parsable as utf-8.
  
  For example, from comment #8:
  
  The user sees:
  
  `fguid : 2.`
  
  On closer inspection, the hex is:

  0x32,0x89,0x82,0x2E

  Note that it is cut off early, likely because the next byte would be
  0x00 and was interpreted as a string terminator.
  
  Fix nvme-cli such that we print out the fguid as a correct utf-8 string,
  so MAAS works as intended.
  
  [Testcase]
  
  Deploy Jammy onto a system that has an NVME device.
  
  $ sudo apt install nvme-cli
  
  Run the 'id-ctrl' command and look at the fguid entry:
  
  $ sudo nvme id-ctrl /dev/nvme1n1 | grep fguid
  fguid     :
  
  Due to the UUID being all zeros, the first byte was interpreted as a
  string terminator, and nothing was printed.
  
  There is a test package available in the following ppa:
  
  https://launchpad.net/~mruffell/+archive/ubuntu/sf387274-test
  
  If you install the test package, the fguid will be printed as a proper
  string:
  
  $ sudo nvme id-ctrl /dev/nvme1n1 | grep fguid
  fguid     : 00000000-0000-0000-0000-000000000000
  
  Also check that json output works as expected:
  
  $ sudo nvme id-ctrl -o json /dev/nvme1n1 | grep fguid
    "fguid" : "00000000-0000-0000-0000-000000000000",
  
  Additionally, test that the new package allows a MAAS-deployed system to
  be released correctly with the erase option enabled, as maas_wipe.py
  should now complete successfully.
  
  [Where problems could occur]
  
  We are changing the output of the 'id-ctrl' subcommand only; no other
  subcommands are affected. Users who, for whatever reason, rely on the
  broken, incomplete binary output might be impacted. Users doing a hard
  diff of the command output will now see the actual fguid, and may need
  to update their expected output. The fguid is now also supplied in the
  json output of 'id-ctrl', which may affect programs parsing the json
  object.
  
  There are no workarounds, and if a regression were to occur, it would
  only affect the 'id-ctrl' subcommand, and not change anything else.
  
  [Other info]
  
  Upstream bug:
  https://github.com/linux-nvme/nvme-cli/issues/1653
  
- This was fixed in the below commit:
+ This was fixed in the below commit in version 2.2, found in mantic and
+ later:
  
  commit 78b7ad235507ddd59c75c7fcc74fc6c927811f87
  From: Pierre Labat <plabat at micron.com>
  Date: Fri, 26 Aug 2022 17:02:08 -0500
  Subject: nvme-print: Print fguid as a UUID
  Link: https://github.com/linux-nvme/nvme-cli/commit/78b7ad235507ddd59c75c7fcc74fc6c927811f87
  
  The commit required a minor backport. Later versions contain a major
  refactor that changed nvme_uuid_to_string() among numerous other
  functions, which is not appropriate to backport. Instead, just take
  the current implementation of nvme_uuid_to_string() and move it as
  the patch suggests, so json output works correctly.

** Patch added: "Debdiff for nvme-cli on Jammy"
   https://bugs.launchpad.net/ubuntu/+source/nvme-cli/+bug/2051299/+attachment/5788974/+files/lp2051299_jammy.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Foundations Bugs, which is subscribed to nvme-cli in Ubuntu.
https://bugs.launchpad.net/bugs/2051299

Title:
  nvme-cli: fguid is printed as binary data and causes MAAS to fail
  erasing NVME disks

Status in MAAS:
  Triaged
Status in nvme-cli package in Ubuntu:
  Fix Released
Status in nvme-cli source package in Jammy:
  In Progress

Bug description:
  [Impact]

  When a user tries to release a system deployed with MAAS, that has
  erase disks on release set, erasing NVME disks fails on Jammy.

  Traceback (most recent call last):
  File "/tmp/user_data.sh.jNE4lC/bin/maas-wipe", line 542, in <module>
  main()
  File "/tmp/user_data.sh.jNE4lC/bin/maas-wipe", line 522, in main
  disk_info = get_disk_info()
  File "/tmp/user_data.sh.jNE4lC/bin/maas-wipe", line 165, in get_disk_info
  return {kname: get_disk_security_info(kname) for kname in list_disks()}
  File "/tmp/user_data.sh.jNE4lC/bin/maas-wipe", line 165, in <dictcomp>
  return {kname: get_disk_security_info(kname) for kname in list_disks()}
  File "/tmp/user_data.sh.jNE4lC/bin/maas-wipe", line 158, in get_disk_security_info
  return get_nvme_security_info(disk)
  File "/tmp/user_data.sh.jNE4lC/bin/maas-wipe", line 64, in get_nvme_security_info
  output = output.decode()
  UnicodeDecodeError: 'utf-8' codec can't decode byte 0x89 in position 385: invalid start byte

  This is due to maas_wipe.py running "nvme id-ctrl <device>" and
  parsing the results. The output should be human-readable string data,
  so decoding it as utf-8 should be safe for MAAS.

  Instead, the "fguid" field is being printed as binary data, and is not
  parsable as utf-8.

  For example, from comment #8:

  The user sees:

  `fguid : 2.`

  On closer inspection, the hex is:

  0x32,0x89,0x82,0x2E

  Note that it is cut off early, likely because the next byte would be
  0x00 and was interpreted as a string terminator.
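
  The decode failure can be reproduced in isolation. A minimal Python
  sketch (the byte values come from the hex above; the code is
  illustrative, not maas-wipe's actual implementation):

```python
# The raw line nvme-cli printed: "fguid     : " followed by the
# literal fguid bytes. 0x89 is not valid utf-8, so .decode() raises.
raw = b"fguid     : " + bytes([0x32, 0x89, 0x82, 0x2E])

try:
    raw.decode()  # mirrors output.decode() in get_nvme_security_info()
except UnicodeDecodeError as e:
    print(f"decode failed: {e.reason}")

# A defensive caller could decode with errors="backslashreplace" so
# parsing at least does not crash on unexpected binary fields:
print(raw.decode(errors="backslashreplace"))
```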

  Fix nvme-cli such that we print out the fguid as a correct utf-8
  string, so MAAS works as intended.

  [Testcase]

  Deploy Jammy onto a system that has an NVME device.

  $ sudo apt install nvme-cli

  Run the 'id-ctrl' command and look at the fguid entry:

  $ sudo nvme id-ctrl /dev/nvme1n1 | grep fguid
  fguid     :

  Due to the UUID being all zeros, the first byte was interpreted as a
  string terminator, and nothing was printed.
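
  The fix formats the 16 raw fguid bytes as a canonical UUID string.
  The corrected output can be sketched with Python's uuid module (an
  illustration of the formatting nvme_uuid_to_string() performs, not
  nvme-cli's actual C code):

```python
import uuid

# The fguid field in the id-ctrl data is 16 raw bytes. An all-zero
# fguid should print as the canonical nil UUID instead of being
# treated as an empty, NUL-terminated C string.
print(uuid.UUID(bytes=bytes(16)))  # the all-zeros case above

# A non-zero fguid formats the same way:
print(uuid.UUID(bytes=bytes(range(16))))
```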

  There is a test package available in the following ppa:

  https://launchpad.net/~mruffell/+archive/ubuntu/sf387274-test

  If you install the test package, the fguid will be printed as a proper
  string:

  $ sudo nvme id-ctrl /dev/nvme1n1 | grep fguid
  fguid     : 00000000-0000-0000-0000-000000000000

  Also check that json output works as expected:

  $ sudo nvme id-ctrl -o json /dev/nvme1n1 | grep fguid
    "fguid" : "00000000-0000-0000-0000-000000000000",

  Additionally, test that the new package allows a MAAS-deployed system
  to be released correctly with the erase option enabled, as
  maas_wipe.py should now complete successfully.

  [Where problems could occur]

  We are changing the output of the 'id-ctrl' subcommand only; no other
  subcommands are affected. Users who, for whatever reason, rely on the
  broken, incomplete binary output might be impacted. Users doing a hard
  diff of the command output will now see the actual fguid, and may need
  to update their expected output. The fguid is now also supplied in the
  json output of 'id-ctrl', which may affect programs parsing the json
  object.
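
  Programs that consume id-ctrl output may prefer the json interface,
  which avoids scraping the human-readable text. A hedged sketch (the
  helper names are mine; it assumes nvme-cli is installed and the
  caller has permission to query the device):

```python
import json
import subprocess

def parse_fguid(id_ctrl_json: bytes) -> str:
    # Extract the fguid from 'nvme id-ctrl -o json' output.
    return json.loads(id_ctrl_json)["fguid"]

def get_fguid(device: str) -> str:
    # Hypothetical helper: run nvme-cli and parse its json output
    # instead of decoding and scraping the text format.
    out = subprocess.run(
        ["nvme", "id-ctrl", "-o", "json", device],
        capture_output=True, check=True,
    ).stdout
    return parse_fguid(out)
```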

  There are no workarounds, and if a regression were to occur, it would
  only affect the 'id-ctrl' subcommand, and not change anything else.

  [Other info]

  Upstream bug:
  https://github.com/linux-nvme/nvme-cli/issues/1653

  This was fixed in the below commit in version 2.2, found in mantic and
  later:

  commit 78b7ad235507ddd59c75c7fcc74fc6c927811f87
  From: Pierre Labat <plabat at micron.com>
  Date: Fri, 26 Aug 2022 17:02:08 -0500
  Subject: nvme-print: Print fguid as a UUID
  Link: https://github.com/linux-nvme/nvme-cli/commit/78b7ad235507ddd59c75c7fcc74fc6c927811f87

  The commit required a minor backport. Later versions contain a major
  refactor that changed nvme_uuid_to_string() among numerous other
  functions, which is not appropriate to backport. Instead, just take
  the current implementation of nvme_uuid_to_string() and move it as
  the patch suggests, so json output works correctly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/maas/+bug/2051299/+subscriptions




More information about the foundations-bugs mailing list