UI does not allow REMOVE operations on single-device data vdevs
Description
The UI offers no button to remove a single-device data vdev, whether from an all-stripe pool or from a pool that also contains mirrors. This action should be made visible so that users can recover from an incorrectly designed pool or an accidental single-device addition.
Replication Instructions
1. Using the latest CORE, attach three disks to a machine.
2. Create a pool consisting of two disks in a stripe.
3. Optionally, EXTEND one stripe disk into a mirror.
4. Note that the mirror vdev offers DETACH for both child disks and REMOVE for the vdev itself, but the single-device vdev offers only EXTEND and OFFLINE. OFFLINE fails with EZFS_NOREPLICAS, since offlining the only device in a non-redundant top-level vdev would leave no valid copy of its data.
Issue confirmed present in TrueNAS CORE 13.0-U5.1; it cannot be reproduced in TrueNAS SCALE 22.12.3.1.
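For reference, a minimal sketch of the same pool layout built from the CLI, assuming the three disks appear as da1, da2, and da3 (device names are illustrative; TrueNAS itself references disks by gptid, and pools should normally be created through the UI):

# Create a pool with two single-device (stripe) top-level data vdevs
zpool create removal-test da1 da2

# Optionally convert one stripe disk into a mirror (the UI's EXTEND)
zpool attach removal-test da1 da3

# The pool now holds mirror-0 plus one single-device data vdev
zpool status removal-test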
The operation succeeds via the CLI:
root@truenas[]# zpool status removal-test
  pool: removal-test
 state: ONLINE
  scan: resilvered 6.48M in 00:00:00 with 0 errors on Fri Jul 7 08:50:17 2023
config:

        NAME                                            STATE     READ WRITE CKSUM
        removal-test                                    ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/f17e8a27-1cdd-11ee-b0b9-000c29a7a1c2  ONLINE       0     0     0
            gptid/f85e4b58-1cdd-11ee-b0b9-000c29a7a1c2  ONLINE       0     0     0
          gptid/f17cbd82-1cdd-11ee-b0b9-000c29a7a1c2    ONLINE       0     0     0

errors: No known data errors
root@truenas[]# zpool remove removal-test gptid/f17cbd82-1cdd-11ee-b0b9-000c29a7a1c2
root@truenas[]# zpool status removal-test
  pool: removal-test
 state: ONLINE
  scan: resilvered 6.48M in 00:00:00 with 0 errors on Fri Jul 7 08:50:17 2023
remove: Removal of vdev 1 copied 3.30M in 0h0m, completed on Fri Jul 7 08:50:57 2023
        576 memory used for removed device mappings
config:

        NAME                                            STATE     READ WRITE CKSUM
        removal-test                                    ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/f17e8a27-1cdd-11ee-b0b9-000c29a7a1c2  ONLINE       0     0     0
            gptid/f85e4b58-1cdd-11ee-b0b9-000c29a7a1c2  ONLINE       0     0     0

errors: No known data errors
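For context, zpool remove on a top-level data vdev evacuates its allocated blocks onto the remaining vdevs and retains an indirect mapping for relocated data, which is what the "memory used for removed device mappings" line reports. A short sketch for following the (asynchronous) removal from the CLI, assuming OpenZFS 2.0 or later:

# Progress appears under the "remove:" line while evacuation runs
zpool status removal-test

# Block until the removal finishes
zpool wait -t remove removal-test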
Problem/Justification
None
Impact
None
Attachments
1 attachment (22 Jun 2023, 07:18 PM)
Activity
Automation for Jira July 12, 2023 at 2:34 PM
This issue has now been closed. Comments made after this point may not be viewed by the TrueNAS Teams. Please open a new issue if you have found a problem or need to re-engage with the TrueNAS Engineering Teams.
Opting not to fix this in CORE, as it is already addressed in SCALE. It is a rather obscure edge case, and I don't want to introduce churn in our more stable code over it.
Chris Peredun July 11, 2023 at 1:47 PM
Reopened - this is still an issue in CORE 13.0-U5.2