Thanks for using the TrueNAS Community Edition issue tracker! TrueNAS Enterprise users receive direct support for their reports from our support portal.

Failed error when expanding encrypted pool

Description

I had an encrypted RAID-Z2 pool with 6x2TB disks, whose disks I replaced over time with 4TB drives. I expected the pool to grow automatically, but that did not happen, so I tried clicking 'Expand Pool'.
After some time the following error appeared:

Error: Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 367, in run
    await self.future
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 403, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 973, in nf
    return await f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/pool_/expand.py", line 101, in expand
    await self.__geli_resize(pool, geli_resize, options)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/pool_/expand.py", line 186, in __geli_resize
    raise CallError(
middlewared.service_exception.CallError: [EFAULT] Resizing partitions of your encrypted pool failed and rolling back changes failed too. You'll need to run the following commands manually:
gpart resize -a 4k -i 2 -s 3998639460352 da1
gpart resize -a 4k -i 2 -s 3998639460352 da4
gpart resize -a 4k -i 2 -s 3998639460352 da5
gpart resize -a 4k -i 2 -s 3998639460352 da3
gpart resize -a 4k -i 2 -s 3998639460352 da6
gpart resize -a 4k -i 2 -s 3998639460352 da2
gpart resize -a 4k -i 1 -s 500107776000 nvd1
gpart resize -a 4k -i 1 -s 1000204800000 nvd0

I tried to issue one of the gpart commands on a data disk (nvd1/nvd0 are NVMe SSDs for read and write cache; they are fine) and got this error:

root@gaia:~ # gpart resize -a 4k -i 2 -s 3998639460352 da1
gpart: size '3998639460352': Invalid argument

For what it's worth, I also executed:

root@gaia:~ # gpart show da1
=>        40  7814037088  da1  GPT  (3.6T)
          40          88       - free -  (44K)
         128     4194304    1  freebsd-swap  (2.0G)
     4194432  7809842696    2  freebsd-zfs  (3.6T)
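For reference, the numbers in the gpart output above already account for the requested size. A quick sanity check (a sketch, assuming the usual 512-byte logical sectors gpart reports in):

```python
# Sanity check (assumes 512-byte logical sectors).
SECTOR_SIZE = 512
zfs_partition_sectors = 7809842696      # size of partition index 2 from `gpart show da1`
requested_resize_bytes = 3998639460352  # -s argument of the failed `gpart resize`

partition_bytes = zfs_partition_sectors * SECTOR_SIZE
print(partition_bytes)                               # 3998639460352
print(partition_bytes == requested_resize_bytes)     # True
```

In other words, the freebsd-zfs partition already spans exactly the size the failed resize asked for, which would explain gpart rejecting a resize to the same size.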

What can I do? Thanks for your help in advance!

Problem/Justification

None

Impact

None

Activity


Bug Clerk August 26, 2021 at 10:00 AM

Christian FitzGerald Forberg August 25, 2021 at 9:25 PM

Many thanks! I only executed the first command:

zpool online -e tank gptid/b1c9361e-002c-11ea-8295-0cc47a401407.eli

And voila the pool is extended:

root@gaia:~ # zpool list -v tank
NAME                                                 SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
tank                                                21.8T  9.57T  12.2T        -         -    24%    43%  1.00x    ONLINE  /mnt
  raidz2                                            21.8T  9.57T  12.2T        -         -    24%  43.9%      -    ONLINE
    gptid/b1c9361e-002c-11ea-8295-0cc47a401407.eli      -      -      -        -         -      -      -      -    ONLINE
    gptid/296847ee-ef49-11eb-acf9-0cc47a401407.eli      -      -      -        -         -      -      -      -    ONLINE
    gptid/c9232c78-f124-11eb-acf9-0cc47a401407.eli      -      -      -        -         -      -      -      -    ONLINE
    gptid/3f40249d-be00-11eb-ad1f-0cc47a401407.eli      -      -      -        -         -      -      -      -    ONLINE
    gptid/5bbb504b-f439-11eb-bf5f-0cc47a401407.eli      -      -      -        -         -      -      -      -    ONLINE
    gptid/d3a564cc-c9df-11e9-930e-0cc47a401407.eli      -      -      -        -         -      -      -      -    ONLINE
logs                                                    -      -      -        -         -      -      -      -  -
  gptid/a0eaffe0-ef40-11eb-acf9-0cc47a401407.eli     465G  10.5M   465G        -         -     0%  0.00%      -    ONLINE
cache                                                   -      -      -        -         -      -      -      -  -
  gptid/978c7bcb-ef81-11eb-acf9-0cc47a401407.eli     932G   182G   750G        -         -     0%  19.5%      -    ONLINE

But I have to admit that I did not execute 'zpool list -v tank' before running the command, though I doubt the pool extended itself overnight.

So thanks a million; as you said, ZFS just needed a little nudge.

Alexander Motin August 25, 2021 at 7:43 PM

You may also run `zdb -U /data/zfs/zpool.cache` to get some more information.

Alexander Motin August 25, 2021 at 7:38 PM
Edited

Weird. Previously EXPANDSZ was reported, but now it is not. The GELI providers I see are at the proper 3.6TB, so the problem must be in ZFS. Could you try running:

zpool online -e tank gptid/b1c9361e-002c-11ea-8295-0cc47a401407.eli
zpool online -e tank gptid/296847ee-ef49-11eb-acf9-0cc47a401407.eli
zpool online -e tank gptid/c9232c78-f124-11eb-acf9-0cc47a401407.eli
zpool online -e tank gptid/3f40249d-be00-11eb-ad1f-0cc47a401407.eli
zpool online -e tank gptid/5bbb504b-f439-11eb-bf5f-0cc47a401407.eli
zpool online -e tank gptid/d3a564cc-c9df-11e9-930e-0cc47a401407.eli
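The per-member commands above can also be generated in a small loop. This hypothetical sketch only prints the commands for review rather than executing them (the gptid list is copied from the commands above; `pool=tank` is taken from this report):

```shell
# Print (not run) the expansion nudge for each raidz2 member.
pool=tank
for gptid in \
    b1c9361e-002c-11ea-8295-0cc47a401407 \
    296847ee-ef49-11eb-acf9-0cc47a401407 \
    c9232c78-f124-11eb-acf9-0cc47a401407 \
    3f40249d-be00-11eb-ad1f-0cc47a401407 \
    5bbb504b-f439-11eb-bf5f-0cc47a401407 \
    d3a564cc-c9df-11e9-930e-0cc47a401407
do
    echo "zpool online -e $pool gptid/$gptid.eli"
done
```

Piping the output through a shell (or dropping the `echo`) would run the commands; printing first makes it easy to double-check the device names against `zpool status`.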

Christian FitzGerald Forberg August 24, 2021 at 9:54 PM

I have attached a current debug file.

Complete

Details

Impact

Medium

Created August 4, 2021 at 6:04 PM
Updated July 6, 2022 at 8:58 PM
Resolved August 26, 2021 at 4:11 PM
