Non-boot-drive swap space "unclean" and reconstructed every boot
Activity

Bug Clerk August 21, 2022 at 6:40 PM

Moonshine August 9, 2022 at 6:51 PM
This is still an issue in 22.02.3. I’m surprised more people aren’t noticing very long boots (15+ mins) with large arrays. Or is there some way to disable this swap creation process altogether?
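For what it’s worth, swap creation is controlled through the middleware, so something like the minimal sketch below should work, assuming the system.advanced "swapondrive" field (swap size in GiB) that 22.02-era releases expose. Setting it to 0 should stop swap partitions from being created for newly added disks, though existing swap mirrors would still need to be removed by hand.

from middlewared.client import Client

# Assumption: "swapondrive" is the advanced-settings swap size in GiB on
# 22.02-era SCALE; 0 disables swap partition creation for new disks.
with Client() as c:
    c.call("system.advanced.update", {"swapondrive": 0})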

Moonshine July 12, 2022 at 4:11 PM
Would really love it if someone could look into this one, as the swap rebuild is pretty costly on my system on each restart (~15 mins) and doesn’t seem to be necessary. I’d assume that, encrypted or not, the RAID swap partitions could be cleanly disabled on shutdown.
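A clean shutdown-side teardown might look something like the sketch below: swap off the encrypted device, close the dm-crypt mapping, then stop the md array so its superblock is marked clean. The device names are taken from the dmesg output in the description; real code would enumerate them from /proc/mdstat instead of hard-coding them.

import subprocess

# Illustrative names only; enumerate from /proc/mdstat in real code.
SWAP_MIRRORS = ("md123", "md124", "md125", "md126")

def teardown_swap_mirror(name: str) -> None:
    # Stop swapping on the encrypted device, close the dm-crypt mapping,
    # then stop the md array so its superblock is marked clean.
    subprocess.run(["swapoff", f"/dev/mapper/{name}"], check=True)
    subprocess.run(["cryptsetup", "close", name], check=True)
    subprocess.run(["mdadm", "--stop", f"/dev/{name}"], check=True)

for name in SWAP_MIRRORS:
    teardown_swap_mirror(name)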

TrueNAS User June 16, 2022 at 6:42 PM (edited)
I have the same issue on reboot, where I get the md/raid1/mdxxx "not clean - starting background reconstruction" messages.
However, after this happened I now have an additional side effect: after reboot I have no running Docker containers, and when trying to run the docker command it states there is no sock file and asks if Docker is running.
Booting back into 22.02.0.1 also gives the md/raid1/mdxxx "not clean" messages, but the Docker containers do show up and work.
Will try today to see what happens when I go back to 22.02.1.
Is there any way I can debug what the root cause is for both the md/raid1 situation and the disappearing Docker containers?
UPDATE 21:53
Booted back into 22.02.1 and now the Docker containers are up and running again. Really odd.
But I did get the "not clean" messages again, which finished rather quickly this boot.
The swap messages (see timestamps) appear at the same time as the kernel MD "not clean" messages.
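To compare those timestamps, a minimal sketch along these lines should do; nothing TrueNAS-specific is assumed beyond a systemd journal, and journalctl's -g/--grep needs a build with PCRE2 support, which Debian-based SCALE ships.

import subprocess

# Pull this boot's md and Docker journal lines so the timestamps can be
# compared side by side.
for pattern in ("md/raid1", "docker"):
    print(f"--- {pattern} ---")
    subprocess.run(
        ["journalctl", "-b", "--no-pager", "-g", pattern],
        check=False,
    )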

Ameer Hamza June 16, 2022 at 12:59 AM (edited)
Hello,
I got a chance to debug this issue. For any new pool creation, the middleware creates an additional encrypted RAID1 mirror of the swap-type partitions for crash prevention. However, these encrypted devices are recreated by the middleware on every system reboot, which is why the messages show up in the logs. I think the middleware team can better comment on whether this is something they should fix. Assigning this back to you to reassign to the middleware team.
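For context, the per-boot flow described above presumably looks something like the sketch below (an assumed reconstruction, not the actual middleware code; the partition paths are placeholders). Because the dm-crypt key is read from /dev/urandom and never stored, the swap contents cannot be reused across boots, and the freshly created mirror starts with the initial resync that shows up as "not clean" in dmesg.

import subprocess

def build_encrypted_swap(md: str, part_a: str, part_b: str) -> None:
    # (Re)create the RAID1 mirror over the two swap-type partitions; --run
    # skips the "array already contains data" confirmation prompt.
    subprocess.run(
        ["mdadm", "--create", f"/dev/{md}", "--run",
         "--level=1", "--raid-devices=2", part_a, part_b],
        check=True,
    )
    # Plain dm-crypt keyed from /dev/urandom: the key is never stored, so
    # the mapping cannot be reopened after a reboot and is rebuilt instead.
    subprocess.run(
        ["cryptsetup", "open", "--type", "plain",
         "--key-file", "/dev/urandom", f"/dev/{md}", md],
        check=True,
    )
    subprocess.run(["mkswap", f"/dev/mapper/{md}"], check=True)
    subprocess.run(["swapon", f"/dev/mapper/{md}"], check=True)

# Placeholder partition paths for illustration only.
build_encrypted_swap("md126", "/dev/sda1", "/dev/sdb1")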
Description
When SCALE is restarted, it appears the non-boot-drive MD RAID1 swap partitions are marked as "unclean" and reconstructed each time, even after a clean reboot with no storage device changes. It's as though they haven't been cleanly unmounted on shutdown. Debug and screenshot attached, and the relevant section from dmesg is below. I posted in the forums first, and another user sees the reconstruction in dmesg as well.
[ 56.569059] md/raid1:md126: not clean -- starting background reconstruction
[ 56.577989] md/raid1:md126: active with 2 out of 2 mirrors
[ 56.585013] md126: detected capacity change from 0 to 2147418624
[ 56.869518] md/raid1:md125: not clean -- starting background reconstruction
[ 56.878297] md/raid1:md125: active with 2 out of 2 mirrors
[ 56.885302] md125: detected capacity change from 0 to 2147418624
[ 57.243521] md: resync of RAID array md126
[ 57.339830] Adding 2097084k swap on /dev/mapper/md126. Priority:-3 extents:1 across:2097084k FS
[ 57.445881] md: resync of RAID array md125
[ 57.567717] Adding 2097084k swap on /dev/mapper/md125. Priority:-4 extents:1 across:2097084k FS
[ 67.816567] md: md126: resync done.
[ 68.126529] md: md125: resync done.
[ 69.172839] md/raid1:md124: not clean -- starting background reconstruction
[ 69.181374] md/raid1:md124: active with 2 out of 2 mirrors
[ 69.181947] md124: detected capacity change from 0 to 2147418624
[ 69.476112] md/raid1:md123: not clean -- starting background reconstruction
[ 69.484911] md/raid1:md123: active with 2 out of 2 mirrors
[ 69.491896] md123: detected capacity change from 0 to 2147418624
[ 69.836363] md: resync of RAID array md124
[ 69.939660] Adding 2097084k swap on /dev/mapper/md124. Priority:-5 extents:1 across:2097084k FS
[ 70.029991] md: resync of RAID array md123
[ 70.139651] Adding 2097084k swap on /dev/mapper/md123. Priority:-6 extents:1 across:2097084k FS
[ 85.850593] md: md124: resync done.
[ 87.365191] md: md123: resync done.
[ 117.069136] md: md127: resync done.
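A quick way to confirm after boot whether the swap mirrors came up dirty (an assumed helper, not part of the original report): read each array's state from sysfs. "clean" or "active" with sync_action "idle" is expected, while "resync" right after boot means the array was assembled from an unclean shutdown.

from pathlib import Path

# raid1 arrays expose both sysfs files; non-redundant levels may not.
for md in sorted(Path("/sys/block").glob("md*")):
    state = (md / "md" / "array_state").read_text().strip()
    action = (md / "md" / "sync_action").read_text().strip()
    print(f"{md.name}: array_state={state} sync_action={action}")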