Issues
- NAS-129221 (Resolved, Triage Team): TrueNAS SCALE unexpectedly ground to a near halt with no obvious reason why
- NAS-128779 (Resolved, Vladimir Vinogradenko): [EFAULT] [EFAULT] Failed to generate debug: OSError(24, 'Too many open files')
- NAS-127349 (Resolved, Ameer Hamza): Add a forceNVME flag to openSeaChest
- NAS-125662 (Resolved, Ameer Hamza): Block Cloning Testing for zfs-2.2
- NAS-116927 (Resolved, Ameer Hamza): ZFS replication causing unscheduled reboot on destination
- NAS-116147 (Resolved, Triage Team): Encountering fairly frequent panics during ZFS replication
- NAS-114209 (Resolved, Triage Team): TrueNAS SCALE is unexpectedly restarting
- NAS-106404 (Resolved, Triage Team): Regression in cxlowest setting
- NAS-105326 (Resolved, Matthew Macy): Kernel Panic: Page Fault in mdsnd. Possibly IPv6 autoconf related.
- NAS-102871 (Resolved, Triage Team): Static Routes do not apply after reboot
- NAS-102846 (Resolved, Triage Team): gptid labels are disappearing on certain drives running 11.2-U5
- NAS-102174 (Resolved, Triage Team): If the DNS server is not working correctly FreeNAS hangs on boot.
This impacted the whole system and started out of the blue. SMB, SSH, and ZFS were all incredibly slow to respond. Some apps kept going offline and coming back online, notably NextCloud, and many kube-system pods were crashing continuously. The web UI was very slow to respond; even 2FA took so long to process that the token timed out. Capturing a debug was a headache because the UI timed out before it could finish, kicking me back to the login screen. Even running sudo to get to a root shell sometimes had me waiting minutes.
I was forced to restart the entire system to get it back to normal operation.
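Given the related report of OSError(24, 'Too many open files') in NAS-128779, one quick thing worth checking the next time the system grinds to a halt is whether the machine is running out of file handles. Below is a minimal sketch using the standard Linux /proc interface; it is an illustrative diagnostic I would run from SSH, not something taken from the ticket, and the 90% warning threshold is an arbitrary assumption.

```python
#!/usr/bin/env python3
# Minimal sketch: report system-wide file-handle usage on Linux.
# /proc/sys/fs/file-nr contains three fields: allocated handles,
# unused handles, and the maximum (the fs.file-max sysctl).
from pathlib import Path

def fd_usage():
    allocated, _unused, maximum = Path("/proc/sys/fs/file-nr").read_text().split()
    return int(allocated), int(maximum)

if __name__ == "__main__":
    used, limit = fd_usage()
    pct = 100 * used / limit
    print(f"open file handles: {used}/{limit} ({pct:.1f}%)")
    if pct > 90:  # illustrative threshold, not from the ticket
        print("warning: close to the system-wide limit (fs.file-max)")
```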