Details
Assignee: Triage Team
Reporter: Dragoon Aethis
Impact: Low
Priority: Undefined
Created October 15, 2024 at 6:56 PM
Updated October 24, 2024 at 1:40 PM
Resolved October 24, 2024 at 1:40 PM
The scenario is to use Cloud Sync to Pull with Copy from a Synology NAS via SFTP onto the local TrueNAS dataset.
Synology stores its data on btrfs volumes mounted under root, like /volume1, /volume2, etc. Their equivalent of a dataset is a "shared folder"; a shared folder named "archive" is available locally under /volume1/archive. Shared folder names are unique within a system - you cannot have /volume1/archive and /volume2/archive at the same time.
Its SFTP server is patched to create a "virtual file system" that essentially hides everything except the shared folders that a given user has access to. The SFTP path for the "archive" above would be just /archive.
In the TrueNAS UI, you must select "/archive" from the SFTP file picker - it does not allow entering the real "/volume1/archive" path, which is available over SSH but not SFTP.
When rclone starts pulling data, it downloads each file, then tries to verify the checksum both locally and on the remote device. This fails because rclone tries to run md5sum /path/to/file using the SFTP path over SSH. In the logs, it looks like this:
2024/10/15 15:06:02 ERROR : FILENAME: Failed to calculate src hash: failed to calculate md5 hash: failed to run "md5sum /archive/FILENAME": md5sum: /archive/FILENAME: No such file or directory: Process exited with status 1
2024/10/15 15:06:02 ERROR : FILENAME.jihetus1.partial: Failed to calculate dst hash: hash: failed to read: context canceled
2024/10/15 15:06:02 ERROR : FILENAME.jihetus1.partial: corrupted on transfer: md5 hashes differ src(sftp://user@synology.lan:12345//archive) "" vs dst(Local file system at /mnt/nvme4x4/archive) ""
2024/10/15 15:06:02 INFO : FILENAME.jihetus1.partial: Removing failed copy
It is not obvious that something is going wrong. I/O meters look fine on both sides, but on TrueNAS you have to inspect the copied data to realize that it is being downloaded and then immediately deleted. While the rclone job is running, its logs are not yet available.
This can be fixed by passing the --sftp-path-override flag to rclone, but there is no way to do that from the TrueNAS UI. It would be nice to be able to provide this (or arbitrary) arguments to rclone in the UI when needed.

Extra: The brand new TrueNAS box just spent the last 6 hours downloading, writing to QLC NVMe storage, then immediately deleting around 2 terabytes of data, so I might as well spend 5 minutes writing this up. :)