I have a ZFS pool that had two drives go bad on the same day. It is now stuck in a continuous resilver that never finishes. In the meantime, I'm trying to copy some of the data off to another file server, but the volume is almost unusable (disk access around 500 kB/s). The server reboots when the resilver reaches about 70%, and then it starts all over again.

I'm looking for two pieces of advice:

1) Can I stop the resilver temporarily so I can copy the data I need off the pool? (It's about 1 TB in total.)

2) Is this array salvageable? It looks like mirror-1 has errors on both of its mirrored drives, and from my understanding that is not something that can normally be recovered from.
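For the copy, what I have in mind is a resumable pull along these lines (the host name and paths are placeholders for my actual setup), so each reboot doesn't force a restart from zero:

```shell
# Pull the ~1 TB I need to keep; --partial retains partially
# transferred files so the copy resumes after each crash/reboot.
# 'backuphost' and both paths are placeholders.
rsync -aH --partial --progress /primary_vol/needed/ backuphost:/srv/recovery/
```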

  pool: primary_vol
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sat Dec 28 20:01:33 2019
        105G scanned out of 7.28T at 15.7M/s, 133h2m to go
        26.2G resilvered, 1.41% done
config:

        NAME           STATE     READ WRITE CKSUM
        primary_vol    DEGRADED   215     0     0
          mirror-0     ONLINE       0     0     0
            c0t12d1    ONLINE       0     0     0
            c0t13d1    ONLINE       0     0     0
          mirror-1     DEGRADED   215     0    35
            spare-0    DEGRADED   430     0     0
              c0t15d1  FAULTED      0     0     0  too many errors
              c0t21d1  ONLINE       0     0   430  (resilvering)
            c0t18d1    DEGRADED   215     0    59  too many errors  (resilvering)
          mirror-2     ONLINE       0     0     0
            c0t19d1    ONLINE       0     0     0
            c0t20d1    ONLINE       0     0     0
          mirror-3     DEGRADED     0     0     0
            c0t24d1    ONLINE       0     0     0
            c0t22d1    UNAVAIL      0     0     0  cannot open
        logs
          c0t16d1      ONLINE       0     0     0
        spares
          c0t21d1      INUSE     currently in use

errors: 184 data errors, use '-v' for a list

zpool status -v shows the following errors. These are all files I don't care about. Would deleting them help at all?
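If deleting them is worthwhile, I assume the sequence would be something like the following (pool name from above; the file path is a placeholder for the paths the -v output lists). I'm not sure whether clearing the error counters afterwards would help the resilver converge:

```shell
zpool status -v primary_vol            # list the damaged files
rm '/primary_vol/some/damaged/file'    # remove each file I don't need
zpool clear primary_vol                # then reset the pool's error counters
```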