

Haskell has long been my favoured scripting language, and in this post I thought I'd share one of my more IO-heavy scripts. I have an external hard drive that, due to regular dropping, is somewhat unreliable. I have a 1Gb file on this drive which I'd like to copy, but which is partly corrupted. I'd like to copy as much of it as I can.

In the past I've used JFileRecovery, which I thoroughly recommend. The basic algorithm is that it copies the file in chunks, and if a chunk copy exceeds a timeout it is discarded. It has a nice graphical interface, and some basic control over timeout and block sizes. Unfortunately, JFileRecovery didn't work for this file - it has three basic problems:

- The timeout sometimes fails to stop the IO, causing the program to hang.
- If the copy takes too long, it sometimes gives up before the end of the file.
- If the block size is too small it takes forever; if it is too large it drops large parts of the file.

To recover my file I needed something better, so I wrote a quick script in Haskell. The basic algorithm is to copy the file in 10Mb chunks. If any chunk fails to copy, I split it into smaller chunks and retry them after all other pending chunks. The result is that the file is complete after the first pass, but the program then goes back and recovers more information where it can. I can terminate the program at any point and still have a working file, but waiting longer will probably recover more of the file.

I have included the script at the bottom of this post. I ran this script from GHCi, and am not going to turn it into a proper program. If someone does wish to build on this script, please do so (I hereby place this code in the public domain, or if that is not possible, then under the licenses).

The script took about 15 minutes to write, and makes use of exceptions and file handles - not the kind of program traditionally associated with Haskell. A lot of hard work has been spent polishing the GHC runtime and the Haskell libraries (bytestring, exceptions). Now this work has been done, slotting together reasonably complex scripts is simple.

There are many limitations in this code, but it was sufficient to recover my file quickly and accurately.

{-# LANGUAGE ScopedTypeVariables #-}

import Data.ByteString (hGet, hPut)
import System.IO
import Control.Monad
import Control.Exception

src = "file on dodgy drive (source)"
dest = "file on safe drive (destination)"

-- initial chunk size, and the chunk size below which we give up
start = 10000000
stop = 1000

main :: IO ()
main =
    withBinaryFile src ReadMode $ \hSrc ->
    withBinaryFile dest WriteMode $ \hDest -> do
        nSrc <- hFileSize hSrc
        nDest <- hFileSize hDest
        when (nSrc /= nDest) $ hSetFileSize hDest nSrc
        copy hSrc hDest $ split start (0, nSrc)

copy :: Handle -> Handle -> [(Integer,Integer)] -> IO ()
copy hSrc hDest [] = return ()
copy hSrc hDest chunks = do
    putStrLn $ "Copying " ++ show (length chunks) ++ " chunks of at most " ++ show (snd $ head chunks) ++ " bytes"
    chunks <- forM chunks $ \(from,len) -> do
        res <- Control.Exception.try $ do
            hSeek hSrc AbsoluteSeek from
            hSeek hDest AbsoluteSeek from
            bs <- hGet hSrc $ fromIntegral len
            hPut hDest bs
        case res of
            -- a failed chunk prints '#' and is requeued at a fifth of the size
            Left (_ :: IOException) -> do putChar '#'; return $ split (len `div` 5) (from,len)
            Right _ -> do putChar '.'; return []
    putChar '\n'
    copy hSrc hDest $ concat chunks

split :: Integer -> (Integer,Integer) -> [(Integer,Integer)]
split i (a,b) | i < stop  = []
              | i >= b    = [(a,b)]
              | otherwise = (a,i) : split i (a+i, b-i)
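The chunk arithmetic in split is the part most worth checking, so here is a small standalone sketch of it. The sizes are illustrative only (the real script uses start = 10000000 and stop = 1000; here stop is lowered to 1 so tiny examples still split rather than giving up):

```haskell
-- Standalone copy of the script's split function, with stop lowered to 1
-- purely so that small illustrative sizes work.
stop :: Integer
stop = 1

split :: Integer -> (Integer, Integer) -> [(Integer, Integer)]
split i (a,b) | i < stop  = []                       -- chunk size too small: give up on this range
              | i >= b    = [(a,b)]                  -- the rest fits in one chunk
              | otherwise = (a,i) : split i (a+i, b-i)

main :: IO ()
main = do
    -- a 10-byte range in chunks of 4: two full chunks plus a 2-byte tail
    print $ split 4 (0,10)   -- [(0,4),(4,4),(8,2)]
    -- what happens after a failure: the 10-byte chunk is retried
    -- at a fifth of its old size
    print $ split (10 `div` 5) (0,10)   -- [(0,2),(2,2),(4,2),(6,2),(8,2)]
```

Each pair is (offset, length), which is why copy can seek both handles to the same absolute position before reading and writing.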