In this article I will explain how you can mount an Amazon S3 bucket as a network drive on Windows using free tools (no TntDrive or CloudBerry Explorer). There are other ways, such as AWS Storage Gateway, but here I will stick to free tools.

This is the result of the tutorial:

We will use the following tools:

Rclone: a command-line program to sync files and directories to and from multiple cloud storage providers, including AWS, Azure Blob Storage, and Google Cloud Storage.

https://rclone.org/

WinFSP: Windows File System Proxy, which makes it possible to mount the folders as a drive.

NSSM: to install the rclone mount as a Windows service.

We will start by installing rclone using the PowerShell script below:

Invoke-Expression ((Invoke-WebRequest -Uri "https://gist.githubusercontent.com/justusiv/1ff2ad273cea3e33ca4acc5cab24c8e0/raw").content)
$silent = mkdir c:\rclone -ErrorAction SilentlyContinue
install-rclone -location c:\rclone
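As a quick sanity check (assuming the install script above placed the binary in c:\rclone, per the -location argument), you can print the installed version:

```powershell
# Path assumed from the -location argument used above
C:\rclone\rclone.exe version
```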

Next, we will start the configuration of rclone by running:

rclone config

Type n for a new remote.

Go to the AWS console and create an S3 bucket if you don’t have one already; I named mine builtwithclouds3bucket.

Go to the IAM Management Console -> Security Credentials and create an access key.

Follow the configuration wizard as shown below:

C:\rclone>rclone config
2019/07/11 23:34:10 NOTICE: Config file "C:\\Users\\admin\\.config\\rclone\\rclone.conf" not found - using defaults
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> mys3bucket
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / 1Fichier \ "fichier"
 2 / A stackable unification remote, which can appear to merge the contents of several remotes \ "union"
 3 / Alias for an existing remote \ "alias"
 4 / Amazon Drive \ "amazon cloud drive"
 5 / Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc) \ "s3"
 6 / Backblaze B2 \ "b2"
 7 / Box \ "box"
 8 / Cache a remote \ "cache"
 9 / Dropbox \ "dropbox"
10 / Encrypt/Decrypt a remote \ "crypt"
11 / FTP Connection \ "ftp"
12 / Google Cloud Storage (this is not Google Drive) \ "google cloud storage"
13 / Google Drive \ "drive"
14 / Google Photos \ "google photos"
15 / Hubic \ "hubic"
16 / JottaCloud \ "jottacloud"
17 / Koofr \ "koofr"
18 / Local Disk \ "local"
19 / Mega \ "mega"
20 / Microsoft Azure Blob Storage \ "azureblob"
21 / Microsoft OneDrive \ "onedrive"
22 / OpenDrive \ "opendrive"
23 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) \ "swift"
24 / Pcloud \ "pcloud"
25 / QingCloud Object Storage \ "qingstor"
26 / SSH/SFTP Connection \ "sftp"
27 / Webdav \ "webdav"
28 / Yandex Disk \ "yandex"
29 / http Connection \ "http"
Storage> 5
** See help for s3 backend at: https://rclone.org/s3/ **
Choose your S3 provider.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Amazon Web Services (AWS) S3 \ "AWS"
 2 / Alibaba Cloud Object Storage System (OSS) formerly Aliyun \ "Alibaba"
 3 / Ceph Object Storage \ "Ceph"
 4 / Digital Ocean Spaces \ "DigitalOcean"
 5 / Dreamhost DreamObjects \ "Dreamhost"
 6 / IBM COS S3 \ "IBMCOS"
 7 / Minio Object Storage \ "Minio"
 8 / Netease Object Storage (NOS) \ "Netease"
 9 / Wasabi Object Storage \ "Wasabi"
10 / Any other S3 compatible provider \ "Other"
provider> 1
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Enter a boolean value (true or false). Press Enter for the default ("false").
Choose a number from below, or type in your own value
 1 / Enter AWS credentials in the next step \ "false"
 2 / Get AWS credentials from the environment (env vars or IAM) \ "true"
env_auth> 1
AWS Access Key ID. Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
access_key_id> YOUR ACCESS KEY ID
AWS Secret Access Key (password). Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
secret_access_key> YOUR ACCESS KEY SECRET
Region to connect to.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / The default endpoint - a good choice if you are unsure. US Region, Northern Virginia or Pacific Northwest. Leave location constraint empty. \ "us-east-1"
 2 / US East (Ohio) Region. Needs location constraint us-east-2. \ "us-east-2"
 3 / US West (Oregon) Region. Needs location constraint us-west-2. \ "us-west-2"
 4 / US West (Northern California) Region. Needs location constraint us-west-1. \ "us-west-1"
 5 / Canada (Central) Region. Needs location constraint ca-central-1. \ "ca-central-1"
 6 / EU (Ireland) Region. Needs location constraint EU or eu-west-1. \ "eu-west-1"
 7 / EU (London) Region. Needs location constraint eu-west-2. \ "eu-west-2"
 8 / EU (Stockholm) Region. Needs location constraint eu-north-1. \ "eu-north-1"
 9 / EU (Frankfurt) Region. Needs location constraint eu-central-1. \ "eu-central-1"
10 / Asia Pacific (Singapore) Region. Needs location constraint ap-southeast-1. \ "ap-southeast-1"
11 / Asia Pacific (Sydney) Region. Needs location constraint ap-southeast-2. \ "ap-southeast-2"
12 / Asia Pacific (Tokyo) Region. Needs location constraint ap-northeast-1. \ "ap-northeast-1"
13 / Asia Pacific (Seoul). Needs location constraint ap-northeast-2. \ "ap-northeast-2"
14 / Asia Pacific (Mumbai). Needs location constraint ap-south-1. \ "ap-south-1"
15 / South America (Sao Paulo) Region. Needs location constraint sa-east-1. \ "sa-east-1"
region> 1
Endpoint for S3 API. Leave blank if using AWS to use the default endpoint for the region.
Enter a string value. Press Enter for the default ("").
endpoint>
Location constraint - must be set to match the Region. Used when creating buckets only.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Empty for US Region, Northern Virginia or Pacific Northwest. \ ""
 2 / US East (Ohio) Region. \ "us-east-2"
 3 / US West (Oregon) Region. \ "us-west-2"
 4 / US West (Northern California) Region. \ "us-west-1"
 5 / Canada (Central) Region. \ "ca-central-1"
 6 / EU (Ireland) Region. \ "eu-west-1"
 7 / EU (London) Region. \ "eu-west-2"
 8 / EU (Stockholm) Region. \ "eu-north-1"
 9 / EU Region. \ "EU"
10 / Asia Pacific (Singapore) Region. \ "ap-southeast-1"
11 / Asia Pacific (Sydney) Region. \ "ap-southeast-2"
12 / Asia Pacific (Tokyo) Region. \ "ap-northeast-1"
13 / Asia Pacific (Seoul) \ "ap-northeast-2"
14 / Asia Pacific (Mumbai) \ "ap-south-1"
15 / South America (Sao Paulo) Region. \ "sa-east-1"
location_constraint>
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server side copying objects as S3 doesn't copy the ACL from the source but rather writes a fresh one.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Owner gets FULL_CONTROL. No one else has access rights (default). \ "private"
 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access. \ "public-read"
 3 / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. Granting this on a bucket is generally not recommended. \ "public-read-write"
 4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. \ "authenticated-read"
 5 / Object owner gets FULL_CONTROL. Bucket owner gets READ access. If you specify this canned ACL when creating a bucket, Amazon S3 ignores it. \ "bucket-owner-read"
 6 / Both the object owner and the bucket owner get FULL_CONTROL over the object. If you specify this canned ACL when creating a bucket, Amazon S3 ignores it. \ "bucket-owner-full-control"
acl> 1
The server-side encryption algorithm used when storing this object in S3.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / None \ ""
 2 / AES256 \ "AES256"
 3 / aws:kms \ "aws:kms"
server_side_encryption> 1
If using KMS ID you must provide the ARN of Key.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / None \ ""
 2 / arn:aws:kms:* \ "arn:aws:kms:us-east-1:*"
sse_kms_key_id> 1
The storage class to use when storing new objects in S3.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Default \ ""
 2 / Standard storage class \ "STANDARD"
 3 / Reduced redundancy storage class \ "REDUCED_REDUNDANCY"
 4 / Standard Infrequent Access storage class \ "STANDARD_IA"
 5 / One Zone Infrequent Access storage class \ "ONEZONE_IA"
 6 / Glacier storage class \ "GLACIER"
 7 / Glacier Deep Archive storage class \ "DEEP_ARCHIVE"
 8 / Intelligent-Tiering storage class \ "INTELLIGENT_TIERING"
storage_class> 1
Edit advanced config? (y/n)
y) Yes
n) No
y/n> n
Remote config
--------------------
[mys3bucket]
type = s3
provider = AWS
env_auth = false
access_key_id = your access key id
secret_access_key = your access key secret
region = us-east-1
acl = private
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:

Name                 Type
====                 ====
mys3bucket           s3

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> q
C:\rclone>

Next, in IAM we will create a policy that grants access to this S3 bucket only:
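The policy itself was shown as a screenshot; a minimal sketch of an IAM policy scoped to this one bucket could look like the following (the bucket name is the one created above, and the chosen actions are an assumption you should tighten or extend to fit your needs):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListOnlyThisBucket",
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::builtwithclouds3bucket"
    },
    {
      "Sid": "ReadWriteObjectsInThisBucket",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::builtwithclouds3bucket/*"
    }
  ]
}
```

Note that this policy deliberately omits s3:ListAllMyBuckets, which is why listing all buckets fails in the next step.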

Next, create a group, attach the policy to it, and add the user to the group.

Now if you try to list all buckets using the command:

rclone lsd mys3bucket:

you will receive an access denied error, since the policy prohibits listing other buckets. Next, create a folder in the S3 bucket.

Now, if you specify the bucket name, you can see its contents.
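For example, using the remote and bucket names from this tutorial, these commands should now succeed:

```powershell
# List directories, then individual objects, inside the permitted bucket
rclone lsd mys3bucket:builtwithclouds3bucket
rclone ls mys3bucket:builtwithclouds3bucket
```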

Next, we need to install WinFSP using Chocolatey:

choco install winfsp -y

Now we are ready to mount our bucket on Windows as drive S: using the command below:

rclone mount mys3bucket:builtwithclouds3bucket/ S: --vfs-cache-mode full

Without "--vfs-cache-mode", rclone can only write files sequentially and can only seek when reading. This means that many applications won’t work with their files on an rclone mount without "--vfs-cache-mode writes" or "--vfs-cache-mode full". See the File Caching section of the rclone docs for more info.

The mount will disappear once you reboot. To make it persistent, you can use NSSM, an easy way to create a Windows service:

https://nssm.cc/
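As a sketch, the service could be set up like this (the service name rclone-mount is my choice, and the paths assume the install locations used earlier; note that a service runs under a different account, so the rclone config file location may need to be passed explicitly with --config):

```powershell
# Install a service that runs the mount command from above, then start it
nssm install rclone-mount c:\rclone\rclone.exe mount mys3bucket:builtwithclouds3bucket/ S: --config c:\rclone\rclone.conf --vfs-cache-mode full
nssm start rclone-mount
```

If you go this route, copy (or re-create) your rclone.conf at the path you pass to --config so the service account can read it.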