Many organizations use SFTP (SSH File Transfer Protocol) as part of long-established data processing and partner integration workflows. While it would be easy to dismiss these systems as “legacy,” the reality is that they serve a useful purpose and will continue to do so for quite some time. We want to help our customers to move these workflows to the cloud in a smooth, non-disruptive way.

AWS Transfer for SFTP

Today we are launching AWS Transfer for SFTP, a fully managed, highly available SFTP service. You simply create a server, set up user accounts, and associate the server with one or more Amazon Simple Storage Service (S3) buckets. You have fine-grained control over user identity, permissions, and keys. You can create users within Transfer for SFTP or make use of an existing identity provider, and you can use IAM policies to control the level of access granted to each user. You can also bring your existing DNS name and SSH public keys, making it easy for you to migrate to Transfer for SFTP. Your customers and partners will continue to connect and make transfers as usual, with no changes to their existing workflows.

You have full access to the underlying S3 buckets and can make use of many different S3 features, including lifecycle policies, multiple storage classes, several options for server-side encryption, versioning, and so forth. You can write AWS Lambda functions to build an “intelligent” FTP site that processes incoming files as soon as they are uploaded, query the files in situ using Amazon Athena, and easily connect to your existing data ingestion process. On the outbound side, you can generate reports, documents, manifests, custom software builds, and so forth using other AWS services, and then store them in S3 for easy, controlled distribution to your customers and partners.
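As a concrete illustration of the “process incoming files as soon as they are uploaded” idea, here is a minimal sketch of a Lambda handler wired to S3 event notifications. The bucket and key values are whatever your uploads produce; the handler below only logs and collects them, leaving real processing as a placeholder.

```python
import urllib.parse

def handler(event, context):
    """Minimal sketch of a post-upload Lambda handler for S3 event
    notifications. It logs each newly uploaded object and returns the
    list of (bucket, key) pairs; real processing would go where the
    print statement is."""
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # Object keys in S3 event notifications are URL-encoded
        # (spaces arrive as '+'), so decode before use.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        print(f"New upload: s3://{bucket}/{key}")
        processed.append((bucket, key))
    return processed
```

You can exercise the handler locally by passing it a hand-built event in the S3 notification shape, which is also a convenient way to unit-test the processing logic before deploying.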

Creating a Server

To get started, I open up the AWS Transfer for SFTP Console and click Create server:

I can have Transfer for SFTP manage user names and passwords, or I can access an existing LDAP or Active Directory identity provider via API Gateway. I can use an Amazon Route 53 DNS alias or an existing hostname, and I can tag my server. I start with the default values and click Create server to actually create my SFTP server:

It is up and running within minutes:

Now I can add a user or two! I select the server and click Add user, then enter the user name, pick the S3 bucket (with an optional prefix) for their home directory, and select an IAM role that gives the user the desired access to the bucket. Then I paste the SSH public key (created with ssh-keygen), and click Add:

And now I am all set. I retrieve the server endpoint from the console and issue my first sftp command:

The files are visible in the jeff/ section of the S3 bucket immediately:

I could attach a Lambda function to the bucket and do any sort of post-upload processing I want. For example, I could run all uploaded images through Amazon Rekognition and route each one to a different destination depending on the objects that it contains, or run audio files through Amazon Transcribe to perform a speech-to-text operation.
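The routing step in that example is just a lookup from detected labels to a destination prefix. Here is a small sketch of that logic on its own, with the Rekognition call omitted; the label names and prefixes in the routing table are purely illustrative, not a fixed schema.

```python
def route_by_labels(labels, prefix_map, default_prefix="review/"):
    """Given the set of label names returned by an image-analysis
    service (for example, Rekognition's label detection), return the
    first destination prefix whose trigger label is present, falling
    back to a default prefix for unrecognized content."""
    for label, prefix in prefix_map.items():
        if label in labels:
            return prefix
    return default_prefix

# Hypothetical routing table: detected label -> destination S3 prefix.
ROUTES = {"Invoice": "invoices/", "Receipt": "receipts/"}
```

Keeping the routing table as plain data makes it easy to adjust destinations without touching the Lambda code itself.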

Full Control via IAM

In order to get right to the point in my walk-through, my IAM role uses this very simple policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets",
        "s3:GetBucketLocation"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::data-transfer-inbound"
    },
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::data-transfer-inbound/jeff/*"
    }
  ]
}

Update 12/19/2018: The role’s trust relationship looks like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "transfer.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

If I plan to host lots of users on the same server, I can make use of a scope-down policy that looks like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListHomeDir",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::${transfer:HomeBucket}"
    },
    {
      "Sid": "AWSTransferRequirements",
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets",
        "s3:GetBucketLocation"
      ],
      "Resource": "*"
    },
    {
      "Sid": "HomeDirObjectAccess",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObjectVersion",
        "s3:DeleteObject",
        "s3:GetObjectVersion"
      ],
      "Resource": "arn:aws:s3:::${transfer:HomeDirectory}*"
    }
  ]
}

The ${transfer:HomeBucket} and ${transfer:HomeDirectory} policy variables will be set to appropriate values for each user when the scope-down policy is evaluated; this allows me to use the same policy, suitably customized, for each user.
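To see what the substitution produces for a particular user, you can resolve the variables locally. This sketch is only for inspection (the service performs the real substitution itself), and it assumes the home-directory value resolves to a bucket/prefix string without a leading slash:

```python
import json

def effective_policy(policy_text, home_bucket, home_directory):
    """Substitute a user's home bucket and directory into the
    scope-down policy text and parse the result, so the effective
    per-user policy can be inspected or unit-tested locally."""
    resolved = (policy_text
                .replace("${transfer:HomeBucket}", home_bucket)
                .replace("${transfer:HomeDirectory}", home_directory))
    return json.loads(resolved)
```

Running the full scope-down policy through this function for each user is a quick sanity check that every user's access really is confined to their own prefix.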

Things to Know

Here are a few things to keep in mind regarding AWS Transfer for SFTP:

Programmatic Access – A full set of APIs and CLI commands is also available. For example, I can create a server with one simple command:

$ aws transfer create-server --identity-provider-type SERVICE_MANAGED
-------------------------------------
|           CreateServer            |
+-----------+-----------------------+
|  ServerId |  s-b445dcff7f164c73a  |
+-----------+-----------------------+

There are many other commands, including list-servers, start-server, stop-server, create-user, and list-users.

CloudWatch – Each server can optionally send detailed access logs to Amazon CloudWatch. There’s a separate log stream for each SFTP session and one more for authentication errors:

Alternate Identity Providers – I showed you the built-in user management above. You can also access an alternate identity provider that taps into your existing LDAP or Active Directory.
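When you use an alternate identity provider, the service calls your API Gateway endpoint and expects a JSON document back on successful authentication. Here is a rough sketch of such a responder; the directory lookup is a stand-in for a real LDAP or Active Directory query, and the role ARN and account number are hypothetical:

```python
def authorize_user(username, password=None):
    """Sketch of the response a custom identity provider (behind API
    Gateway) returns to AWS Transfer for SFTP. On success it supplies
    at least an IAM role for the user; an empty response denies
    access. The directory below is a hypothetical stand-in for a
    real LDAP / Active Directory lookup."""
    directory = {
        "jeff": {
            "role": "arn:aws:iam::123456789012:role/sftp-user-role",
            "home": "/data-transfer-inbound/jeff",
        },
    }
    user = directory.get(username)
    if user is None:
        return {}  # unknown user: deny access
    return {
        "Role": user["role"],
        "HomeDirectory": user["home"],
    }
```

The point of this indirection is that user management stays wherever it already lives; the service only needs a role (and optionally a home directory) back for each successful login.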

Pricing – You pay a per-hour fee for each running server and a per-GB data upload and download fee.

Available Now

AWS Transfer for SFTP is available today in US East (N. Virginia), US East (Ohio), US West (Oregon), US West (N. California), Canada (Central), Europe (Ireland), Europe (Paris), Europe (Frankfurt), Europe (London), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), and Asia Pacific (Seoul) Regions.

— Jeff;