ZS3 - Amazon S3 and CloudFront from Common Lisp

ZS3 is a Common Lisp library for working with Amazon's Simple Storage Service (S3) and CloudFront content delivery service. It is available under a BSD-style license; see LICENSE for details. Development of ZS3 is hosted on GitHub. The latest version is 1.3.3, released on September 29th, 2019.

Download shortcut: http://www.xach.com/lisp/zs3.tgz

Installation

ZS3 depends on a number of other libraries, including Drakma, PURI, and FLEXI-STREAMS.

The easiest way to install ZS3 and all its required libraries is with Quicklisp. After Quicklisp is installed, the following will fetch and load ZS3:

(ql:quickload "zs3")

For more information about incorporating ASDF-using libraries like ZS3 into your own projects, see this short tutorial.

Overview

ZS3 provides an interface to two separate, but related, Amazon services: S3 and CloudFront.

Using Amazon S3 involves working with two kinds of resources: buckets and objects.

Buckets are containers, and are used to organize and manage objects. Buckets are identified by their name, which must be unique across all of S3. A user may have up to 100 buckets in S3.

Objects are stored within buckets. Objects consist of arbitrary binary data, from 1 byte to 5 gigabytes. They are identified by a key, which must be unique within a bucket. Objects can also have associated S3-specific metadata and HTTP metadata.

For full documentation of the Amazon S3 system, see the Amazon S3 Documentation. ZS3 uses the REST interface for all its operations.

Using Amazon CloudFront involves working with distributions. Distributions are objects that associate an S3 bucket with a primary cloudfront.net hostname and zero or more arbitrary CNAMEs. S3 objects requested through a CloudFront distribution are distributed to and cached in multiple locations throughout the world, reducing latency and improving throughput compared to direct S3 requests.

For full documentation of the Amazon CloudFront system, see the Amazon CloudFront Documentation.

For help with using ZS3, please see the zs3-devel mailing list.

Example Use

* (asdf:oos 'asdf:load-op '#:zs3)
=> lots of stuff

* (defpackage #:demo (:use #:cl #:zs3))
=> #<PACKAGE "DEMO">

* (in-package #:demo)
=> #<PACKAGE "DEMO">

* (setf *credentials* (file-credentials "~/.aws"))
=> #<FILE-CREDENTIALS {100482AF91}>

* (bucket-exists-p "zs3-demo")
=> NIL

* (create-bucket "zs3-demo")
=> #<RESPONSE 200 "OK" {10040D3281}>

* (http-code *)
=> 200

* (put-vector (octet-vector 8 6 7 5 3 0 9) "zs3-demo" "jenny")
=> #<RESPONSE 200 "OK" {10033EC2E1}>

* (create-bucket "zs3 demo")
Error: InvalidBucketName: The specified bucket is not valid.
For more information, see:
  http://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html
[Condition of type INVALID-BUCKET-NAME]

* (copy-object :from-bucket "zs3-demo" :from-key "jenny" :to-key "j2")
=> #<RESPONSE 200 "OK" {10040E3EA1}>

* (get-vector "zs3-demo" "j2")
=> #(8 6 7 5 3 0 9),
   ((:X-AMZ-ID-2 . "Huwo...")
    (:X-AMZ-REQUEST-ID . "304...")
    (:DATE . "Sat, 27 Sep 2008 15:01:03 GMT")
    (:LAST-MODIFIED . "Sat, 27 Sep 2008 14:57:31 GMT")
    (:ETAG . "\"f9e71fe2c41a10c0a78218e98a025520\"")
    (:CONTENT-TYPE . "binary/octet-stream")
    (:CONTENT-LENGTH . "7")
    (:CONNECTION . "close")
    (:SERVER . "AmazonS3"))

* (put-string "Nämen" "zs3-demo" "bork")
=> #<RESPONSE 200 "OK" {10047A3791}>

* (values (get-vector "zs3-demo" "bork"))
=> #(78 195 164 109 101 110)

* (values (get-file "zs3-demo" "bork" "bork.txt"))
=> #P"bork.txt"

* (setf *distribution* (create-distribution "zs3-demo" :cnames "cdn.wigflip.com"))
=> #<DISTRIBUTION X2S94L4KLZK5G0 for "zs3-demo.s3.amazonaws.com" [InProgress]>

* (progn (sleep 180) (refresh *distribution*))
=> #<DISTRIBUTION X2S94L4KLZK5G0 for "zs3-demo.s3.amazonaws.com" [Deployed]>

* (domain-name *distribution*)
=> "x214g1hzpjm1zp.cloudfront.net"

* (cnames *distribution*)
=> ("cdn.wigflip.com")

* (put-string "Hello, world" "zs3-demo" "cloudfront" :public t)
=> #<RESPONSE 200 "OK" {10042689F1}>

* (drakma:http-request "http://x214g1hzpjm1zp.cloudfront.net/cloudfront")
"Hello, world"
200
((:X-AMZ-ID-2 . "NMc3IY3NzHGGEvV/KlzPgZMyDfPVT+ITtHo47Alqg00MboTxSX2f5XJzVTErfuHr")
 (:X-AMZ-REQUEST-ID . "52B050DC18638A00")
 (:DATE . "Thu, 05 Mar 2009 16:24:25 GMT")
 (:LAST-MODIFIED . "Thu, 05 Mar 2009 16:24:10 GMT")
 (:ETAG . "\"bc6e6f16b8a077ef5fbc8d59d0b931b9\"")
 (:CONTENT-TYPE . "text/plain")
 (:CONTENT-LENGTH . "12")
 (:SERVER . "AmazonS3")
 (:X-CACHE . "Miss from cloudfront")
 (:VIA . "1.0 ad78cb56da368c171e069e4444b2cbf6.cloudfront.net:11180")
 (:CONNECTION . "close"))
#<PURI:URI http://x214g1hzpjm1zp.cloudfront.net/cloudfront>
#<FLEXI-STREAMS:FLEXI-IO-STREAM {1002CE0781}>
T
"OK"

* (drakma:http-request "http://x214g1hzpjm1zp.cloudfront.net/cloudfront")
"Hello, world"
200
((:X-AMZ-ID-2 . "NMc3IY3NzHGGEvV/KlzPgZMyDfPVT+ITtHo47Alqg00MboTxSX2f5XJzVTErfuHr")
 (:X-AMZ-REQUEST-ID . "52B050DC18638A00")
 (:DATE . "Thu, 05 Mar 2009 16:24:25 GMT")
 (:LAST-MODIFIED . "Thu, 05 Mar 2009 16:24:10 GMT")
 (:ETAG . "\"bc6e6f16b8a077ef5fbc8d59d0b931b9\"")
 (:CONTENT-TYPE . "text/plain")
 (:CONTENT-LENGTH . "12")
 (:SERVER . "AmazonS3")
 (:AGE . "311")
 (:X-CACHE . "Hit from cloudfront")
 (:VIA . "1.0 0d78cb56da368c171e069e4444b2cbf6.cloudfront.net:11180")
 (:CONNECTION . "close"))
#<PURI:URI http://x214g1hzpjm1zp.cloudfront.net/cloudfront>
#<FLEXI-STREAMS:FLEXI-IO-STREAM {100360A781}>
T
"OK"

Limitations

ZS3 supports many of the features of Amazon's S3 REST interface. Some features are unsupported or incompletely supported:

No direct support for Amazon DevPay

No support for checking the 100-Continue response to avoid unnecessarily sending large requests; this will hopefully be fixed with a future Drakma release

If a character in a key is encoded with multiple bytes in UTF-8, a bad interaction between PURI and Amazon's web servers will trigger a validation error.

The ZS3 Dictionary

The following sections document the symbols that are exported from ZS3.

Credentials

[Special variable] *credentials*
*CREDENTIALS* is the source of the Amazon access key and secret key for authenticated requests. Any object that has methods for the ACCESS-KEY and SECRET-KEY generic functions may be used. If *CREDENTIALS* is a cons, it is treated as a list: the first element of the list is taken as the access key and the second element as the secret key.

The default value of *CREDENTIALS* is NIL, with which any authenticated request will signal an error. You must set *CREDENTIALS* to something that follows the credentials generic function protocol to use ZS3.

All ZS3 functions that involve authenticated requests take an optional :CREDENTIALS keyword parameter. This parameter is bound to *CREDENTIALS* for the duration of the function call.

The following illustrates how to implement a credentials object that gets the access and secret keys from environment variables:

(defclass environment-credentials () ())

(defmethod access-key ((credentials environment-credentials))
  (declare (ignore credentials))
  (getenv "AWS_ACCESS_KEY"))

(defmethod secret-key ((credentials environment-credentials))
  (declare (ignore credentials))
  (getenv "AWS_SECRET_KEY"))

(setf *credentials* (make-instance 'environment-credentials))

[Generic function] access-key credentials => access-key-string
Returns the access key for credentials.

[Generic function] security-token credentials => security-token-string
Returns the security token string for credentials, or NIL if there is no associated security token.

[Generic function] secret-key credentials => secret-key-string
Returns the secret key for credentials.

[Function] file-credentials pathname => credentials
Loads credentials on demand from pathname. The file named by pathname should be a text file with the access key on the first line and the secret key on the second line. It can be used like so:

(setf *credentials* (file-credentials "/etc/s3.conf"))

Responses

Some operations return a response as an additional value. All response objects can be interrogated to obtain the HTTP code, headers and phrase.

A very small proportion of requests will fail with an error internal to the AWS server. In these circumstances an exponential backoff policy operates; if too many failures are encountered, ZS3 signals an INTERNAL-ERROR, which can be interrogated to obtain the response object, and through that the HTTP response code and headers:

* e
#<ZS3:INTERNAL-ERROR @ #x1000296bc92>

* (setf r (zs3:request-error-response e))
#<ZS3::AMAZON-ERROR "InternalError">

* (zs3:http-code r)
500

* (zs3:http-headers r)
((:X-AMZ-REQUEST-ID . "3E20E3BAC24AB9AA")
 (:X-AMZ-ID-2 . "80sxu4PDKtx1BWLOcSrUVWD90mMMVaMx6y9c+sz5VBGa2eAES2YlNaefn5kqRsfvrbaF+7QGNXA=")
 (:CONTENT-TYPE . "application/xml")
 (:TRANSFER-ENCODING . "chunked")
 (:DATE . "Fri, 30 Sep 2016 10:10:11 GMT")
 (:CONNECTION . "close")
 (:SERVER . "AmazonS3"))

* (zs3:http-phrase r)
"Internal Server Error"

[Special variable] *backoff*
Used as the default value of :backoff when submitting a request. The value should be a cons of two numbers: how many times to try before giving up, and how long to wait (in ms) before trying a second time. Each subsequent attempt doubles that time. The default value is (3 . 100). If a request fails more times than permitted by *backoff*, an error is signaled. It is the application's responsibility to handle this error.
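As a sketch of how an application might combine a larger backoff with its own error handling (the bucket and key names here are hypothetical):

```lisp
;; Try up to 5 times, starting with a 200 ms wait before the second
;; attempt, and fall back to NIL if S3 keeps failing internally.
(handler-case
    (zs3:get-vector "zs3-demo" "jenny" :backoff '(5 . 200))
  (zs3:internal-error (e)
    ;; The response object gives access to the HTTP details.
    (let ((r (zs3:request-error-response e)))
      (warn "S3 internal error: ~A ~A"
            (zs3:http-code r) (zs3:http-phrase r))
      nil)))
```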

[Function] request-error-response request-error => response
Returns the response object associated with a request-error.

[Function] http-code response => code
Returns the HTTP code associated with a response object.

[Function] http-headers response => headers
Returns the HTTP headers associated with a response object.

[Function] http-phrase response => phrase
Returns the HTTP phrase associated with a response object.

Operations on Buckets

With ZS3, you can put, get, copy, and delete buckets. You can also get information about the bucket.

[Function] all-buckets &key credentials backoff => bucket-vector
Returns a vector of all bucket objects. Bucket object attributes are accessible via NAME and CREATION-DATE.

[Function] creation-date bucket-object => creation-universal-time
Returns the creation date of bucket-object, which must be a bucket object, as a universal time.

[Function] name object => name-string
Returns the string name of object, which must be a key object or bucket object.

[Function] all-keys bucket &key prefix credentials backoff => key-vector
Returns a vector of all key objects in bucket with names that start with the string prefix. If no prefix is specified, returns all keys. Keys in the vector are in alphabetical order by name. Key object attributes are accessible via NAME, SIZE, ETAG, LAST-MODIFIED, OWNER, and STORAGE-CLASS. This function is built on QUERY-BUCKET and may involve multiple requests if a bucket has more than 1000 keys.
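For example, a minimal sketch that lists objects under a prefix (the bucket name and prefix are hypothetical):

```lisp
;; Print the name and size of every object whose key starts with "logs/".
(loop for key across (zs3:all-keys "zs3-demo" :prefix "logs/")
      do (format t "~A: ~D octets~%" (zs3:name key) (zs3:size key)))
```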

[Function] bucket-exists-p bucket &key credentials backoff => boolean
Returns true if bucket exists.

[Function] create-bucket name &key access-policy public location credentials backoff => response
Creates a bucket named name. If provided, access-policy should be one of the following:

  :PRIVATE - the bucket owner is granted :FULL-CONTROL; this is the default behavior if no access policy is provided
  :PUBLIC-READ - all users, regardless of authentication, can query the bucket's contents
  :PUBLIC-READ-WRITE - all users, regardless of authentication, can query the bucket's contents and create new objects in the bucket
  :AUTHENTICATED-READ - authenticated Amazon AWS users can query the bucket

For more information about access policies, see Canned ACL in the Amazon S3 developer documentation.

If public is true, it has the same effect as providing an access-policy of :PUBLIC-READ. An error is signaled if both public and access-policy are provided.

If location is specified, the bucket will be created in a region matching the given location constraint. If no location is specified, the bucket is created in the US. Valid locations change over time, but currently include "EU", "us-west-1", "us-west-2", "eu-west-1", "eu-central-1", "ap-southeast-1", "ap-southeast-2", "ap-northeast-1", and "sa-east-1". See Regions and Endpoints in the Amazon S3 developer documentation for current information about location constraints.
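For example, a sketch of creating a publicly readable bucket in the EU region (the bucket name is hypothetical and must be unique across all of S3):

```lisp
(zs3:create-bucket "zs3-demo-eu"
                   :access-policy :public-read
                   :location "EU")
```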

[Function] delete-bucket bucket &key credentials backoff => response
Deletes bucket. Signals a BUCKET-NOT-EMPTY error if the bucket is not empty, or a NO-SUCH-BUCKET error if there is no bucket with the given name.

[Function] bucket-location bucket &key credentials backoff => location
Returns the location specified when creating a bucket, or NIL if no location was specified.

[Function] bucket-lifecycle bucket => rules-list
Returns a list of lifecycle rules for bucket. Signals a NO-SUCH-LIFECYCLE-CONFIGURATION error if the bucket has no lifecycle rules configured. Bucket lifecycle rules are used to control the automatic deletion of objects in a bucket. For more information about bucket lifecycle configuration, see Object Expiration in the Amazon S3 developer documentation.

[Function] (setf bucket-lifecycle) rules bucket => rules, response
Sets the lifecycle configuration of bucket. rules should be a designator for a list of bucket lifecycle rules. To create a bucket lifecycle rule, use LIFECYCLE-RULE. For example, to automatically delete objects with keys matching a "logs/" prefix after 30 days:

(setf (bucket-lifecycle "my-bucket")
      (lifecycle-rule :prefix "logs/" :days 30))

To delete a bucket's lifecycle configuration, use an empty list of rules, e.g.

(setf (bucket-lifecycle "my-bucket") nil)

[Function] lifecycle-rule &key action prefix days date => rule
Creates a rule object suitable for passing to (SETF BUCKET-LIFECYCLE). action should be either :expire (the default) or :transition. For :expire, matching objects are deleted. For :transition, matching objects are transitioned to the GLACIER storage class. For more information about S3-to-Glacier object transition, see Object Archival (Transition Objects to the Glacier Storage Class) in the Amazon S3 Developer's Guide. prefix is a string; all objects in a bucket with keys matching the prefix will be affected by the rule. days is the number of days after which an object will be affected. date is the date after which objects will be affected. Only one of days or date may be provided.

[Function] restore-object bucket key &key days credentials backoff => response
Initiates a restoration operation on the object identified by bucket and key. A restoration operation can take several hours to complete. The restored object is temporarily stored with the reduced redundancy storage class. The status of the operation may be monitored via OBJECT-RESTORATION-STATUS. days is the number of days for which the restored object should be available for normal retrieval before transitioning back to archival storage. Object restoration is only applicable to objects that have been transitioned to Glacier storage by the containing bucket's lifecycle configuration. For more information, see POST Object restore in the S3 documentation.

[Function] object-restoration-status bucket key &key credentials backoff => status-string
Returns a string describing the status of restoring the object identified by bucket and key. If no restoration is in progress, or the operation is not applicable, returns NIL.
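A sketch of initiating a restoration and later polling its status (the bucket and key are hypothetical):

```lisp
;; Ask S3 to restore an archived object for 7 days.
(zs3:restore-object "zs3-demo" "archive/2015.tar" :days 7)

;; Some time later; returns NIL once no restoration is in progress.
(zs3:object-restoration-status "zs3-demo" "archive/2015.tar")
```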

Querying Buckets

S3 has a flexible interface for querying a bucket for information about its contents. ZS3 supports this interface via QUERY-BUCKET , CONTINUE-BUCKET-QUERY , and related functions.

[Function] query-bucket bucket &key prefix marker max-keys delimiter credentials backoff => response
Query bucket for key information. Returns a response object that has the result of the query. Response attributes are accessible via BUCKET-NAME, PREFIX, MARKER, DELIMITER, TRUNCATEDP, KEYS, and COMMON-PREFIXES.

Amazon might return fewer key objects than actually match the query parameters, based on max-keys or the result limit of 1000 key objects. In that case, TRUNCATEDP for response is true, and CONTINUE-BUCKET-QUERY can be used with response to get successive responses for the query parameters.

When prefix is supplied, only key objects with names that start with prefix will be returned in response. When marker is supplied, only key objects with names occurring lexically after marker will be returned in response.

When max-keys is supplied, it places an inclusive upper limit on the number of key objects returned in response. Note that Amazon currently limits responses to at most 1000 key objects even if max-keys is greater than 1000.

When delimiter is supplied, key objects that have the delimiter string after prefix in their names are not returned in the KEYS attribute of the response, but are instead accumulated into the COMMON-PREFIXES attribute of the response. For example:

* (all-keys "zs3-demo")
=> #(#<KEY "a" 4> #<KEY "b/1" 4> #<KEY "b/2" 4> #<KEY "b/3" 4>
     #<KEY "c/10" 4> #<KEY "c/20" 4> #<KEY "c/30" 4>)

* (setf *response* (query-bucket "zs3-demo" :delimiter "/"))
=> #<BUCKET-LISTING "zs3-demo">

* (values (keys *response*) (common-prefixes *response*))
=> #(#<KEY "a" 4>), #("b/" "c/")

* (setf *response* (query-bucket "zs3-demo" :delimiter "/" :prefix "b/"))
=> #<BUCKET-LISTING "zs3-demo">

* (values (keys *response*) (common-prefixes *response*))
=> #(#<KEY "b/1" 4> #<KEY "b/2" 4> #<KEY "b/3" 4>), #()

For more information about bucket queries, see GET Bucket in the Amazon S3 developer documentation.

[Function] continue-bucket-query response => response
If response is a truncated response from a previous call to QUERY-BUCKET, continue-bucket-query returns the result of resuming the query at the truncation point. When there are no more results, continue-bucket-query returns NIL.
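The two functions above can be combined to walk every page of a large listing; this is essentially what ALL-KEYS does internally. A sketch, with a hypothetical bucket name:

```lisp
;; Count all keys in a bucket, up to 1000 at a time.
(loop with total = 0
      for response = (zs3:query-bucket "zs3-demo")
        then (zs3:continue-bucket-query response)
      while response
      do (incf total (length (zs3:keys response)))
      finally (return total))
```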

[Function] bucket-name response => name
Returns the name of the bucket used in the call to QUERY-BUCKET that produced response.

[Function] keys response => keys-vector
Returns the vector of key objects in response. Key object attributes are accessible via NAME, SIZE, ETAG, LAST-MODIFIED, and OWNER.

[Function] common-prefixes response => prefix-vector
Returns a vector of common prefix strings, based on the delimiter argument of the QUERY-BUCKET call that produced response.

[Function] prefix response => prefix-string
Returns the prefix given to the QUERY-BUCKET call that produced response. If present, all keys in response have prefix-string as a prefix.

[Function] marker response => marker
Returns the marker given to the QUERY-BUCKET call that produced response. If present, it lexically precedes all key names in the response.

[Function] delimiter response => delimiter
Returns the delimiter used in the QUERY-BUCKET call that produced response.

[Function] truncatedp response => boolean
Returns true if response is truncated; that is, if there is more data to retrieve for a given QUERY-BUCKET query. CONTINUE-BUCKET-QUERY may be used to fetch more data.

[Function] last-modified key-object => universal-time
Returns a universal time representing the last modified time of key-object.

[Function] etag key-object => etag-string
Returns the etag for key-object.

[Function] size key-object => size
Returns the size, in octets, of key-object.

[Function] owner key-object => owner
Returns the owner of key-object, or NIL if no owner information is available.

[Function] storage-class key-object => storage-class
Returns the storage class of key-object.

Operations on Objects

Objects are the stored binary data in S3. Every object is uniquely identified by a bucket/key pair. ZS3 has several functions for storing and fetching objects, and querying object attributes.

[Function] get-object bucket key &key output start end when-modified-since unless-modified-since when-etag-matches unless-etag-matches if-exists string-external-format credentials backoff => object
Fetch the object referenced by bucket and key. The secondary value of all successful requests is an alist of Drakma-style response HTTP headers.

If output is :VECTOR (the default), the object's octets are returned in a vector. If output is :STRING, the object's octets are converted to a string using the encoding specified by string-external-format, which defaults to :UTF-8. See External formats in the FLEXI-STREAMS documentation for supported values for the string external format. Note that, even when output is :STRING, the start and end arguments operate on the object's underlying octets, not the string representation in a particular encoding. It's possible to produce a subsequence of the object's octets that is not valid in the desired encoding.

If output is a string or pathname, the object's octets are saved to a file identified by the string or pathname. The if-exists argument is passed to WITH-OPEN-FILE to control the behavior when the output file already exists. It defaults to :SUPERSEDE.

If output is :STREAM, a stream is returned from which the object's contents may be read.

start marks the first index fetched from the object's data. end specifies the index after the last octet fetched. If start is NIL, it defaults to 0. If end is NIL, it defaults to the total length of the object. If both start and end are provided, start must be less than or equal to end.

when-modified-since and unless-modified-since are optional. If when-modified-since is provided, the result will be the normal object value if the object has been modified since the provided universal time, NIL otherwise. The logic is reversed for unless-modified-since. when-etag-matches and unless-etag-matches are optional. If when-etag-matches is provided, the result will be the normal object value if the object's etag matches the provided string, NIL otherwise. The logic is reversed for unless-etag-matches.
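As a sketch of combining these options (the bucket, key, and timestamp are hypothetical), the following fetches at most the first kilobyte of an object as a string, and only if the object has changed since the given time:

```lisp
(zs3:get-object "zs3-demo" "news.txt"
                :output :string
                :end 1024
                :when-modified-since (encode-universal-time 0 0 0 1 1 2019 0))
```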

[Function] get-vector bucket key &key start end when-modified-since unless-modified-since when-etag-matches unless-etag-matches credentials backoff => vector
get-vector is a convenience interface to GET-OBJECT. It is equivalent to calling:

(get-object bucket key :output :vector ...)

[Function] get-string bucket key &key external-format start end when-modified-since unless-modified-since when-etag-matches unless-etag-matches credentials backoff => string
get-string is a convenience interface to GET-OBJECT. It is equivalent to calling:

(get-object bucket key :output :string :string-external-format external-format ...)

[Function] get-file bucket key file &key start end when-modified-since unless-modified-since when-etag-matches unless-etag-matches credentials backoff => pathname
get-file is a convenience interface to GET-OBJECT. It is equivalent to calling:

(get-object bucket key :output file ...)

[Function] put-object object bucket key &key access-policy public metadata string-external-format cache-control content-encoding content-disposition content-type expires storage-class tagging credentials backoff => response
Stores the octets of object in the location identified by bucket and key.

If object is an octet vector, it is stored directly. If object is a string, it is converted to an octet vector using string-external-format, which defaults to :UTF-8, then stored. See External formats in the FLEXI-STREAMS documentation for supported values for the string external format. If object is a pathname, its contents are loaded into memory as an octet vector and stored.

If provided, access-policy should be one of the following:

  :PRIVATE - the object owner is granted :FULL-CONTROL; this is the default behavior if no access policy is provided
  :PUBLIC-READ - all users, regardless of authentication, can read the object
  :AUTHENTICATED-READ - authenticated Amazon AWS users can read the object

For more information about access policies, see Canned ACL in the Amazon S3 developer documentation. If public is true, it has the same effect as providing an access-policy of :PUBLIC-READ. An error is signaled if both public and access-policy are provided.

If provided, metadata should be an alist of Amazon metadata to set on the object. When the object is fetched again, the metadata will be returned in HTTP headers prefixed with "x-amz-meta-".

The cache-control, content-encoding, content-disposition, content-type, and expires values are all used to set HTTP properties of the object that are returned with subsequent GET or HEAD requests. If content-type is not set, it defaults to "binary/octet-stream". The others default to NIL. If expires is provided, it should be a universal time.

If provided, storage-class should refer to one of the standard storage classes available for S3; currently the accepted values are the strings "STANDARD" and "REDUCED_REDUNDANCY". Using other values may trigger an API error from S3. For more information about reduced redundancy storage, see Reduced Redundancy Storage in the Developer Guide.

If provided, tagging specifies the set of tags to be associated with the object. The set is given as an alist. For more information, see Object Tagging in the Developer Guide.
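A sketch of storing a string with custom HTTP headers and metadata (the bucket, key, and metadata values are hypothetical):

```lisp
(zs3:put-object "body { margin: 0 }" "zs3-demo" "site/main.css"
                :content-type "text/css"
                :cache-control "max-age=3600"
                :metadata (zs3:parameters-alist :generator "demo-script")
                :public t)
```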

[Function] put-vector vector bucket key &key start end access-policy public metadata content-disposition content-encoding content-type expires storage-class tagging credentials backoff => response
put-vector is a convenience interface to PUT-OBJECT. It is similar to calling:

(put-object vector bucket key ...)

If one of start or end is provided, they are used as bounding index designators on the vector, and only a subsequence is used.

[Function] put-string string bucket key &key start end external-format access-policy public metadata content-disposition content-encoding content-type expires storage-class tagging credentials backoff => response
put-string is a convenience interface to PUT-OBJECT. It is similar to calling:

(put-object string bucket key :string-external-format external-format ...)

If one of start or end is supplied, they are used as bounding index designators on the string, and only a substring is used.

[Function] put-file file bucket key &key start end access-policy public metadata content-disposition content-encoding content-type expires storage-class tagging credentials backoff => response
put-file is a convenience interface to PUT-OBJECT. It is almost equivalent to calling:

(put-object (pathname file) bucket key ...)

If key is T, the FILE-NAMESTRING of file is used as the key instead. If one of start or end is supplied, only a subset of the file is used. If start is not NIL, start octets at the beginning of the file are skipped. If end is not NIL, octets in the file at and after end are ignored. An error of type CL:END-OF-FILE is signaled if end is provided and the file size is less than end.
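For example, a sketch that uploads a local file under its own file name, storing it with the key "report.pdf" (the paths and names are hypothetical):

```lisp
(zs3:put-file "/tmp/report.pdf" "zs3-demo" t
              :content-type "application/pdf")
```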

[Function] put-stream stream bucket key &key start end access-policy public metadata content-disposition content-encoding content-type expires storage-class tagging credentials backoff => response
put-stream is a convenience interface to PUT-OBJECT. It has the same effect as collecting octets from stream into a vector and calling:

(put-object vector bucket key ...)

If start is not NIL, start octets starting from the current position in the stream are skipped before collecting. If end is NIL, octets are collected until the end of the stream is reached. If end is not NIL, collecting octets stops just before reaching end in the stream. An error of type CL:END-OF-FILE is signaled if the stream ends prematurely.

[Function] copy-object &key from-bucket from-key to-bucket to-key access-policy public when-etag-matches unless-etag-matches when-modified-since unless-modified-since metadata precondition-errors storage-class tagging credentials backoff => response
Copies the object identified by from-bucket and from-key to a new location identified by to-bucket and to-key. If to-bucket is NIL, from-bucket is used as the target. If to-key is NIL, from-key is used as the target. An error is signaled if both to-bucket and to-key are NIL.

access-policy and public have the same effect on the target object as in PUT-OBJECT.

The precondition arguments when-etag-matches, unless-etag-matches, when-modified-since, and unless-modified-since work the same way they do in GET-OBJECT, but with one difference: if precondition-errors is true, a PRECONDITION-FAILED error is signaled when a precondition does not hold, instead of returning NIL.

If metadata is explicitly provided, it follows the same behavior as with PUT-OBJECT. Passing NIL means that the new object has no metadata. Otherwise, the metadata is copied from the original object. If tagging is explicitly provided, it follows the same behavior as with PUT-OBJECT. Passing NIL means that the new object has no tags. Otherwise, tagging is copied from the original object.

If storage-class is provided, it should refer to one of the standard storage classes available for S3; currently the accepted values are the strings "STANDARD" and "REDUCED_REDUNDANCY". Using other values may trigger an API error from S3. For more information about reduced redundancy storage, see Reduced Redundancy Storage in the Developer Guide.
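A sketch of copying an object within one bucket while replacing its metadata (the names and values are hypothetical):

```lisp
(zs3:copy-object :from-bucket "zs3-demo"
                 :from-key "jenny"
                 :to-key "jenny-copy"
                 :metadata (zs3:parameters-alist :copied-at "2019-09-29"))
```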

[Function] delete-object bucket key &key credentials backoff => response
Deletes the object identified by bucket and key. If bucket is a valid bucket for which you have been granted delete access, S3 will always return a success response, even if key does not reference an existing object.

[Function] delete-objects bucket keys &key credentials backoff => deleted-count, errors
Deletes keys, which should be a sequence of keys, from bucket. The primary value is the number of objects deleted. The secondary value is a list of error plists; if there are no errors deleting any of the keys, the secondary value is NIL.
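A sketch of a batch delete that reports failures (the bucket and keys are hypothetical):

```lisp
(multiple-value-bind (deleted errors)
    (zs3:delete-objects "zs3-demo" '("tmp/a" "tmp/b" "tmp/c"))
  (format t "Deleted ~D object~:P.~%" deleted)
  (dolist (err errors)
    (format t "Failed: ~S~%" err)))
```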

[Function] delete-all-objects bucket &key credentials backoff => count
Deletes all objects in bucket and returns the count of objects deleted.

[Function] object-metadata bucket key &key credentials backoff => metadata-alist
Returns the metadata for the object identified by bucket and key, or NIL if there is no metadata. For example:

* (put-string "Hadjaha!" "zs3-demo" "hadjaha.txt"
              :metadata (parameters-alist :language "Swedish"))
=> #<RESPONSE 200 "OK" {1003BD2841}>

* (object-metadata "zs3-demo" "hadjaha.txt")
=> ((:LANGUAGE . "Swedish"))

[Function] set-storage-class bucket key storage-class &key credentials backoff => response
Sets the storage class of the object identified by bucket and key to storage-class. This is a convenience function that uses COPY-OBJECT to make storage class changes. The storage class of an object can be determined by querying the bucket with ALL-KEYS or QUERY-BUCKET and using STORAGE-CLASS on one of the resulting key objects.

Access Control

Each S3 resource has an associated access control list that is created automatically when the resource is created. The access control list specifies the resource owner and a list of permission grants.

Grants consist of a permission and a grantee. The permission must be one of :READ, :WRITE, :READ-ACL, :WRITE-ACL, or :FULL-CONTROL. The grantee should be a person object, an acl-group object, or an acl-email object.

ZS3 has several functions that assist in reading, modifying, and storing access control lists.

[Function] get-acl &key bucket key credentials backoff => owner, grants
Returns the owner and grant list for a resource as multiple values.

[Function] put-acl owner grants &key bucket key credentials backoff => response
Sets the owner and grant list of a resource.

[Function] grant permission &key to => grant
Returns a grant object that represents a permission (one of :READ, :WRITE, :READ-ACL, :WRITE-ACL, or :FULL-CONTROL) for the grantee to. For example:

* (grant :full-control :to (acl-email "bob@example.com"))
=> #<GRANT :FULL-CONTROL to "bob@example.com">

* (grant :read :to *all-users*)
=> #<GRANT :READ to "AllUsers">

It can be used to create or modify a grant list for use with PUT-ACL.
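A sketch of a read-modify-write cycle on a bucket ACL that adds public read access (the bucket name is hypothetical; MAKE-PUBLIC is a shortcut for this particular case):

```lisp
(multiple-value-bind (owner grants)
    (zs3:get-acl :bucket "zs3-demo")
  (zs3:put-acl owner
               (cons (zs3:grant :read :to zs3:*all-users*) grants)
               :bucket "zs3-demo"))
```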

[Function] acl-eqv a b => boolean Returns true if a and b are equivalent ACL-related objects (person, group, email, or grant).

[Special variable] *all-users* This acl-group includes all users, including unauthenticated clients.

[Special variable] *aws-users* This acl-group object includes only users that have an Amazon Web Services account.

[Special variable] *log-delivery* This acl-group object includes the S3 system user that creates logfile objects. See also ENABLE-LOGGING-TO .

[Function] acl-email email-address => acl-email Returns an acl-email object, which can be used as a grantee for GRANT .

[Function] acl-person id &optional display-name => acl-person Returns an acl-person object for use as a resource owner (for PUT-ACL ) or as a grantee (for GRANT ). id must be a string representing the person's Amazon AWS canonical ID; for information about getting the canonical ID, see Managing Access with ACLs in the Amazon S3 developer documentation. If display-name is provided, it is used only for printing the object in Lisp; it is ignored when passed to S3.

[Function] me &key credentials backoff => acl-person Returns the acl-person object associated with the current credentials. Obtaining this information requires an S3 request, but the result is always the same for a given set of credentials and is cached.

[Function] make-public &key bucket key credentials backoff => response Makes a resource publicly accessible, i.e. readable by the *ALL-USERS* group.

[Function] make-private &key bucket key credentials backoff => response Removes public access to a resource, i.e. removes all access grants for the *ALL-USERS* group.

Access Logging

S3 offers support for logging information about client requests. Logfile objects are delivered by a system user in the *LOG-DELIVERY* group to a bucket of your choosing. For more information about S3 access logging and the logfile format, see Server Access Logging in the Amazon S3 developer documentation.

[Function] enable-logging-to bucket &key credentials backoff => response Adds the necessary permission grants to bucket to allow S3 to write logfile objects into it.

[Function] disable-logging-to bucket &key credentials backoff => response Changes the access control list of bucket to remove all grants for the *LOG-DELIVERY* group.

[Function] enable-logging bucket target-bucket target-prefix &key target-grants credentials backoff => response Enables logging of all requests involving bucket . Logfile objects are created in target-bucket and each logfile's key starts with target-prefix . When a new logfile is created, its list of access control grants is extended with target-grants , if any. If target-bucket does not have the necessary grants to allow logging, the grants are implicitly added by calling ENABLE-LOGGING-TO .
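A minimal sketch of turning on logging, with hypothetical source and target bucket names:

```lisp
;; Log all requests against "zs3-demo" into "zs3-logs", with each logfile
;; key prefixed by "logs/". ENABLE-LOGGING adds the needed grants to the
;; target bucket automatically if they are missing.
(enable-logging "zs3-demo" "zs3-logs" "logs/")

;; Later, inspect the current setup; returns the target bucket, prefix,
;; and grants as multiple values.
(logging-setup "zs3-demo")
```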

[Function] disable-logging bucket &key credentials backoff => response Disables logging for bucket .

[Function] logging-setup bucket &key credentials backoff => target-bucket , target-prefix , target-grants If logging is enabled for bucket , returns the target bucket, target prefix, and target grants as multiple values.

Object Tagging

In S3, a set of tags can be associated with each key and bucket. Tagging offers a way to categorize objects that is orthogonal to key prefixes. Tags resemble object metadata but, unlike metadata, can be used in access control, lifecycle rules, and metrics. For more information, see the Object Tagging section of the S3 Developer Guide.

[Function] get-tagging &key bucket key credentials backoff => tag-set Returns the object's current tag set as an alist.

[Function] put-tagging tag-set &key bucket key credentials backoff => response Sets the object's tagging resource to the given set of tags. The tags are given as an alist.
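For example, tagging an object and reading the tags back might look like this (the bucket, key, and tag names are hypothetical):

```lisp
;; Replace the object's tag set, then fetch it again as an alist.
(put-tagging '(("project" . "website")
               ("environment" . "production"))
             :bucket "zs3-demo"
             :key "hadjaha.txt")

(get-tagging :bucket "zs3-demo" :key "hadjaha.txt")
```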

[Function] delete-tagging &key bucket key credentials backoff => response Deletes the tagging resource associated with the object.

Miscellaneous Operations

[Special variable] *use-ssl* When true, requests to S3 are sent via HTTPS. The default is NIL.

[Special variable] *use-keep-alive* When true, HTTP keep-alives are used to reuse a single network connection for multiple requests.

[Macro] with-keep-alive &body body => | Evaluate body in a context where *USE-KEEP-ALIVE* is true.
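For example, a batch of requests can reuse one connection. The bucket and keys here are hypothetical:

```lisp
;; Perform several metadata queries over a single kept-alive connection.
(with-keep-alive
  (dolist (key '("a.txt" "b.txt" "c.txt"))
    (object-metadata "zs3-demo" key)))
```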

[Function] make-post-policy &key expires conditions credentials => policy , signature Returns an encoded HTML POST form policy and its signature as multiple values. The policy can be used to conditionally allow any user to put objects into S3. expires must be a universal time after which the policy is no longer accepted. conditions must be a list of conditions that the posted form fields must satisfy. Each condition is a list of a condition keyword, a form field name, and the form field value. For example, the following are all valid conditions: (:starts-with "key" "uploads/")

(:eq "bucket" "user-uploads")

(:eq "acl" "public-read")

(:range "content-length-range" 1 10000) These conditions are converted into a post policy description, base64-encoded, and returned as policy . The signature is returned as signature . These values can then be embedded in an HTML form and used to allow direct browser uploads. For example, if policy is "YSBwYXRlbnRseSBmYWtlIHBvbGljeQ==" and the policy signature is "ZmFrZSBzaWduYXR1cmU=", you could construct a form like this: <form action="http://user-uploads.s3.amazonaws.com/" method="post" enctype="multipart/form-data">

<input type="text" name="key" value="uploads/fun.jpg">

<input type="hidden" name="acl" value="public-read">

<input type="hidden" name="AWSAccessKeyId" value="8675309JGT9876430310">

<input type="hidden" name="Policy" value="YSBwYXRlbnRseSBmYWtlIHBvbGljeQ==">

<input type="hidden" name="Signature" value="ZmFrZSBzaWduYXR1cmU=">

<input type="file" name="file">

<input type="submit" value="Submit">

</form>

For full, detailed documentation of browser-based POST uploads and policy documents, see Browser-Based Uploads Using POST in the Amazon S3 developer documentation.
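The conditions shown above could be turned into a policy and signature with a call like the following sketch. The one-hour expiry is an arbitrary choice:

```lisp
;; Build a POST policy allowing uploads under "uploads/" into the
;; "user-uploads" bucket for the next hour.
(multiple-value-bind (policy signature)
    (make-post-policy
     :expires (now+ 3600)
     :conditions '((:starts-with "key" "uploads/")
                   (:eq "bucket" "user-uploads")
                   (:eq "acl" "public-read")
                   (:range "content-length-range" 1 10000)))
  ;; POLICY and SIGNATURE are the strings to embed in the form's
  ;; "Policy" and "Signature" fields.
  (list policy signature))
```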

[Function] head &key bucket key parameters credentials backoff => headers-alist , status-code , phrase Submits an HTTP HEAD request for the resource identified by bucket and optionally key . Returns the Drakma headers, HTTP status code, and HTTP phrase as multiple values. When parameters is supplied, it should be an alist of keys and values to pass as request parameters. For example: * (head :bucket "zs3-demo" :parameters (parameters-alist :max-keys 0)) => ((:X-AMZ-ID-2 . "...") (:X-AMZ-REQUEST-ID . "...") (:DATE . "Sat, 27 Sep 2008 19:00:35 GMT") (:CONTENT-TYPE . "application/xml") (:TRANSFER-ENCODING . "chunked") (:SERVER . "AmazonS3") (:CONNECTION . "close")), 200, "OK"

[Function] authorized-url &key bucket key vhost expires ssl sub-resource credentials => url Creates a URL that allows temporary access to a resource regardless of its ACL. If neither bucket nor key is specified, the top-level bucket listing is accessible. If key is not specified, listing the keys of bucket is accessible. If both bucket and key are specified, the object specified by bucket and key is accessible. expires is required, and should be the integer universal time after which the URL is no longer valid. vhost controls the construction of the URL. If vhost is NIL, the constructed URL refers to the bucket, if present, as part of the path. If vhost is :AMAZON , the bucket name is used as a prefix to the Amazon hostname. If vhost is :FULL , the bucket name becomes the full hostname of the URL. For example: * (authorized-url :bucket "foo" :key "bar" :vhost nil) => "http://s3.amazonaws.com/foo/bar?..." * (authorized-url :bucket "foo" :key "bar" :vhost :amazon) => "http://foo.s3.amazonaws.com/bar?..." * (authorized-url :bucket "foo.example.com" :key "bar" :vhost :full) => "http://foo.example.com/bar?..." If ssl is true, the URL has "https" as the scheme; otherwise it has "http". If sub-resource is specified, it is used as part of the query string to access a specific sub-resource. Example Amazon sub-resources include "acl" for access to the ACL, "location" for location information, and "logging" for logging information. For more information about the various sub-resources, see the Amazon S3 developer documentation.

[Function] resource-url &key bucket key vhost ssl sub-resource => url Returns a URL that can be used to reference a resource. See AUTHORIZED-URL for more information.

Utility Functions

[Function] octet-vector &rest octets => octet-vector Returns a vector of type (simple-array (unsigned-byte 8) (*)) initialized with octets .

[Function] now+ delta => universal-time Returns a universal time that represents the current time incremented by delta seconds. It's useful for passing as the :EXPIRES parameter to functions like PUT-OBJECT and AUTHORIZED-URL .

[Function] now- delta => universal-time Like NOW+ , but decrements the current time instead of incrementing it.
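For example, NOW+ pairs naturally with AUTHORIZED-URL (the bucket and key here are hypothetical):

```lisp
;; A signed URL that stops working 15 minutes from now.
(authorized-url :bucket "zs3-demo"
                :key "hadjaha.txt"
                :vhost :amazon
                :expires (now+ (* 15 60)))
```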

[Function] file-etag pathname => etag Returns the etag of pathname . This can be useful for the conditional arguments :WHEN-ETAG-MATCHES and :UNLESS-ETAG-MATCHES in GET-OBJECT and COPY-OBJECT .
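As a sketch, a transfer can be skipped when a local copy is already current. The bucket, key, and file name are hypothetical, and GET-OBJECT is documented elsewhere in this manual:

```lisp
;; Only fetch the object if its etag differs from the local file's etag.
(get-object "zs3-demo" "hadjaha.txt"
            :unless-etag-matches (file-etag #p"/tmp/hadjaha.txt"))
```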

[Function] parameters-alist &rest parameters &key &allow-other-keys => alist Returns an alist based on all keyword arguments passed to the function. Keywords are converted to their lowercase symbol name and values are converted to strings. For example: * (parameters-alist :name "Bob" :age 21) => (("name" . "Bob") ("age" . "21")) This can be used to construct Amazon metadata alists for PUT-OBJECT and COPY-OBJECT , or request parameters in HEAD .

[Function] clear-redirects => | Clears ZS3's internal cache of redirections. Most ZS3 requests are submitted against the Amazon S3 endpoint "s3.amazonaws.com". Some requests, however, are permanently redirected by S3 to new endpoints. ZS3 maintains an internal cache of permanent redirects, but it's possible for that cache to get out of sync if external processes alter the bucket structure. For example, if the bucket "eu.zs3" is created with a EU location constraint, S3 will respond to requests to that bucket with a permanent redirect to "eu.zs3.s3.amazonaws.com", and ZS3 will cache that redirect information. But if the bucket is deleted and recreated by a third party, the redirect might no longer be necessary.

CloudFront

CloudFront functions allow the creation and manipulation of distributions. In ZS3, distributions are represented by objects that reflect the state of a distribution at some point in time. It's possible for the distribution to change behind the scenes without notice, e.g. when a distribution's status is updated from "InProgress" to "Deployed".

The functions ENABLE , DISABLE , ENSURE-CNAME , REMOVE-CNAME , and SET-COMMENT are designed so that regardless of the state of the distribution provided, after the function completes, the new state of the distribution will reflect the desired update. The functions STATUS , CNAMES , and ENABLEDP do not automatically refresh the object and therefore might reflect outdated information. To ensure the object has the most recent information, use REFRESH . For example, to fetch the current, live status, use (status (refresh distribution)) .
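The refresh pattern above can be sketched as follows. The bucket name, CNAME, comment, and 30-second polling interval are all hypothetical choices:

```lisp
;; Create a distribution for the "zs3-demo" bucket, poll until CloudFront
;; reports it fully deployed, then return its cloudfront.net hostname.
(let ((dist (create-distribution "zs3-demo"
                                 :cnames "cdn.example.com"
                                 :comment "Static assets")))
  (loop until (string= (status (refresh dist)) "Deployed")
        do (sleep 30))
  (domain-name dist))
```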

[Function] all-distributions => distributions Returns a list of all distributions.

[Function] create-distribution bucket-name &key cnames enabled comment => distribution Creates and returns a new distribution object that will cache objects from the bucket named by bucket-name . If cnames is provided, it is taken as a designator for a list of additional domain names that can be used to access the distribution. If enabled is NIL, the distribution is initially created in a disabled state. The default value is T. If comment is provided, it becomes part of the newly created distribution.

[Function] delete-distribution distribution => | Deletes distribution . Distributions must be disabled before deletion; see DISABLE .

[Function] refresh distribution => distribution Queries Amazon for the latest information regarding distribution and destructively modifies the instance with the new information. Returns its argument.

[Function] enable distribution => | Enables distribution .

[Function] disable distribution => | Disables distribution .

[Function] ensure-cname distribution cname => | Adds cname to the CNAMEs of distribution , if necessary.

[Function] remove-cname distribution cname => | Removes cname from the CNAMEs of distribution .

[Function] set-comment distribution comment => | Sets the comment of distribution to comment .

[Function] distributions-for-bucket bucket-name => distributions Returns a list of distributions that have bucket-name as the origin bucket.

[Condition] distribution-error All errors signaled as a result of a CloudFront request error are subtypes of distribution-error .

[Condition] distribution-not-disabled Distributions must be fully disabled before they are deleted. If they have not been disabled, or the status of the distribution is still "InProgress", distribution-not-disabled is signaled.

[Condition] cname-already-exists A CNAME may only appear on one distribution. If you attempt to add a CNAME to a distribution that is already present on some other distribution, cname-already-exists is signaled.

[Condition] too-many-distributions If creating a new distribution via CREATE-DISTRIBUTION would exceed the account limit of total distributions, too-many-distributions is signaled.

[Function] status distribution => status Returns a string describing the status of distribution . The status is either "InProgress", meaning that the distribution's configuration has not fully propagated through the CloudFront system, or "Deployed".

[Function] origin-bucket distribution => origin-bucket Returns the origin bucket for distribution . It differs from a normal ZS3 bucket name in that it has ".s3.amazonaws.com" as a suffix.

[Function] domain-name distribution => domain-name Returns the domain name through which CloudFront-enabled access to a resource may be made.

[Function] cnames distribution => cnames Returns a list of CNAMEs associated with distribution .

[Function] enabledp distribution => boolean Returns true if distribution is enabled, NIL otherwise.

[Function] invalidate-paths distribution paths => invalidation Initiates the invalidation of resources identified by paths in distribution . paths should consist of key names that correspond to objects in the distribution's S3 bucket. The invalidation object reports on the status of the invalidation request. It can be queried with STATUS and refreshed with REFRESH . * (invalidate-paths distribution '("/css/site.css" "/js/site.js")) #<INVALIDATION "I1HJC711OFAVKO" [InProgress]> * (progn (sleep 300) (refresh *)) #<INVALIDATION "I1HJC711OFAVKO" [Completed]>


Acknowledgements

Several people on freenode #lisp pointed out typos and glitches in this documentation. Special thanks to Bart "_3b" Botta for providing a detailed documentation review that pointed out glitches, omissions, and overly confusing passages.

James Wright corrected a problem with computing the string to sign and URL encoding.

Feedback

If you have any questions or comments about ZS3, please email me, Zach Beane

For ZS3 announcements and development discussion, please see the zs3-devel mailing list.

2016-06-17

Copyright © 2008-2016 Zachary Beane, All Rights Reserved