Way before the first commit, we settled on some principles that would drive the development of Publish. One key principle is that Publish would never become a decision maker. This meant that all business logic should be handled outside of the core Publish codebase.

Handling business logic with API

DADI API makes adding logic to an application very simple. By adding hooks to an installed API, it is possible to intercept and manipulate data at key stages in the lifecycle of a request.

For example, let’s say you take payments on your website, storing data about successful orders in API. If you want Publish to display up-to-date tracking information for those orders, you can configure a hook that requests the data from your shipping API. This approach keeps the logic out of Publish and the codebase as lean as possible.
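As a rough sketch of the idea, a hook module along these lines could decorate order documents with tracking data as they are read. The hook signature follows DADI API's convention of exporting a function that receives and returns the document payload; the field names and the shipping lookup (`fetchTrackingStub`) are hypothetical stand-ins for a real HTTP call to your shipping provider.

```javascript
// Stand-in for a request to your shipping provider's API; in practice this
// would be an HTTP call using the order's tracking number.
function fetchTrackingStub (trackingNumber) {
  return { number: trackingNumber, status: 'in transit' }
}

// DADI API hooks are modules exporting a function that receives the
// document(s), the hook type and any hook data, and returns the
// (possibly modified) document(s).
function ordersAfterGetHook (obj, type, data) {
  const results = obj.results || []

  results.forEach(doc => {
    // Attach up-to-date tracking info to each order that has a tracking number.
    if (doc.trackingNumber) {
      doc.tracking = fetchTrackingStub(doc.trackingNumber)
    }
  })

  return obj
}

module.exports = ordersAfterGetHook
```

Because the hook runs inside API, Publish simply displays whatever comes back from the collection endpoint, with no knowledge of the shipping provider.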

Handling images in Publish

In the first iteration of Publish an uploaded image would be passed from the webapp to the Publish backend, which would then handle uploading it to an Amazon S3 bucket. This required Publish to have asset handling built into the codebase, effectively going against our key principle of keeping logic out of the core.

Using API to handle images

The first step in removing this logic from Publish was to have API handle images, using the following flow:

1. User uploads image to Publish webapp
2. Publish webapp passes image to Publish backend
3. Publish backend uploads image to API
4. API uploads image to S3 bucket

That’s a lot of processing for a relatively large payload, so we implemented signed URLs in API to allow the Publish webapp to post directly to API using a pre-authenticated URL.

Adding the following to your API configuration allowed you to store images on the API instance, and have Publish load directly from that:

"media": {
  "enabled": true,
  "storage": "disk",
  "basePath": "workspace/media",
  "pathFormat": "date"
}

This worked great for small projects, but what if you want to store your assets separately from the API instance so they can be served by DADI CDN? To enable storing assets remotely we added a storage provider to API to handle uploading to an S3 bucket. This can be configured in API like this:

"media": {
  "enabled": true,
  "defaultBucket": "mediaStore",
  "tokenSecret": "your-secret-here", // for signed URLs
  "tokenExpiresIn": "1h",
  "storage": "s3",
  "basePath": "media",
  "pathFormat": "date",
  "s3": {
    "accessKey": "******",
    "secretKey": "*******",
    "bucketName": "bucket-rogers",
    "region": "eu-west-2"
  }
}

This means that assets can be pushed directly to a remote location that DADI CDN already supports retrieving from. This improves performance, but Publish is still loading the assets via the API box.

The solution? Allow Publish to retrieve assets from DADI CDN.

Configure Publish to use CDN

This week we released an updated public beta of Publish (version 1.0.4) which contains support for DADI CDN, and it couldn’t be simpler.

To enable Publish to read assets from a remote S3 bucket via your DADI CDN instance, simply add CDN’s public URL to Publish’s configuration file:

"cdn": {
  "publicUrl": "https://cdn.somedomain.tech"
}

This allows Publish to load images from the same S3 bucket that API uploads to, removing a step (and improving the performance) in the process of retrieving those assets for display in the Publish interface.
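The resolution logic this enables can be sketched as a small helper: if a CDN public URL is configured, build asset URLs against it, otherwise fall back to loading through the API instance. The `apiUrl` field, the `/media/` path and the function itself are hypothetical, shown only to illustrate the routing decision.

```javascript
// Resolve the display URL for an asset, preferring the configured CDN.
function resolveAssetUrl (config, assetPath) {
  // Normalise leading slashes so joining never produces "//".
  const normalisedPath = assetPath.replace(/^\/+/, '')

  if (config.cdn && config.cdn.publicUrl) {
    // CDN configured: load the asset directly from the CDN's public URL.
    return `${config.cdn.publicUrl}/${normalisedPath}`
  }

  // No CDN: fall back to loading the asset via the API instance.
  return `${config.apiUrl}/media/${normalisedPath}`
}
```

With the CDN branch taken, the API box drops out of the read path entirely.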

Upgrading your Publish installation

You can find the latest beta release on npm or GitHub.

Need help or want to chat?

If you want to know more about Publish, find me on Twitter or join our Discord channel for discussions and updates from the core engineering team.