

Kubernetes lets us orchestrate containers, but how do you track your container images? That's where Quay comes in. It enables you to keep a handle on not just your images but the configuration details you need to get a complete application up and running. Now, Red Hat is releasing Quay 3.1 to enable developers to mirror, store, build, and deploy their images securely across diverse enterprise environments and to leverage several new backend technologies.

This follows up on May's Quay 3.0 release. That version brought support for multiple architectures, Windows containers, and a Red Hat Enterprise Linux (RHEL)-based image to this container image registry.

Quay's newest feature, which is now in beta, is repository mirroring. This complements its existing geographic replication feature and can be used with it.

The difference? Quay geo-replication is designed for a single, shared global registry and mirrors the entire storage backend; its primary use case is speeding up access to image blobs from branch offices. Repository mirroring, by contrast, reflects content between separate registries: you can synchronize whitelisted repositories, or a subset of a source registry, into Quay. This makes it much easier to distribute images and related data through Quay.

Specifically, with repo mirroring, system administrators can:

● Continually synchronize repositories from external source registries into Quay (content ingress point);

● Mirror a subset of the entire registry content to distributed deployments;

● Set up and apply filters to sync a smaller subset of a repository using tag filters. This capability makes use of the container tool Skopeo. Since Skopeo communicates directly with registry servers (no daemon required), it's well suited for replication.
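Under the hood, each mirror sync boils down to Skopeo copying images straight from one registry to another. Here is a minimal sketch of what such a sync looks like, using hypothetical registry names and tags; the script only prints the commands it would run, since a real sync would need Skopeo installed plus network access and registry credentials:

```shell
# Source and destination repositories -- hypothetical examples.
SRC="registry.example.com/team/app"
DST="quay.example.com/mirrored/app"

# Print the skopeo invocation that would copy one tag between registries.
# Skopeo talks to the registry APIs directly; no container daemon is needed.
mirror_tag() {
  echo "skopeo copy docker://${SRC}:${1} docker://${DST}:${1}"
}

# A whitelisted subset of tags to synchronize, as with Quay's tag filters.
for tag in v1.0 v1.1; do
  mirror_tag "$tag"
done
```

Quay's repository mirroring automates this loop on a schedule, applying the configured tag filters before each sync.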

Quay, which started as a CoreOS project, is now better integrated with Red Hat OpenShift, IBM and Red Hat's flagship Kubernetes distribution. The Quay Setup Operator helps deploy and maintain Quay on OpenShift; thanks to it, a full Quay deployment can take only minutes, so OpenShift users can focus on their applications instead of managing their images. The Operator is still in Developer Preview, though, and not yet ready for production deployment.



Red Hat Quay already supports a variety of storage backends for both on-premise and cloud deployments, and it now also supports NooBaa's software-defined storage AWS S3 Operator. This flexible, lightweight, and scalable S3 API will also be available through the Red Hat Multi-Cloud Object Gateway Operator as part of Red Hat OpenShift Container Storage 3, with more of its features to be leveraged in future versions of Quay. This sets the stage for customers to use Red Hat OpenShift Container Storage with both current and future versions of Quay.
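For context, Quay selects its storage backend through the `DISTRIBUTED_STORAGE_CONFIG` section of its config.yaml. A sketch of pointing it at an S3-compatible endpoint such as one NooBaa provides; all hostnames, bucket names, and credentials below are placeholders:

```yaml
# Fragment of Quay's config.yaml -- an S3-compatible backend (placeholder values).
DISTRIBUTED_STORAGE_CONFIG:
  default:
    - RadosGWStorage        # driver for generic S3-compatible endpoints
    - access_key: <access-key>
      secret_key: <secret-key>
      bucket_name: quay-datastore
      hostname: s3.example.internal
      is_secure: true
      storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_PREFERENCE:
  - default
```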

With this release, you can also run the PostgreSQL DBMS in high-availability mode on OpenShift using the Crunchy Data PostgreSQL Operator. Red Hat previously recommended running the PostgreSQL database outside the Kubernetes cluster; now, with Kubernetes Operator technology, you can run stateful database applications on Kubernetes itself. For more database Operators, see OperatorHub.io.
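On OpenShift, Operators from a catalog are typically installed through an Operator Lifecycle Manager Subscription. A sketch of what requesting a PostgreSQL Operator looks like; the channel, package name, and catalog source here are assumptions that vary by operator and cluster:

```yaml
# OLM Subscription asking the cluster to install a PostgreSQL Operator.
# Channel, package name, and catalog source are placeholder assumptions.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: postgresql-operator
  namespace: openshift-operators
spec:
  channel: stable            # update channel published by the operator
  name: postgresql           # package name in the catalog
  source: community-operators
  sourceNamespace: openshift-marketplace
```

Once the Subscription is created, OLM resolves and installs the Operator, which then manages the database cluster's lifecycle.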

The new Quay also supports temporarily frozen or archived repositories. To do this, Quay 3.1 introduces a read-only repository mode, also known as frozen zones. This is designed to give developers more granular control over their environments at critical times; for example, certain zones can be frozen against any changes right before a production release.

Looking ahead, Red Hat will be integrating Quay even more deeply into OpenShift, the industry's most comprehensive enterprise Kubernetes platform. Features to come include advanced vulnerability scanning and enhanced support for distributed, multi-cloud, and air-gapped environments.
