In the previous post, we examined different (SSL/TLS) certificate combinations to secure a gRPC channel. As the number of endpoints grows, this process soon gets too complicated to carry out manually. It’s time to look at how to automate the generation of signed certificates our gRPC endpoints can use without our intervention. We will explore alternatives for private and public domains. If you want to jump directly into the code, check out the repository.

This is part 2 of a series of three posts. In part 1 we covered setting gRPC TLS connections manually. Mutual authentication will be discussed in Part 3.

Introduction

We will need a Certificate Authority (CA) we can interact with from our Go gRPC endpoints.

For private domains our CA of choice will be the Vault PKI Secrets Engine. In order to generate certificate signing requests (CSR) and renewals from our gRPC endpoints, we will use Certify.

For public certificate generation and distribution, we’ll go with Let’s Encrypt: a free, automated, and open Certificate Authority. How cool is that? The only thing it requires from you is to demonstrate control over the domain through the Automatic Certificate Management Environment (ACME) protocol. This means we need an ACME client; fortunately, there is a list of Go libraries we can choose from. Here, we will use autocert for its ease of use and its support for the TLS-ALPN-01 challenge.

Private domains: Vault and Certify

Vault

Vault is an open source secrets management and data protection project that can store and control access to certificates, among other secrets such as passwords and tokens. It’s distributed as a binary you can place anywhere in your $PATH. If you want to learn more about Vault, its Getting Started guide is a good place to begin. All the details of the setup used for this post are documented here.

First, we run Vault with vault server -config=vault_config.hcl. The config file (vault_config.hcl) specifies the storage backend where Vault data is stored. For simplicity, we are just using a local file. You could alternatively store it in-memory, on a cloud provider, and so on. See all the options in the storage Stanza documentation.

storage "file" {
  path = ".../data"
}

Additionally, we specify the address Vault will bind to. TLS is enabled by default, so we need to provide a certificate and private key pair. If you choose to self-sign these (see these instructions for an example), make sure you keep the Root certificate (ca.cert) handy; you will need it later on to make requests to Vault (*). Other TCP config options are documented in tcp Listener Parameters.

listener "tcp" {
  address       = "localhost:8200"
  tls_cert_file = ".../vault.pem"
  tls_key_file  = ".../vault.key"
}

After initializing Vault’s server and unsealing Vault, you can validate it is working with an API call.

curl \
  --cacert ca.cert \
  -i https://localhost:8200/v1/sys/health

HTTP/1.1 200 OK
...
{"initialized":true,"sealed":false,"standby":false, ...}

The next step is to enable the Vault PKI Secrets Engine backend with vault secrets enable pki, generate a CA certificate and private key that Vault will use to sign certificates, and create a role (my-role) that can make requests for our domain (localhost). See all the details here.

vault write pki/roles/my-role \
  allowed_domains=localhost \
  allow_subdomains=true \
  max_ttl=72h

Certify

Now that our Certificate Authority (CA) is ready to go, we can make requests to it to have our certificates signed. Which certificates, you might ask, and how do we automatically tell our gRPC endpoints to use them if we don’t have them yet? Enter Certify, a Go library that performs certificate distribution and renewal automatically, whenever it’s needed. It works not only with Vault as the CA backend, but also with Cloudflare CFSSL and AWS ACM.

The first step to configure Certify is to specify the backend issuer, Vault in this case.

In this example we identify our Vault instance and access credentials by providing:

- The listener address we configured for Vault (localhost:8200).
- The token (TOKEN) we get after initializing Vault’s server.
- The role we created (my-role).
- The CA certificate of the issuer of the certs we provided in Vault’s config. cp is an x509.CertPool that includes ca.cert in this case, as noted in (*).

You can, optionally, provide certificate details via CertConfig. We do so in this case to specify that we want to generate the private keys for our Certificate Signing Requests (CSRs) using the RSA algorithm instead of Certify’s default, ECDSA P256.

Certify hooks into the GetCertificate and GetClientCertificate methods of tls.Config via the Certify type, which we now build with the previously collected information, a Cache method to avoid requesting a new certificate for every incoming connection, and a logging plugin (go-kit/log in this example).

The last step is to create a tls.Config that points to the GetCertificate method of the Certify instance we just created, and then use this config in our gRPC server.

You can reproduce this by running make run-server-vault in one tab and make run-client-ca in another, after pointing the environment variable CAFILE to Vault’s certificate file (ca-vault.cert), which you can get as follows:

Server:

$ make run-server-vault
...
level=debug time=2019-07-15T19:37:12.694833Z caller=logger.go:36 server_name=localhost remote_addr=[::1]:64103 msg="Getting server certificate"
level=debug time=2019-07-15T19:37:12.694936Z caller=logger.go:36 msg="Requesting new certificate from issuer"
level=debug time=2019-07-15T19:37:12.815081Z caller=logger.go:36 serial=451331845556263599050597627925015657462097174315 expiry=2019-07-18T19:37:12Z msg="New certificate issued"
level=debug time=2019-07-15T19:37:12.815115Z caller=logger.go:36 serial=451331845556263599050597627925015657462097174315 took=120.284897ms msg="Certificate found"

Client:

$ export CAFILE="ca-vault.cert"
$ make run-client-ca
...
User found: Nicolas

Inspecting the certificate we generated and had signed automatically will reveal some of the specifics we just configured.

openssl x509 -in grpc-cert.pem -text -noout

Certificate:
    Data:
        ...
        Validity
            Not Before: Jul 15 19:36:42 2019 GMT
            Not After : Jul 18 19:37:12 2019 GMT
        Subject: CN=localhost
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:bf:3c:a3:d8:8c:d8:3c:d0:bd:0c:e0:4c:9d:4d:
                    ...
        X509v3 extensions:
            ...
            Authority Information Access:
                CA Issuers - URI:https://localhost:8200/v1/pki/ca
            X509v3 Subject Alternative Name:
                DNS:localhost, DNS:localhost, IP Address:127.0.0.1, IP Address:0:0:0:0:0:0:0:1

Public Domains: Let’s Encrypt and autocert

Let’s Encrypt

Can we use Let’s Encrypt for gRPC? Well, it did work for me. The question might be whether having a public-facing gRPC API is a good idea or not. Google Cloud seems to be doing it (see Google APIs), but it is not a very common practice. Anyway, here is how I was able to expose a public gRPC API with certificates we automatically get from Let’s Encrypt.

It is important to emphasize that this example is not meant to be replicated for internal/private services. In talking to Jacob Hoffman-Andrews from Let’s Encrypt, he mentioned:

In general, I recommend that people don’t use Let’s Encrypt certificates for gRPC or other internal RPC services. In my opinion, it’s both easier and safer to generate a single-purpose internal CA using something like minica and generate both server and client certificates with it. That way you don’t have to open up your RPC servers to the outside internet, plus you limit the scope of trust to just what’s needed for your internal RPCs, plus you can have a much longer certificate lifetime, plus you can get revocation that works.

Let’s Encrypt uses the ACME protocol to verify that an applicant for a certificate legitimately represents the domain name(s) in the certificate. It also provides facilities for other certificate management functions, such as certificate revocation. ACME describes an extensible framework for automating the issuance and domain validation procedure, thereby allowing servers and infrastructure software to obtain certificates without user interaction. [RFC 8555]

In a nutshell, all we need to do in order to leverage Let’s Encrypt is to run an ACME client. We will use autocert in this example.

autocert

The autocert package provides automatic access to certificates from Let’s Encrypt and any other ACME-based CA. However, keep in mind this package is a work in progress and makes no API stability promises. [Documentation]

In terms of code, the first step is to declare a Manager with: a Prompt that indicates acceptance of the CA’s Terms of Service during account registration, a Cache method to store and retrieve previously obtained certificates (a directory on the local filesystem in this case), a HostPolicy with the list of domains we can respond for, and, optionally, an Email address to notify about problems with issued certificates.

This Manager will create a TLS config for us automagically, taking care of the interaction with Let’s Encrypt. The client, on the other hand, just needs a pointer to an empty TLS config (&tls.Config{}), which will, by default, load the system CA certificates and therefore trust our CA (Let’s Encrypt).

If you are paying close attention, you might have noticed we didn’t include the listener section in this example. The reason is how the ACME TLS-based challenge TLS-ALPN-01 works. The TLS with Application-Layer Protocol Negotiation (TLS ALPN) validation method proves control over a domain name by requiring the client to configure a TLS server to respond to specific connection attempts using the ALPN extension with identifying information. [draft-ietf-acme-tls-alpn-05]

In other words, we need to listen for HTTPS requests. The good news is autocert has you covered and can create this special Listener with manager.Listener(). Now, the question is whether HTTPS and gRPC should listen on the same port or not. Long story short, I couldn’t make it work with independent ports, but if both services listen on 443, it works flawlessly.

gRPC and HTTPS on the same port… say what!? I know, just because you can doesn’t mean you should. However, the Go gRPC library provides the ServeHTTP method, which can help us route incoming requests to the corresponding service. Note that ServeHTTP uses Go’s HTTP/2 server implementation, which is totally separate from grpc-go’s HTTP/2 server; performance and features may vary between the two paths. [go-grpc] You can see some benchmarks in gRPC serveHTTP performance penalty. Having said that, routing would then look like this:

So we can listen for requests with http.Serve, providing it the grpcHandlerFunc handler we just created together with the Manager’s listener.

You can reproduce this by running make run-server-public in one tab and make run-client-default in another. For this to work, you need to own a domain (HOST). In my case, I used:

export HOST=grpc.nleiva.com
export PORT=443
make run-server-public

Now, I can make gRPC requests from anywhere in the world over the Internet with:

$ export HOST=grpc.nleiva.com
$ export PORT=443
$ make run-client-default
...
User found: Nicolas

Finally, we can take a look at the certificate generated by making an HTTPS request.

Conclusion

Managing and distributing certificates for your gRPC endpoints shouldn’t be a hassle if you leverage some of the resources discussed in this post.

So far, while the connection has been encrypted and the client has validated the integrity of the server, the server hasn’t authenticated the client. This might be required for some microservices scenarios, which we will cover in the next part of this blog series, when we review Mutual TLS. Stay tuned!

Further reading:

I would like to thank Alex, Jacob and Johan for their help with this post and keeping me honest.