UPDATED June 7th 2020

Read this in Chinese.

Sponsor me on Patreon to support more content like this.

In the previous post, we covered some of the basics of go-micro and Docker. We also introduced a second service. In this post, we're going to look at docker-compose, and how we can run our services together locally a little easier. We're going to introduce some different databases, and finally we'll introduce a third service into the mix.

Prerequisites

Install docker-compose: https://docs.docker.com/compose/install/

But first, let's look at databases.

Choosing a database

So far our data isn't actually persisted anywhere; it's held in memory in our services, and lost whenever our containers restart. So of course we need a way of persisting, storing, and querying our data.

The beauty of microservices is that you can use a different database per service. Of course you don't have to, and many people don't. In fact, I rarely do for small teams, as maintaining several different databases is a bigger mental leap than maintaining just one. But in some cases, one service's data might not fit the database you've used for your other services, so it makes sense to use something else. Microservices make this trivially easy, as your concerns are completely separate.

Choosing the 'correct' database for your services is an entirely different article (this one, for example), so we won't go into too much detail on the subject. However, I will say that if you have fairly loose or inconsistent datasets, then a NoSQL document store is a great fit. They're much more flexible about what you can store, and work well with JSON. We'll be using MongoDB for our NoSQL database. No particular reason, other than that it performs well, is widely used and supported, and has a great online community.

If your data is more strictly defined and relational by nature, then it can make sense to use a traditional RDBMS, or relational database. But there really aren't any hard rules; generally any will do the job. Be sure to look at your data structure, consider whether your service does more reading or more writing and how complex the queries will be, and use those as a starting point for choosing your databases. For our relational database, we'll be using Postgres. Again, no particular reason other than that it does the job well and I'm familiar with it. You could use MySQL, MariaDB, or something else.

Amazon and Google both have some fantastic hosted, fully managed solutions for both of these database types, if you want to avoid managing your own databases (generally advisable). Another great option is Compose, who will spin up fully managed, scalable instances of various database technologies, using the same cloud provider as your services to avoid connection latency.

Amazon:

RDBMS: https://aws.amazon.com/rds/

NoSQL: https://aws.amazon.com/dynamodb/

Google:

RDBMS: https://cloud.google.com/spanner/

NoSQL: https://cloud.google.com/datastore/

Now that we've discussed databases a little, let's do some coding!

docker-compose

In the last part of the series we looked at Docker, which lets us run our services in lightweight containers with their own run-times and dependencies. However, it's getting slightly cumbersome to run and manage each service with separate docker commands.

So let's take a look at docker-compose. Docker Compose allows you to define a list of Docker containers in a YAML file, and specify metadata about their run-time. Docker Compose services map more or less to the same docker commands we're already using. For example:

$ docker run -p 50052:50051 -e MICRO_SERVER_ADDRESS=:50051 shippy-service-vessel

Becomes:

```yaml
version: '3.5'
services:
  ...
  vessel:
    build: ./shippy-service-vessel
    ports:
      - 50052:50051
    environment:
      MICRO_SERVER_ADDRESS: ":50051"
  ...
```

Easy!

So let's create a docker-compose file in the root of our project:

```yaml
# docker-compose.yml
version: '3.5'

services:

  # Services
  consignment:
    restart: always
    build: ./shippy-service-consignment
    depends_on:
      - datastore
      - vessel
    ports:
      - 50051:50051
    environment:
      MICRO_SERVER_ADDRESS: ":50051"
      DB_HOST: "mongodb://datastore:27017"

  vessel:
    restart: always
    build: ./shippy-service-vessel
    ports:
      - 50052:50051
    environment:
      MICRO_SERVER_ADDRESS: ":50051"
      DB_HOST: "mongodb://datastore:27017"

  # Commands
  cli:
    build: ./shippy-cli-consignment

  # Database tier
  datastore:
    image: mongo
    container_name: "datastore"
    environment:
      - MONGO_DATA_DIR=/data/db
      - MONGO_LOG_DIR=/dev/null
    volumes:
      - ./data/db:/data/db # ensures data persistence between restarts
    ports:
      - 27017
    command: mongod --logpath=/dev/null
```

First we define the version of the Compose file format we want to use, then a list of services. There are other root-level definitions, such as networks and volumes, but we'll just focus on services for now.
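For illustration only, here's roughly how those other root-level sections sit alongside services. The network and volume names below are made up for the example; they aren't part of our stack:

```yaml
version: '3.5'

services:
  # ... our services, as above ...

# Hypothetical example of a user-defined network
networks:
  backend:
    driver: bridge

# Hypothetical example of a named volume
volumes:
  dbdata:
```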

First we define our services, including any environment variables etc. Then we define our database, using the official mongo image.

Each service is defined by its name. Then we include a build path, a reference to a location containing a Dockerfile, which tells docker-compose to build that service's image from the Dockerfile. You can also use image here to use a pre-built image instead, which is what we do for MongoDB. Then you define your port mappings and, finally, your environment variables.

To build your docker-compose stack, simply run $ docker-compose build , and to run it, $ docker-compose up . To run your stack in the background, use $ docker-compose up -d . You can also view a list of your currently running containers at any point using $ docker ps . Finally, you can stop all of your current containers by running $ docker stop $(docker ps -qa) .

Let's test it all by running our CLI tool. To run it through docker-compose, simply run $ docker-compose run cli . You should see everything working as before.

Let's start hooking up our first service, the consignment service. But I feel we should do some tidying up first: we've lumped everything into our main.go file. These are microservices, but that's no excuse to be messy! So let's create three more files in shippy-service-consignment : handler.go , datastore.go , and repository.go . I'm creating these within the root of our service, rather than as new packages and directories, which is perfectly adequate for a small microservice.

We will also need to update the build step in our Docker image to pick these new files up when compiling our service:

```dockerfile
FROM golang:alpine as builder

RUN apk update && apk upgrade && \
    apk add --no-cache git

RUN mkdir /app
WORKDIR /app

ENV GO111MODULE=on

COPY . .

RUN go mod download

# CHANGE THIS LINE:
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o shippy-service-consignment *.go

# Run container
FROM alpine:latest

RUN apk --no-cache add ca-certificates
RUN mkdir /app
WORKDIR /app

COPY --from=builder /app/shippy-service-consignment .

CMD ["./shippy-service-consignment"]
```

Here's a great article on organising Go codebases.

Let's start by removing all of the repository code from our main.go and re-purposing it to use the MongoDB driver:

```go
// shippy-service-consignment/repository.go
package main

import (
	"context"

	pb "github.com/<YourUsername>/shippy/shippy-service-consignment/proto/consignment"
	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
)

type Consignment struct {
	ID          string     `json:"id"`
	Weight      int32      `json:"weight"`
	Description string     `json:"description"`
	Containers  Containers `json:"containers"`
	VesselID    string     `json:"vessel_id"`
}

type Container struct {
	ID         string `json:"id"`
	CustomerID string `json:"customer_id"`
	UserID     string `json:"user_id"`
}

type Containers []*Container

func MarshalContainerCollection(containers []*pb.Container) []*Container {
	collection := make([]*Container, 0)
	for _, container := range containers {
		collection = append(collection, MarshalContainer(container))
	}
	return collection
}

func UnmarshalContainerCollection(containers []*Container) []*pb.Container {
	collection := make([]*pb.Container, 0)
	for _, container := range containers {
		collection = append(collection, UnmarshalContainer(container))
	}
	return collection
}

func UnmarshalConsignmentCollection(consignments []*Consignment) []*pb.Consignment {
	collection := make([]*pb.Consignment, 0)
	for _, consignment := range consignments {
		collection = append(collection, UnmarshalConsignment(consignment))
	}
	return collection
}

func UnmarshalContainer(container *Container) *pb.Container {
	return &pb.Container{
		Id:         container.ID,
		CustomerId: container.CustomerID,
		UserId:     container.UserID,
	}
}

func MarshalContainer(container *pb.Container) *Container {
	return &Container{
		ID:         container.Id,
		CustomerID: container.CustomerId,
		UserID:     container.UserId,
	}
}

// Marshal an input consignment type to a consignment model
func MarshalConsignment(consignment *pb.Consignment) *Consignment {
	containers := MarshalContainerCollection(consignment.Containers)
	return &Consignment{
		ID:          consignment.Id,
		Weight:      consignment.Weight,
		Description: consignment.Description,
		Containers:  containers,
		VesselID:    consignment.VesselId,
	}
}

func UnmarshalConsignment(consignment *Consignment) *pb.Consignment {
	return &pb.Consignment{
		Id:          consignment.ID,
		Weight:      consignment.Weight,
		Description: consignment.Description,
		Containers:  UnmarshalContainerCollection(consignment.Containers),
		VesselId:    consignment.VesselID,
	}
}

type repository interface {
	Create(ctx context.Context, consignment *Consignment) error
	GetAll(ctx context.Context) ([]*Consignment, error)
}

// MongoRepository implementation
type MongoRepository struct {
	collection *mongo.Collection
}

// Create -
func (repository *MongoRepository) Create(ctx context.Context, consignment *Consignment) error {
	_, err := repository.collection.InsertOne(ctx, consignment)
	return err
}

// GetAll -
func (repository *MongoRepository) GetAll(ctx context.Context) ([]*Consignment, error) {
	// An empty bson.D{} matches every document; passing nil as the
	// filter is an error in the official Mongo driver
	cur, err := repository.collection.Find(ctx, bson.D{})
	if err != nil {
		return nil, err
	}
	var consignments []*Consignment
	for cur.Next(ctx) {
		var consignment *Consignment
		if err := cur.Decode(&consignment); err != nil {
			return nil, err
		}
		consignments = append(consignments, consignment)
	}
	return consignments, cur.Err()
}
```

So there we have our code responsible for interacting with our MongoDB database. You'll notice we include marshalling and unmarshalling functions for converting between the structs generated from our protobuf definition, and our internal datastore models. You can in theory use the generated structs as your models as well, but this isn't necessarily recommended from a software design perspective, as you'd be coupling your data model to your delivery layer. It's good to maintain these boundaries between the various responsibilities in your software. It may seem like additional overhead, but it's important for the extensibility of your software.

We'll need to create the code that creates the master session/connection. Update shippy-service-consignment/datastore.go with the following:

```go
// shippy-service-consignment/datastore.go
package main

import (
	"context"
	"time"

	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

// CreateClient -
func CreateClient(ctx context.Context, uri string, retry int32) (*mongo.Client, error) {
	conn, err := mongo.Connect(ctx, options.Client().ApplyURI(uri))
	if err != nil {
		return nil, err
	}
	if err := conn.Ping(ctx, nil); err != nil {
		if retry >= 3 {
			return nil, err
		}
		retry = retry + 1
		time.Sleep(time.Second * 2)
		return CreateClient(ctx, uri, retry)
	}
	return conn, nil
}
```

Here we're creating a connection using a given connection string, then 'pinging' the connection to check that the datastore is reachable. We then include some basic retry logic: if the ping fails, the function calls itself again after a short sleep. If it exceeds three retries, we let the error bubble up to be handled by the caller.

Now let's refactor our main.go file:

```go
// shippy-service-consignment/main.go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	pb "github.com/EwanValentine/shippy/shippy-service-consignment/proto/consignment"
	vesselProto "github.com/EwanValentine/shippy/shippy-service-vessel/proto/vessel"
	"github.com/micro/go-micro/v2"
)

const (
	defaultHost = "datastore:27017"
)

func main() {

	// Set-up micro instance
	service := micro.NewService(
		micro.Name("shippy.service.consignment"),
	)

	service.Init()

	uri := os.Getenv("DB_HOST")
	if uri == "" {
		uri = defaultHost
	}

	client, err := CreateClient(context.Background(), uri, 0)
	if err != nil {
		log.Panic(err)
	}
	defer client.Disconnect(context.Background())

	consignmentCollection := client.Database("shippy").Collection("consignments")

	repository := &MongoRepository{consignmentCollection}
	vesselClient := vesselProto.NewVesselService("shippy.service.vessel", service.Client())
	h := &handler{repository, vesselClient}

	// Register handlers
	pb.RegisterShippingServiceHandler(service.Server(), h)

	// Run the server
	if err := service.Run(); err != nil {
		fmt.Println(err)
	}
}
```

The final bit of tidying up we need to do is to move our gRPC handler code out into our new handler.go file. So let's do that.

```go
// shippy-service-consignment/handler.go
package main

import (
	"context"

	pb "github.com/<YourUsername>/shippy/shippy-service-consignment/proto/consignment"
	vesselProto "github.com/<YourUsername>/shippy/shippy-service-vessel/proto/vessel"
	"github.com/pkg/errors"
)

type handler struct {
	repository
	vesselClient vesselProto.VesselService
}

// CreateConsignment - we created just one method on our service,
// which is a create method, which takes a context and a request as an
// argument, these are handled by the gRPC server.
func (s *handler) CreateConsignment(ctx context.Context, req *pb.Consignment, res *pb.Response) error {

	// Here we call a client instance of our vessel service with our consignment weight,
	// and the amount of containers as the capacity value
	vesselResponse, err := s.vesselClient.FindAvailable(ctx, &vesselProto.Specification{
		MaxWeight: req.Weight,
		Capacity:  int32(len(req.Containers)),
	})
	if vesselResponse == nil {
		return errors.New("error fetching vessel, returned nil")
	}
	if err != nil {
		return err
	}

	// We set the VesselId as the vessel we got back from our
	// vessel service
	req.VesselId = vesselResponse.Vessel.Id

	// Save our consignment
	if err = s.repository.Create(ctx, MarshalConsignment(req)); err != nil {
		return err
	}

	res.Created = true
	res.Consignment = req
	return nil
}

// GetConsignments -
func (s *handler) GetConsignments(ctx context.Context, req *pb.GetRequest, res *pb.Response) error {
	consignments, err := s.repository.GetAll(ctx)
	if err != nil {
		return err
	}
	res.Consignments = UnmarshalConsignmentCollection(consignments)
	return nil
}
```

Now let's do the same to our vessel service. I'm not going to demonstrate this in this post; you should have a good feel for it yourself at this point. Remember you can use my repository as a reference.

We will however add a new method to our vessel-service, which will allow us to create new vessels. As ever, let's start by updating our protobuf definition:

```protobuf
syntax = "proto3";

package vessel;

service VesselService {
  rpc FindAvailable(Specification) returns (Response) {}
  rpc Create(Vessel) returns (Response) {}
}

message Vessel {
  string id = 1;
  int32 capacity = 2;
  int32 max_weight = 3;
  string name = 4;
  bool available = 5;
  string owner_id = 6;
}

message Specification {
  int32 capacity = 1;
  int32 max_weight = 2;
}

message Response {
  Vessel vessel = 1;
  repeated Vessel vessels = 2;
  bool created = 3;
}
```

We've created a new Create method on our gRPC service, which takes a vessel and returns our generic response. We've also added a new field to our response message: a created bool. Re-build your proto definition to update this service. Now we'll add a new handler in shippy-service-vessel/handler.go and a new repository method:

```go
// shippy-service-vessel/handler.go
package main

import (
	"context"

	pb "github.com/<YourUsername>/shippy-service-vessel/proto/vessel"
)

type handler struct {
	repository
}

// FindAvailable vessels
func (s *handler) FindAvailable(ctx context.Context, req *pb.Specification, res *pb.Response) error {

	// Find the next available vessel
	vessel, err := s.repository.FindAvailable(ctx, MarshalSpecification(req))
	if err != nil {
		return err
	}

	// Set the vessel as part of the response message type
	res.Vessel = UnmarshalVessel(vessel)
	return nil
}

// Create a new vessel
func (s *handler) Create(ctx context.Context, req *pb.Vessel, res *pb.Response) error {
	if err := s.repository.Create(ctx, MarshalVessel(req)); err != nil {
		return err
	}
	res.Vessel = req
	return nil
}
```

```go
// shippy-service-vessel/repository.go
package main

import (
	"context"

	pb "github.com/<YourUsername>/shippy/shippy-service-vessel/proto/vessel"
	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
)

type repository interface {
	FindAvailable(ctx context.Context, spec *Specification) (*Vessel, error)
	Create(ctx context.Context, vessel *Vessel) error
}

type MongoRepository struct {
	collection *mongo.Collection
}

type Specification struct {
	Capacity  int32
	MaxWeight int32
}

func MarshalSpecification(spec *pb.Specification) *Specification {
	return &Specification{
		Capacity:  spec.Capacity,
		MaxWeight: spec.MaxWeight,
	}
}

func UnmarshalSpecification(spec *Specification) *pb.Specification {
	return &pb.Specification{
		Capacity:  spec.Capacity,
		MaxWeight: spec.MaxWeight,
	}
}

func MarshalVessel(vessel *pb.Vessel) *Vessel {
	return &Vessel{
		ID:        vessel.Id,
		Capacity:  vessel.Capacity,
		MaxWeight: vessel.MaxWeight,
		Name:      vessel.Name,
		Available: vessel.Available,
		OwnerID:   vessel.OwnerId,
	}
}

func UnmarshalVessel(vessel *Vessel) *pb.Vessel {
	return &pb.Vessel{
		Id:        vessel.ID,
		Capacity:  vessel.Capacity,
		MaxWeight: vessel.MaxWeight,
		Name:      vessel.Name,
		Available: vessel.Available,
		OwnerId:   vessel.OwnerID,
	}
}

type Vessel struct {
	ID        string
	Capacity  int32
	Name      string
	Available bool
	OwnerID   string
	MaxWeight int32
}

// FindAvailable - checks a specification against the stored vessels;
// if a vessel's capacity and max weight are at least the values
// required by the specification, return that vessel.
func (repository *MongoRepository) FindAvailable(ctx context.Context, spec *Specification) (*Vessel, error) {
	filter := bson.D{
		{"capacity", bson.D{{"$gte", spec.Capacity}}},
		{"maxweight", bson.D{{"$gte", spec.MaxWeight}}},
	}
	vessel := &Vessel{}
	if err := repository.collection.FindOne(ctx, filter).Decode(vessel); err != nil {
		return nil, err
	}
	return vessel, nil
}

// Create a new vessel
func (repository *MongoRepository) Create(ctx context.Context, vessel *Vessel) error {
	_, err := repository.collection.InsertOne(ctx, vessel)
	return err
}
```

Now we can create vessels! I've updated main.go to use our new Create method to store our dummy data.

Re-build your stack with $ docker-compose build and re-run it with $ docker-compose up .

User service

Now let's create a third service. We'll start by updating our docker-compose.yml file. Also, to mix things up a bit, we'll add Postgres to our docker stack for our user service:

```yaml
...
  user:
    build: ./shippy-service-user
    ports:
      - 50053:50051
    environment:
      MICRO_SERVER_ADDRESS: ":50051"
  ...
  database:
    image: postgres:alpine
    environment:
      POSTGRES_PASSWORD: "password"
      POSTGRES_USER: "admin"
    ports:
      - 5432:5432
```

Now create a shippy-service-user directory in your project root and, as per the previous services, create the following files: handler.go, main.go, repository.go, database.go, Dockerfile, Makefile, a sub-directory for our proto files, and finally the proto file itself: proto/user/user.proto .

Add the following to user.proto :

```protobuf
syntax = "proto3";

package user;

service UserService {
  rpc Create(User) returns (Response) {}
  rpc Get(User) returns (Response) {}
  rpc GetAll(Request) returns (Response) {}
  rpc Auth(User) returns (Token) {}
  rpc ValidateToken(Token) returns (Token) {}
}

message User {
  string id = 1;
  string name = 2;
  string company = 3;
  string email = 4;
  string password = 5;
}

message Request {}

message Response {
  User user = 1;
  repeated User users = 2;
  repeated Error errors = 3;
}

message Token {
  string token = 1;
  bool valid = 2;
  repeated Error errors = 3;
}

message Error {
  int32 code = 1;
  string description = 2;
}
```

Now let's generate our protobuf code, using:

$ protoc --proto_path=. --go_out=. --micro_out=. proto/user/user.proto

As per our previous services, we've created some code to interface with our gRPC methods. We're only going to make a few of them work in this part of the series: we just want to be able to create and fetch a user. In the next part of the series we'll look at authentication and JWT, so we'll leave anything token-related for now. Your handlers should look like this:

```go
// shippy-service-user/handler.go
package main

import (
	"context"
	"errors"

	pb "github.com/EwanValentine/shippy/shippy-service-user/proto/user"
	"golang.org/x/crypto/bcrypt"
)

type authable interface {
	Decode(token string) (*CustomClaims, error)
	Encode(user *pb.User) (string, error)
}

type handler struct {
	repository   Repository
	tokenService authable
}

func (s *handler) Get(ctx context.Context, req *pb.User, res *pb.Response) error {
	result, err := s.repository.Get(ctx, req.Id)
	if err != nil {
		return err
	}
	user := UnmarshalUser(result)
	res.User = user
	return nil
}

func (s *handler) GetAll(ctx context.Context, req *pb.Request, res *pb.Response) error {
	results, err := s.repository.GetAll(ctx)
	if err != nil {
		return err
	}
	users := UnmarshalUserCollection(results)
	res.Users = users
	return nil
}

func (s *handler) Auth(ctx context.Context, req *pb.User, res *pb.Token) error {
	user, err := s.repository.GetByEmail(ctx, req.Email)
	if err != nil {
		return err
	}

	// Compares our given password against the hashed password
	// stored in the database
	if err := bcrypt.CompareHashAndPassword([]byte(user.Password), []byte(req.Password)); err != nil {
		return err
	}

	token, err := s.tokenService.Encode(req)
	if err != nil {
		return err
	}
	res.Token = token
	return nil
}

func (s *handler) Create(ctx context.Context, req *pb.User, res *pb.Response) error {

	// Generates a hashed version of our password
	hashedPass, err := bcrypt.GenerateFromPassword([]byte(req.Password), bcrypt.DefaultCost)
	if err != nil {
		return err
	}
	req.Password = string(hashedPass)

	if err := s.repository.Create(ctx, MarshalUser(req)); err != nil {
		return err
	}

	// Strip the password back out, so's we're not returning it
	req.Password = ""
	res.User = req
	return nil
}

func (s *handler) ValidateToken(ctx context.Context, req *pb.Token, res *pb.Token) error {
	claims, err := s.tokenService.Decode(req.Token)
	if err != nil {
		return err
	}
	if claims.User.Id == "" {
		return errors.New("invalid user")
	}
	res.Valid = true
	return nil
}
```

Now let's add our repository code:

```go
// shippy-service-user/repository.go
package main

import (
	"context"

	pb "github.com/<YourUsername>/shippy/shippy-service-user/proto/user"
	"github.com/jmoiron/sqlx"
	uuid "github.com/satori/go.uuid"
)

// Note: sqlx reads the `db` struct tag (not `sql`) when scanning rows
type User struct {
	ID       string `db:"id"`
	Name     string `db:"name"`
	Email    string `db:"email"`
	Company  string `db:"company"`
	Password string `db:"password"`
}

type Repository interface {
	GetAll(ctx context.Context) ([]*User, error)
	Get(ctx context.Context, id string) (*User, error)
	Create(ctx context.Context, user *User) error
	GetByEmail(ctx context.Context, email string) (*User, error)
}

type PostgresRepository struct {
	db *sqlx.DB
}

func NewPostgresRepository(db *sqlx.DB) *PostgresRepository {
	return &PostgresRepository{db}
}

func MarshalUserCollection(users []*pb.User) []*User {
	// Start with length 0; making the slice with a non-zero length
	// and then appending would leave nil entries at the front
	u := make([]*User, 0, len(users))
	for _, val := range users {
		u = append(u, MarshalUser(val))
	}
	return u
}

func MarshalUser(user *pb.User) *User {
	return &User{
		ID:       user.Id,
		Name:     user.Name,
		Email:    user.Email,
		Company:  user.Company,
		Password: user.Password,
	}
}

func UnmarshalUserCollection(users []*User) []*pb.User {
	u := make([]*pb.User, 0, len(users))
	for _, val := range users {
		u = append(u, UnmarshalUser(val))
	}
	return u
}

func UnmarshalUser(user *User) *pb.User {
	return &pb.User{
		Id:       user.ID,
		Name:     user.Name,
		Email:    user.Email,
		Company:  user.Company,
		Password: user.Password,
	}
}

func (r *PostgresRepository) GetAll(ctx context.Context) ([]*User, error) {
	users := make([]*User, 0)
	// SelectContext scans multiple rows into a slice;
	// GetContext is for a single row
	if err := r.db.SelectContext(ctx, &users, "select * from users"); err != nil {
		return nil, err
	}
	return users, nil
}

func (r *PostgresRepository) Get(ctx context.Context, id string) (*User, error) {
	user := &User{}
	if err := r.db.GetContext(ctx, user, "select * from users where id = $1", id); err != nil {
		return nil, err
	}
	return user, nil
}

func (r *PostgresRepository) Create(ctx context.Context, user *User) error {
	user.ID = uuid.NewV4().String()
	query := "insert into users (id, name, email, company, password) values ($1, $2, $3, $4, $5)"
	_, err := r.db.ExecContext(ctx, query, user.ID, user.Name, user.Email, user.Company, user.Password)
	return err
}

func (r *PostgresRepository) GetByEmail(ctx context.Context, email string) (*User, error) {
	query := "select * from users where email = $1"
	user := &User{}
	if err := r.db.GetContext(ctx, user, query, email); err != nil {
		return nil, err
	}
	return user, nil
}
```

We're just using straight-up SQL for our service. You could use an ORM, but for a single microservice, plain old SQL typically does fine. Normally you'd use an ORM to manage complex interactions between entities, but with microservices you tend to do this at the network level rather than the code/entity level, which simplifies the data layer, as each microservice typically deals with a single entity.

We need to be able to test creating a user, so let's create another CLI tool, this time shippy-cli-user, in our project root. It's similar to our consignment CLI, but this time:

```go
// shippy-cli-user/main.go
package main

import (
	"context"
	"fmt"
	"log"

	proto "github.com/<YourUsername>/shippy/shippy-service-user/proto/user"
	"github.com/micro/cli/v2"
	"github.com/micro/go-micro/v2"
)

func createUser(ctx context.Context, service micro.Service, user *proto.User) error {
	client := proto.NewUserService("shippy.service.user", service.Client())
	rsp, err := client.Create(ctx, user)
	if err != nil {
		return err
	}

	// print the response
	fmt.Println("Response: ", rsp.User)
	return nil
}

func main() {
	// create and initialise a new service, declaring the
	// flags our command accepts
	service := micro.NewService(
		micro.Flags(
			&cli.StringFlag{Name: "name"},
			&cli.StringFlag{Name: "email"},
			&cli.StringFlag{Name: "company"},
			&cli.StringFlag{Name: "password"},
		),
	)

	service.Init(
		micro.Action(func(c *cli.Context) error {
			name := c.String("name")
			email := c.String("email")
			company := c.String("company")
			password := c.String("password")

			ctx := context.Background()

			user := &proto.User{
				Name:     name,
				Email:    email,
				Company:  company,
				Password: password,
			}

			if err := createUser(ctx, service, user); err != nil {
				log.Println("error creating user: ", err.Error())
				return err
			}
			return nil
		}),
	)
}
```

Here we've used go-micro's command-line helper, which is really neat. We also need to add the new CLI to our docker-compose file:

```yaml
# docker-compose.yml
...
  user-cli:
    build: ./shippy-cli-user
```

Now run the CLI through docker-compose, passing your user details as flags, and you should see the created user in the response!

This isn't yet secure — although passwords are hashed before being stored, our endpoints are open to anyone — but in the next part of the series we'll look at authentication and JWT tokens across our services.

So there we have it: we've created an additional service and an additional command-line tool, and we've started to persist our data using two different database technologies. We've covered a lot of ground in this post, and apologies if we went over anything too quickly or assumed too much knowledge.

If you are finding this series useful, and you use an ad-blocker (who can blame you), please consider chucking me a couple of quid for my time and effort. Cheers! https://monzo.me/ewanvalentine

Or, sponsor me on Patreon to support more content like this.