Circuit Breaker

In general, such common functionality does not need to be developed from scratch: the community already offers excellent open-source libraries, such as hystrix-go and gobreaker.

In addition, Micro provides plugins that adapt these libraries, e.g. the hystrix plugin and the gobreaker plugin.

With the help of these plugins, using a Circuit Breaker in Micro becomes very easy. Take hystrix as an example:

```go
import (
	...
	"github.com/micro/go-plugins/wrapper/breaker/hystrix/v2"
	...
)

func main() {
	...
	// New Service
	service := micro.NewService(
		micro.Name("com.foo.breaker.example"),
		micro.WrapClient(hystrix.NewClientWrapper()),
	)

	// Initialise service
	service.Init()
	...
}
```

All we need to do is register the hystrix plugin during service creation. From then on, every call to a remote service is tracked by the plugin: when a request times out or the number of concurrent requests reaches the limit, an error is returned to the caller immediately.

So what are the default limits for concurrency and timeout? The answer lies in the source code of the hystrix-go library. Looking at github.com/afex/hystrix-go/hystrix/settings.go, you will see several package-level variables:

```go
...
// DefaultTimeout is how long to wait for command to complete, in milliseconds
DefaultTimeout = 1000

// DefaultMaxConcurrent is how many commands of the same type can run at the same time
DefaultMaxConcurrent = 10
...
```

So the default timeout is 1 second (1000 ms), and the default concurrency limit is 10.

Note: there are more settings besides these two, but discussing all of them is beyond the scope of this post. If you are interested, the official documentation of the hystrix-go library covers them in detail.

If the default settings do not meet your requirements, you can modify them as follows:

```go
import (
	...
	hystrixGo "github.com/afex/hystrix-go/hystrix"
	"github.com/micro/go-plugins/wrapper/breaker/hystrix/v2"
	...
)

func main() {
	...
	// New Service
	service := micro.NewService(
		micro.Name("com.foo.breaker.example"),
		micro.WrapClient(hystrix.NewClientWrapper()),
	)

	// Initialise service
	service.Init()

	// The defaults are package-level variables in hystrix-go,
	// so they must be set through the hystrixGo alias.
	hystrixGo.DefaultMaxConcurrent = 3 // change concurrency limit to 3
	hystrixGo.DefaultTimeout = 200     // change timeout to 200 milliseconds
	...
}
```

As shown above, we can change the default timeout and the default concurrency limit.

You may have a question about DefaultMaxConcurrent: what is its scope? Suppose there are 3 services and each service has 3 different methods, and we want to call all of these methods at the same time. Does that mean we must set DefaultMaxConcurrent to a number greater than 3 × 3 to achieve full concurrency?

To answer this question, two points need to be figured out:

First, what is the target of DefaultMaxConcurrent? As you can see from the hystrix documentation, it’s the command in hystrix:

DefaultMaxConcurrent is how many commands of the same type can run at the same time

Next, you need to know how the hystrix plugin maps different methods to commands. Check out github.com/micro/go-plugins/wrapper/breaker/hystrix/v2/hystrix.go, and you will find the relevant code:

```go
import (
	"github.com/afex/hystrix-go/hystrix"
	...
)

...

func (c *clientWrapper) Call(ctx context.Context, req client.Request, rsp interface{}, opts ...client.CallOption) error {
	return hystrix.Do(req.Service()+"."+req.Endpoint(), func() error {
		return c.Client.Call(ctx, req, rsp, opts...)
	}, nil)
}

...
```

Note the expression req.Service()+"."+req.Endpoint(): this IS the hystrix command name. The command contains no node information, which means that for a given service there is no difference between single-node and multi-node deployment: all nodes share one limit.

At this point it is clear: each method of each service is counted independently and does not affect the others. The scope of DefaultMaxConcurrent is method-level, regardless of the number of nodes.

In practice, different methods may require different limits. How can we achieve that? From the source code above we know that a service method maps to a hystrix command, and hystrix supports independent settings per command via hystrix.ConfigureCommand:

```go
...
hystrix.ConfigureCommand("com.serviceA.methodFoo", hystrix.CommandConfig{
	MaxConcurrentRequests: 50,
	Timeout:               10,
})

hystrix.ConfigureCommand("com.serviceB.methodBar", hystrix.CommandConfig{
	Timeout: 60,
})
...
```

With the above code, we set different limits for different methods. Any field of the hystrix.CommandConfig struct that is not specified falls back to the default value.

Summary: the Circuit Breaker acts on the client side. With an appropriate threshold, it ensures that client resources are not exhausted: even if a dependent service is unhealthy, the client quickly returns an error instead of making the caller wait for a long time.

Rate Limiter

Similar to the Circuit Breaker, rate limiting is a commonly used function in distributed systems. The difference is that a Rate Limiter takes effect on the server side, and its role is to protect the server: once the rate of request processing reaches a preset limit, the server stops accepting new requests until in-flight requests complete. A Rate Limiter prevents the server from crashing under a flood of requests.

Here’s an analogy: suppose we run a restaurant that can seat 10 guests. If 100 guests arrive for dinner at the same time, the best way to handle it is to serve the first 10 and tell the other 90: we are currently unable to serve you, please come back another day. Although those 90 guests will be unhappy, we guarantee that at least the first 10 can enjoy their meal.

Without a Rate Limiter, all 100 guests enter the restaurant, the kitchen is overwhelmed, the guests have nowhere to sit, and no one gets served. The entire restaurant ends up paralyzed.

Using a Rate Limiter in Micro is very simple, too: a single line of code is enough. There are currently two Rate Limiter plugins available; this article uses the uber rate limiter plugin as an example (of course, if the existing plugins do not meet your requirements, you can always develop a more suitable one yourself). Let’s modify the source file hello-service/main.go:

```go
package main

import (
	...
	limiter "github.com/micro/go-plugins/wrapper/ratelimiter/uber/v2"
	...
)

func main() {
	const QPS = 100

	// New Service
	service := micro.NewService(
		micro.Name("com.foo.srv.hello"),
		micro.Version("latest"),
		micro.WrapHandler(limiter.NewHandlerWrapper(QPS)),
	)
	...
}
```

The above code adds a server-side Rate Limiter to hello-service, with a QPS cap of 100. The limit is shared by the methods of all handlers in this service; in other words, the scope of this restriction is service-level.