RESTful services are becoming more and more popular, and their clean style is appealing. It is disappointing, however, when security issues knock on the door and wake us from this beautiful dream. In this article we present a solution, Nginx plus third-party modules, that can protect existing RESTful services without requiring any changes to the service code.

1. Put Nginx in Front of Existing RESTful Services

As we all know, Nginx can act as a reverse proxy in front of HTTP services to serve static files quickly or to balance load. It can also provide additional protection for HTTP services, such as access control based on IP addresses and HTTP request methods, HTTP Basic Auth, HTTPS, HttpOnly cookies, and limits on request frequency and on the maximum number of concurrent connections.

Suppose we deploy Nginx on the same machine as our existing RESTful services. We can simply bind the RESTful services to the address 127.0.0.1 so that only local processes can access them, and then put Nginx in front of them as a reverse proxy. For instance, the RESTful services listen at 127.0.0.1:8080 and Nginx listens on port 80.

http {
    upstream restfulServices {
        server 127.0.0.1:8080;
    }

    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://restfulServices;
        }
    }
}

2. Access Control Based on IP Addresses and Request Methods

In RESTful services the HTTP request methods carry specific meanings: GET queries records, PUT updates a record (or creates it if it does not exist), POST creates records, and DELETE deletes them. We will therefore use the third-party module Nginx Access Plus instead of the built-in Nginx Access Module, which can only restrict client addresses.

For example, we accept all GET and HEAD requests, but deny POST, PUT and DELETE requests unless they come from 192.168.1.*.

location / {
    allow_method all get|head;
    allow_method 192.168.1.0/24 post|put|delete;
    deny_method all all;
    proxy_pass http://restfulServices;
}

3. HTTP Basic Auth

Sometimes we cannot simply restrict client addresses because they change frequently. In that case we can try HTTP Basic Auth. Here is an example:

location / {
    auth_basic 'private site';
    auth_basic_user_file 'conf/htpasswd-file';
    proxy_pass http://restfulServices;
}

The file conf/htpasswd-file can be created and managed with the htpasswd tool, which uses MD5-hashed passwords by default. Nginx also supports other password types, such as plain text or passwords encrypted with crypt; for more details please see the Nginx documentation on auth_basic_user_file.
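If the htpasswd tool is not at hand, openssl can generate an equivalent entry; this is a sketch assuming openssl is installed, and the user name alice and the password secret are placeholders:

```shell
# Generate an htpasswd-compatible line for user "alice" using the apr1 (MD5)
# scheme, the same one htpasswd uses by default; "secret" is a placeholder
mkdir -p conf
printf 'alice:%s\n' "$(openssl passwd -apr1 secret)" > conf/htpasswd-file
```

To add more users, append further lines with >> instead of overwriting the file.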

If we are worried that the user name and password will be transmitted in plain text over HTTP, we can use HTTPS instead. We will discuss this in section 6, Enable HTTPS and Secure Cookie.

4. Enable HttpOnly

Our RESTful services may be invoked directly by JavaScript in the browser, and sometimes we use a cookie to store security-related information such as a session ID. To protect the cookie from XSS attacks we need to enable the HttpOnly flag; a more detailed discussion can be found in the article Protecting Your Cookies: HttpOnly. Enabling HttpOnly with Nginx is dead easy when we use the third-party module Nginx HTTP Headers More. Here is an example:

location / {
    more_set_headers 'Set-Cookie: $sent_http_set_cookie; HttpOnly';
    proxy_pass http://restfulServices;
}

5. Limit Request Frequency/Connections

Flood/DoS attacks are very troublesome even if our RESTful services themselves are secure. Although we will not lose any important information, we will lose customers, who will complain that the services are too slow or not usable at all. With Nginx we can limit the request frequency and the maximum number of concurrent connections, achieving a certain degree of protection against DoS attacks. For example, we limit requests from one IP address to an average of no more than 3 per second, with bursts not exceeding 5; excessive requests are terminated with error 503 (Service Temporarily Unavailable) without any delay.

http {
    limit_req_zone $binary_remote_addr zone=myLimitZone:10m rate=3r/s;
    ...
    server {
        ...
        location / {
            limit_req zone=myLimitZone burst=5 nodelay;
            ...
            proxy_pass http://restfulServices;
        }
    }
    ...
}

Similar to limiting the request frequency, we can also limit the number of concurrent connections. For example, we limit one IP address to no more than 3 concurrent connections; excessive connections are terminated with error 503 (Service Temporarily Unavailable).

http {
    limit_conn_zone $binary_remote_addr zone=myLimitConnZone:10m;
    ...
    server {
        ...
        location / {
            limit_conn myLimitConnZone 3;
            ...
            proxy_pass http://restfulServices;
        }
    }
    ...
}

6. Enable HTTPS and Secure Cookie

HTTPS termination in Nginx is generally faster than HTTPS provided by Java web servers, so using Nginx as an HTTPS front end is very common; that is, the communication:

Client <---HTTP---> RESTful Services.

will be changed into:

Client <---HTTPS----> Nginx <---HTTP---> RESTful Services.

Here is an example for HTTPS with Nginx:

http {
    upstream restfulServices {
        server 127.0.0.1:8080;
    }

    server {
        listen 443 ssl;
        server_name example.com;
        ssl_certificate /opt/mycert/my-unified.crt;
        ssl_certificate_key /opt/mycert/my.key;

        location / {
            proxy_pass http://restfulServices;
        }
    }
}
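The my-unified.crt file above is presumably the server certificate concatenated with the CA's intermediate certificates: Nginx reads the whole chain from the single file named by ssl_certificate, with our own certificate first. A sketch with placeholder file names (my.crt and intermediate.crt stand in for the real PEM files):

```shell
# Placeholder PEM files standing in for the real certificates
printf -- '-----BEGIN CERTIFICATE-----\nserver-cert\n-----END CERTIFICATE-----\n' > my.crt
printf -- '-----BEGIN CERTIFICATE-----\nintermediate-cert\n-----END CERTIFICATE-----\n' > intermediate.crt

# The order matters: our own certificate first, then the intermediate chain
cat my.crt intermediate.crt > my-unified.crt
```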

If we have other HTTP services but want the browser to send the cookie only via HTTPS, we can enable the Secure cookie flag. Just like the HttpOnly flag, we use the third-party module Nginx HTTP Headers More. Here is an example:

location / {
    more_set_headers 'Set-Cookie: $sent_http_set_cookie; HttpOnly; Secure';
    proxy_pass http://restfulServices;
}
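To make sure clients actually reach the services over HTTPS, the plain-HTTP server on port 80 can simply redirect everything to the HTTPS server. A minimal sketch, reusing the server name example.com from the example above:

```nginx
server {
    listen 80;
    server_name example.com;
    # Permanently redirect every plain-HTTP request to its HTTPS equivalent
    return 301 https://$host$request_uri;
}
```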

7. More Advanced Tasks

If we want to do more advanced tasks with Nginx, such as:

- Our authentication module needs to access an external user/password store such as MySQL, MongoDB and so on.
- The key for limit_conn_zone/limit_req_zone is an Nginx variable that must be computed in some complex way; for instance, the key for limit_conn_zone is an Nginx variable $user_group computed by an Nginx rewrite handler.
- Encrypt cookie values coming from the backend services, and decrypt them again before passing them to the backend services when clients send them back.
- Dynamic balancing or proxying, e.g. forwarding requests from our demo users to the services in the sandbox without affecting normal users.

we may need other powerful third-party Nginx modules, such as: