cbcf9dde327c475d99627c87f58cab7ac6689164bf2fe7734c10c78005ed118e == sha256(“[10.08.2015] I’ve discovered that about 2% of the known darkweb is controlled by one organization.”)

Reading articles about deanonymization of hidden services by controlling certain nodes or conducting correlation attacks, I came to the idea that in certain cases it might be much easier to break anonymity. Simply by having the same vulnerabilities as in the “clearnet”, applications can expose sensitive information and let an attacker gather data from the system and deanonymize the target, with certain “darknet” specifics in the approach.

According to the results of the recent HyperionGray research scanning the darkweb with PunkSPIDER, the approximate number of live hidden services is about 7,000. The guys took live and not-so-hidden services and started to scan them for serious vulnerabilities. I started my own research with a slightly different approach: instead of searching for critical vulnerabilities like OSCI/SQLi, I took a closer look at conventionally low-risk information disclosure.

For that I’ve written a simple Python script which, when given a server/framework, enumerates accessible files and folders and may discover certain leaks of server information. To my surprise, a fair number of services actually had quite lame generic server authorization/configuration issues, up to a world-readable /phpinfo.php.
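A minimal sketch of such a script (the wordlists and function names are my assumptions, not the original code). Requests are routed through the local Tor SOCKS proxy so that .onion hosts resolve; this needs `requests[socks]` installed.

```python
import requests

# Per-server wordlists of paths that commonly leak information (assumed lists).
LEAK_PATHS = {
    "apache": ["/server-status", "/server-info", "/phpinfo.php", "/.htaccess"],
    "nginx":  ["/nginx_status", "/phpinfo.php"],
    "php":    ["/phpinfo.php", "/info.php"],
}

# Default Tor SOCKS proxy; socks5h makes the proxy do the DNS resolution.
TOR_PROXY = {"http": "socks5h://127.0.0.1:9050",
             "https": "socks5h://127.0.0.1:9050"}

def candidate_paths(server):
    """Return the leak-path wordlist for a given server/framework."""
    return LEAK_PATHS.get(server.lower(), [])

def probe(onion, server="apache"):
    """Probe a hidden service and return the paths that answered 200 OK."""
    found = []
    for path in candidate_paths(server):
        try:
            r = requests.get("http://%s%s" % (onion, path),
                             proxies=TOR_PROXY, timeout=30)
            if r.status_code == 200:
                found.append(path)
        except requests.RequestException:
            pass  # host down or circuit failure; skip this path
    return found
```

Running `probe("someonionaddress.onion")` against a misconfigured Apache host would return entries like `/server-status`.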

The most helpful and common fail pattern, however, was the default Apache pages such as /server-info and /server-status. Whereas the first one gives you a nice picture of the server information with current settings, modules and their configuration (and the IP address, of course), the second is more valuable in terms of current connections. In the given set of 7k+ live services, almost 500 (about 7%) appeared to be vulnerable. Further analysis showed that large-traffic applications are affected, too.

For one of the websites I noticed that it had several other hosts covering completely different kinds of subjects. The only thing they all had in common was those /server-status pages. Quickly gathering the references on those pages revealed more than 300 unique services with traffic of as much as 50+ GB per day. Interestingly enough, most of them were referenced from a HiddenWiki page, which also resided on the same server. A weaver! As it turned out later, it was a hidden hosting service, where anybody could pay a certain amount of BTC and rent it for his own dark intentions. Obviously, such disclosure makes it possible for a deanonymizer to list all the queries to a particular domain on the hosting server and view parameters with corresponding values for GET requests, including full paths to closed parts of the application.
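Gathering those references can be automated. A rough sketch of pulling the VHost/Request pairs out of a fetched status page (the column layout is my assumption, matching Apache's ExtendedStatus worker table, where VHost and Request are the last two cells of each row):

```python
import re

def extract_requests(status_html):
    """Pull (vhost, request) pairs out of an Apache /server-status page.

    Assumes ExtendedStatus output, where each worker row ends with the
    VHost and Request columns."""
    pairs = []
    for row in re.findall(r"<tr>(.*?)</tr>", status_html, re.S):
        cells = re.findall(r"<td[^>]*>(.*?)</td>", row, re.S)
        if len(cells) >= 2:
            vhost, request = cells[-2].strip(), cells[-1].strip()
            if request and request != "NULL":
                pairs.append((vhost, request))
    return pairs
```

Deduplicating the vhost column of the result is what yields the list of unique services co-hosted on the server.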

I was lucky again when my script warned me of an external IP address which accessed “vps.server.com”. If you’ve ever had a look at the access.log of your web server, you’ve surely noticed a lot of connections from all kinds of bots that scan the Internet for vulnerabilities. That was probably the first time in my life I was really thankful to them. It meant the following:

the clearnet service is also available on port 80

if I manage to access it, my watcher script can isolate it

One of the options to hit that is basically to try to scan the whole Internet on port 80. Sounds crazy? Hold on, check these projects first: ZMap and masscan!

What’s basically needed is to access a specific IP address with a certain marker which identifies this IP address uniquely, and to monitor such access on the /server-status of the target server. I assumed that probably the easiest way to do it is to use the following vector: http://xx.xx.xx.xx/xx.xx.xx.xx. The results didn’t make me wait too long:
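The marker trick above can be sketched roughly as follows (the helper names and the candidate-network parameter are my assumptions): each candidate IP is requested with its own address as the URL path, so the watcher only has to look for that path in the target's /server-status.

```python
import ipaddress
import requests

def marker_url(ip):
    """Build http://<ip>/<ip> — the path is a marker unique to that address,
    which will appear verbatim in the Request column of /server-status."""
    return "http://%s/%s" % (ip, ip)

def tag_candidates(network, timeout=3.0):
    """Send the marker request to every host in a candidate network.

    In practice the candidate list would come from a ZMap/masscan
    port-80 sweep rather than a single CIDR block."""
    for host in ipaddress.ip_network(network).hosts():
        try:
            requests.get(marker_url(str(host)), timeout=timeout)
        except requests.RequestException:
            pass  # unreachable hosts are expected; ignore them
```

A separate watcher keeps polling the hidden service's /server-status and reports as soon as one of the markers shows up, isolating the clearnet IP behind the onion address.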

Of course, this is not the only way to achieve that. The following scenario is even simpler: many clearnet hosts on the same server are used to redirect traffic to the darknet, and this also helps a lot to deanonymize the target. The approach is quite similar to the previous one but more universal, in that you don’t really need control over the status page. It is enough to parse the responses which return a 30x code and check for the presence of an “.onion” string in the “Location:” header:
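A small sketch of that check (function names are mine, not from the original tooling); the redirect itself is deliberately not followed so the raw Location header can be inspected:

```python
import requests

def is_onion_redirect(status, location):
    """True for a 30x response whose Location header points into .onion."""
    return 300 <= status < 400 and ".onion" in location

def check_host(host, timeout=5.0):
    """Fetch a clearnet host without following redirects and return the
    .onion Location target if it redirects into the darknet, else None."""
    try:
        r = requests.get("http://%s/" % host, timeout=timeout,
                         allow_redirects=False)
    except requests.RequestException:
        return None
    loc = r.headers.get("Location", "")
    return loc if is_onion_redirect(r.status_code, loc) else None
```

Run against the IPs from an Internet-wide port-80 sweep, this directly pairs clearnet addresses with the onion services they front for.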

For the laziest of researchers, Shodan might help, too:

Finally, a researcher can always find a vulnerability in one weak service and get access to the whole hosting server. Let’s say, I believe it’s possible ;)

Conclusion

The goal of my research was to show that often deanonymization of a hidden service (or even a network) can be done trivially by applying the same pentest approach as in the clearnet. The main difference here is that usually non-critical information disclosure plays a much more significant role than it does for “normal” web applications. To summarize, at least the following easy ways may let a researcher deanonymize a darknet service:

instant win (server-info, phpinfo, …)

status page access (x.x.x.x/x.x.x.x)

(un)expected redirect (30x clearnet to darknet)

app-level pwnage (missing patches, vulnerabilities in the code, default framework pages…)

P.S. If you’re interested in the topic, you may also want to check TheCtulhu’s blog and find decent instructions on configuring an nginx server to host a hidden service in a more secure way.