Server Is Compromised: What Do I Do?

In the vast majority of cases, servers tend to get broken into for the following reasons:

- Vulnerabilities in the software environment (misconfigured firewalls and antivirus tools, or software that isn't updated on time)
- Mistakes in server configuration
- Weak passwords
- Human factors (access given to a third party, leaked or stolen passwords, inside jobs)

As a systems administrator, it's your job to keep tabs on your server. To address the first point, keep your antivirus definitions current and install the latest service packs for your software; beyond that, get creative with passwords and change them regularly. Do everything you can to address these concerns before a breach happens.



But what do we do once we know we have a breach?



Your first course of action should be to check whether your backups are still accessible. You'll need to figure out when the system was first compromised and whether a backup predating the breach exists; if it does, restore the system from it. However, it's quite possible that the backup is too old or inaccessible, so let's explore your options for that case.



First, you need to figure out the source of the breach and what's been compromised, as that will dictate how you deal with the problem. If the issue stems from a rogue script or binary, look for an unfamiliar program in your list of running processes with the ps auxwf command. The process in question will usually be putting the system under heavier load than usual, which is a giveaway. Once you've found it, learn as much as you can about it and what it's doing to your system. The lsof -p PID command is useful here, as it shows every file the process has open. To follow up, strace -p PID shows exactly which system calls the process is making at any given moment.
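As a sketch, these steps can be chained on a suspect PID. Here the stand-in PID is the current shell's own ($$), a harmless placeholder for whatever unfamiliar process ps actually turns up:

```shell
# Stand-in PID: the current shell's own. Replace with the real suspect PID.
PID=$$

# Full process tree with owners and CPU/memory usage (top lines only):
ps auxwf | head -n 5

# Every file the process holds open (skip silently if lsof isn't installed):
command -v lsof >/dev/null && lsof -p "$PID" | head -n 5 || true

# Live system-call trace -- attaches until you hit Ctrl-C, so run it manually:
# strace -p "$PID"
```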

You can also run iftop, a utility that monitors server traffic in real time, to track processes that might be using your server as a source of DDoS attacks. If you find such processes, use the tcpdump utility to see the traffic and where it's headed, and check open network connections with the lsof -i or netstat -tulpan commands. Once you've tracked down the culprit processes and binaries, stop the infected services and kill the offending processes with the kill -9 PID command.
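To make that last step concrete, here's a minimal sketch of terminating a rogue process. A throwaway sleep stands in for the malicious binary, and a polite SIGTERM is tried before falling back to -9 (SIGKILL), which gives the process no chance to clean up:

```shell
# Throwaway background process standing in for the rogue binary:
sleep 300 &
BADPID=$!

# In a real incident you'd have found the PID via ps, lsof -i, or netstat.
# SIGTERM first; SIGKILL (-9) only if the process ignores the polite signal:
kill "$BADPID" 2>/dev/null || kill -9 "$BADPID" 2>/dev/null
```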

Debian and Ubuntu distros include a utility called debsums that checks the MD5 hashes of installed packages and configuration files against the package database, flagging files that have been tampered with. On Red Hat and related distros, rpm -Va does the same thing.
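A quick sketch of both checks, each guarded so it only runs where its tool actually exists:

```shell
# Debian/Ubuntu: report files whose MD5 sums differ from the package database
# (-s keeps the output to changed files and errors only):
command -v debsums >/dev/null && debsums -s || true

# Red Hat family: verify size, checksum, permissions, etc. of all packages:
command -v rpm >/dev/null && rpm -Va | head -n 20 || true
```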



Just as important is checking your logs for recent access attempts to your server over SSH, FTP, email, etc. Don't forget the files with bash command history, as intruders often forget to clean up after themselves. It's good to pay special attention to /var/log: messages, secure, audit.log, yum.log, apt.log, lastlog, auth.log, and syslog.
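For example, a few quick passes over the usual suspects (paths assume a Debian-family layout; the Red Hat family keeps the SSH log in /var/log/secure instead):

```shell
# Recent failed SSH logins:
[ -f /var/log/auth.log ] && grep "Failed password" /var/log/auth.log | tail -n 5 || true

# Last login time for every account:
command -v lastlog >/dev/null && lastlog | head -n 5 || true

# Shell history files that intruders often forget to wipe:
ls -la /root/.bash_history /home/*/.bash_history 2>/dev/null || true
```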



The nature of the breach will dictate how you deal with it. For instance, if you find that the intrusion happened over SSH, your first course of action should be to go through your /root/.ssh/authorized_keys file, change every password, and issue new keys. If, on the other hand, the problem is with your email server, the intruder might be using it to send spam. Mail marked as spam tends to get your server blacklisted and to accumulate in its queue. You can check that queue to see whether your server is sending spam with the mailq command, with exim -bpc if you're running an Exim server, or with postqueue -p if you're using Postfix.
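A sketch of both checks, with each mail-queue command guarded so only the MTA actually installed gets queried:

```shell
# Unfamiliar public keys here mean the SSH door is still open:
cat /root/.ssh/authorized_keys 2>/dev/null || true

# Exim: print the number of messages sitting in the queue:
command -v exim >/dev/null && exim -bpc || true

# Postfix: list the queue:
command -v postqueue >/dev/null && postqueue -p || true

# Generic sendmail-compatible queue listing:
command -v mailq >/dev/null && mailq | tail -n 5 || true
```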

Sometimes, the goal of the breach is not to carry anything out immediately but to schedule tasks for later to avoid detection. So it's also important to check your cron scheduler and make sure no processes are set to run at times you didn't designate. It's equally important to check for recently changed files with the find utility, using its -mtime option.
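For instance (the 7-day window and the /etc path are arbitrary choices for illustration; widen them as needed):

```shell
# Cron entries for every user -- anything you didn't schedule is a red flag
# (listing other users' crontabs needs root):
for u in $(cut -d: -f1 /etc/passwd); do crontab -l -u "$u" 2>/dev/null; done

# Also check the system-wide cron locations:
ls /etc/cron.d /etc/cron.daily 2>/dev/null || true

# Files under /etc modified in the last 7 days:
find /etc -type f -mtime -7 2>/dev/null | head -n 10
```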

It's a great idea to delete and uninstall any services your server no longer uses, as dormant services can become points of vulnerability. Check everything that starts automatically, too. You can do this using the following:

systemd:

systemctl list-unit-files | grep enabled

SysV init:

service --status-all | grep +

Upstart:

initctl list | awk '{print $1}' | xargs initctl show-config

Your firewall also needs to be configured so that your server only exposes the ports you've designated. For instance, if you're running MySQL for just one local server, there's no need to have port 3306 open to the outside, as it just creates another window of vulnerability.
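As a sketch (assuming MySQL on its default port 3306 should only answer local clients; the rules need root, and your distro may prefer ufw or firewalld over raw iptables):

```shell
# Drop TCP connections to 3306 arriving on anything but the loopback interface:
# iptables -A INPUT -p tcp --dport 3306 ! -i lo -j DROP

# Ubuntu's ufw equivalent:
# ufw deny 3306/tcp

# Then confirm what is actually listening, and on which addresses:
ss -tlnp 2>/dev/null | head -n 10 || true
```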

Ultimately, there's no substitute for being prepared. But even after a breach, it still helps to look into isolation mechanisms like SELinux, AppArmor, chroot, Docker, LXC, or systemd-nspawn. These are great because they can stop an intrusion that compromises a single process from reaching the rest of your server. Antivirus definitions are also key, as mentioned before, so make sure you have something like maldet, ClamAV, or AI-Bolit installed. If you aren't already, run virus scans regularly: at least a few times a week, if not daily, along with the rest of your scheduled tasks. Finally, nikto is a great utility to have, as it scans web servers over http/https and checks ports, proxy servers, and SSL.
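As an illustration of the scheduling, a cron entry along these lines (hypothetical path and schedule; assumes ClamAV is installed) would run a nightly scan:

```shell
# Hypothetical /etc/cron.d/clamscan entry: nightly recursive scan at 03:00,
# reporting infected files only and skipping pseudo-filesystems:
# 0 3 * * * root clamscan -ri / --exclude-dir=^/sys --exclude-dir=^/proc >> /var/log/clamscan.log 2>&1
```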



That about wraps up our article! If you have questions, reach out to us on Twitter and Facebook.



-Till next time!

