Recently we’ve been working on PCI-DSS compliance, and more precisely on the deployment of a file integrity monitoring (FIM) solution. For readers who are not security experts: file integrity monitoring tools keep track of the state of a set of (typically critical) files and raise an alert as soon as an unexpected change is made (to permissions, content, etc.). When such an alert is raised, the administrator should take a deeper look and make sure that nothing bad has happened. To be clear, file integrity monitoring does not prevent attacks; rather, it is a last line of defense, a tool that comes into play when all other defenses have failed.
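To make the idea concrete, here is a minimal sketch of how a FIM baseline works (a toy illustration, not how any particular product is implemented): record a content hash and the permission bits for each monitored file, then re-scan later and flag anything that differs.

```python
import hashlib
import os
from pathlib import Path

def snapshot(paths):
    """Record a baseline: (content hash, permission bits) per file."""
    baseline = {}
    for p in paths:
        digest = hashlib.sha256(Path(p).read_bytes()).hexdigest()
        baseline[p] = (digest, os.stat(p).st_mode)
    return baseline

def detect_changes(baseline):
    """Re-scan the monitored files and report anything that differs."""
    alerts = []
    for p, (digest, mode) in baseline.items():
        if not Path(p).exists():
            alerts.append((p, "deleted"))
            continue
        new_digest = hashlib.sha256(Path(p).read_bytes()).hexdigest()
        if new_digest != digest:
            alerts.append((p, "content changed"))
        elif os.stat(p).st_mode != mode:
            alerts.append((p, "permissions changed"))
    return alerts
```

In practice a real FIM tool also watches ownership, timestamps and alternate data streams, and protects the baseline itself from tampering.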
This being said, we had to face the challenging problem of identifying the most critical files to be monitored on Windows servers and workstations. Although it may seem straightforward, this is a delicate problem that has to be assessed before putting in place a file integrity solution.
Indeed, simply monitoring all files and folders has serious drawbacks: first, the amount of data to process would be so large as to be impractical to handle; second, monitoring so many files would produce a huge number of alerts and false positives, diverting attention from what really matters.
On the other hand, monitoring just a few files would not provide good protection and would likely be ineffective when trying to find out what was changed during an attack.
Therefore, it is up to the administrator to carefully decide which files and folders must be monitored. A bad choice may mean failing to detect a targeted attack that has modified critical files.
Example of file integrity monitoring
After a few days of research and several discussions, we came up with a list of files and folders that can arguably be considered critical (or security relevant) and thus need to be monitored. We decided to share it with the world so that anyone can add their two cents:
- All files and folders under C:\WINDOWS except the folders listed below, which basically contain log files (the reason is explained below), cache files and other unimportant files: NtServicePackUninstall, NtUninstall, assembly, CSC, DEBUG, HELP, I386, LogFiles, Minidump, Prefetch, Shelliconcache, SoftwareDistribution, system32\Catroot, system32\LogFiles, system32\NtmsData, system32\winevt\Logs, System32\wdi\LogFiles, system32\wbem, Temp, winsxs, rescache, serviceprofiles\networkservice\appdata\local\temp
- The following folders (no files and subfolders) under C:\WINDOWS: assembly, CSC, DEBUG, security, system32\NtmsData, Temp
- The following files in C:\: autoexec.bat, boot.ini, config.sys, io.sys, msdos.sys, ntbootdd.sys, ntdetect.sys, ntldr
- The following folders (no files and subfolders): C:\Documents and Settings, C:\Users, C:\System Volume Information
- The following folders (including files and subfolders) in C:\: Program Files, Program Files (x86)
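A policy like the one above (recursive roots plus excluded subtrees) can be encoded and checked mechanically. The sketch below shows one way to do it, with deliberately abbreviated path lists; the function name and structure are illustrative, not taken from any product.

```python
from pathlib import PureWindowsPath

# Abbreviated, illustrative version of the monitoring policy above.
RECURSIVE_ROOTS = [
    r"C:\WINDOWS",
    r"C:\Program Files",
    r"C:\Program Files (x86)",
]
EXCLUDED_SUBTREES = [
    r"C:\WINDOWS\Temp",
    r"C:\WINDOWS\Prefetch",
    r"C:\WINDOWS\winsxs",
]

def is_under(path, root):
    """True if `path` equals `root` or lies somewhere below it."""
    try:
        PureWindowsPath(path).relative_to(PureWindowsPath(root))
        return True
    except ValueError:
        return False

def should_monitor(path):
    """True if `path` falls under a monitored root and no excluded subtree."""
    if any(is_under(path, ex) for ex in EXCLUDED_SUBTREES):
        return False
    return any(is_under(path, root) for root in RECURSIVE_ROOTS)
```

Keeping the policy as data rather than hard-coded paths makes it easy to review and to adjust as new noisy folders are discovered.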
Finally, an expert eye may rightly point out that the Windows Registry is another important asset that needs to be monitored. That’s right! However, the files storing Registry keys appear to be locked (at least in some versions of Windows), so file integrity monitoring solutions cannot properly guarantee their integrity (more details on how to secure the Windows Registry are coming soon). In Elastic Detector, we address this issue by leveraging our unique approach based on VM cloning, detecting unexpected changes in the Windows Registry with no impact on performance and productivity. On clones, we can use the Windows API to analyze the entire Registry tree and all its keys without directly accessing the raw files.
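The clone-based analysis boils down to comparing two snapshots of the Registry taken at different times. On Windows the snapshots could be produced with the `winreg` module (key path → value); the diff itself is OS-neutral and can be sketched as follows (a simplified illustration under that assumption):

```python
def diff_registry(before, after):
    """Compare two {key_path: value} snapshots of the Registry tree
    and report keys that were added, removed or modified."""
    added = sorted(set(after) - set(before))
    removed = sorted(set(before) - set(after))
    modified = sorted(k for k in set(before) & set(after)
                      if before[k] != after[k])
    return {"added": added, "removed": removed, "modified": modified}
```

A new or modified value under an autostart key such as `...\Run`, for example, would show up immediately in such a diff.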
Last but not least, log files should also be monitored to make sure that no unauthorized changes have been made. Unfortunately, standard file integrity monitoring tools do not cope well with log files since, by nature, they change frequently. More precisely, when a log file has changed, a standard FIM tool cannot distinguish unauthorized behavior from normal behavior: it cannot tell whether the log has been tampered with (e.g. lines removed to cover an attack’s tracks) or merely updated (e.g. new lines appended).
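One simple way such an ad-hoc check could work (a sketch, not a description of any existing tool) is to checkpoint the log’s current length together with a hash of that prefix: later, if the first bytes up to the checkpoint still hash to the same value, the file has only grown by appends; if not, history has been altered.

```python
import hashlib
from pathlib import Path

def checkpoint(log_path):
    """Record (size, SHA-256 of the first `size` bytes) of the log."""
    data = Path(log_path).read_bytes()
    return len(data), hashlib.sha256(data).hexdigest()

def verify_append_only(log_path, size, digest):
    """True if everything up to the checkpoint is intact
    (new lines appended afterwards are allowed)."""
    with open(log_path, "rb") as f:
        prefix = f.read(size)
    return (len(prefix) == size and
            hashlib.sha256(prefix).hexdigest() == digest)
```

This distinguishes exactly the two cases above: appended lines leave the prefix untouched, while removed or edited lines break the prefix hash. Checkpoints would of course need to be refreshed periodically and stored out of the attacker’s reach.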
This being said, it is clear that ad-hoc and customized solutions are needed for monitoring the integrity of log files. We at SecludIT are ready for the next challenge: monitoring the integrity of log files without any impact on the performance of production servers.
You can try Elastic Detector for free. Give us your feedback if you think we took up the challenge 🙂