Saturday, March 28, 2015

SIEM Deployment - Collecting Logs from Linux Servers

Many companies run their business-critical systems on Linux servers, which are valued for their stability, performance and security, among other qualities. Collecting logs from Linux servers is therefore an important step in any log management project.

Log management on Linux and UNIX servers was thought through and addressed long before Microsoft finally took it seriously; as a result, in my experience the configuration is more straightforward and runs more stably.

There is, however, a list of items to follow carefully to make sure everything works. The list is harder to keep in mind than the equivalent Windows steps, so I lay it out below.

  1. Check auditd daemon configuration to see if auditing service works fine (/etc/audit/auditd.conf)
  2. Check audit event dispatcher configuration (/etc/audisp/audispd.conf)
  3. Configure audit event dispatcher syslog configuration (/etc/audisp/plugins.d/syslog.conf)
  4. Create audit rules by editing the /etc/audit/audit.rules file (more detail below)
  5. Configure the syslog daemon to redirect log messages to a collector server.
  6. Restart the daemons to activate the configurations.
I use the Red Hat family of Linux systems (RHEL, CentOS, Fedora, etc.) for the configuration examples in this article; the steps and commands apply to almost all Linux distributions with small changes.

There is not much to say about the first and second steps; they are routine checks to confirm the daemons are enabled and running, with fine tuning done if necessary.
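A quick way to run these checks from the shell (RHEL 6 style service commands):

```
[root@localhost ~]# service auditd status
[root@localhost ~]# chkconfig --list auditd
[root@localhost ~]# grep -v '^#' /etc/audisp/audispd.conf
```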
In the third step, the args parameter in the syslog.conf plugin file tells the dispatcher which syslog facility and priority audit events are sent with. To stay coherent with the configuration further below, I set it as follows:
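On a stock RHEL 6 system, /etc/audisp/plugins.d/syslog.conf looks roughly like this; the key change is flipping active to yes, and the args value shown (the default LOG_INFO priority) is an assumption you should adjust to match your own rsyslog rules:

```
active = yes
direction = out
path = builtin_syslog
type = builtin
args = LOG_INFO
format = string
```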


Then comes a very important step: configuring the audit.rules file, which is in effect your audit policy for the server. If you have reached this point, your company most probably already has one and your audit.rules file should not be empty. But in case you became interested in Linux servers purely for the sake of log management (like me), I suggest you first read and edit, then copy the /usr/share/doc/audit-x.y.z/stig.rules file as your audit.rules file. stig.rules is a really well-prepared document that guides you in writing your own rules, and honestly it is very good for a starter. In my case the configuration was applied like this:

[root@localhost etc]# vi /usr/share/doc/audit-2.3.7/stig.rules
[root@localhost etc]# cp /usr/share/doc/audit-2.3.7/stig.rules /etc/audit/audit.rules
cp: overwrite `/etc/audit/audit.rules'? y

As a best practice, I also suggest adding the lines below to your audit.rules file (for 64-bit systems in this example):

-a always,exit -F arch=b64 -S sethostname -S setdomainname -k HOSTNAME_CHANGED
-a always,exit -F arch=b64 -S kill -F a1=9 -k KILL9
-a always,exit -F arch=b64 -F subj_type!=ntpd_t -S settimeofday -k SYSTEM_TIME_CHANGED
-a always,exit -F arch=b64 -F subj_type!=ntpd_t -S adjtimex -k SYSTEM_TIME_CHANGED
-a always,exit -F arch=b64 -F subj_type!=ntpd_t -S clock_settime -k SYSTEM_TIME_CHANGED
-w /etc/localtime -p wa -k SYSTEM_TIME_CHANGED
-a always,exit -F arch=b64 -S mount -k DEVICE_MOUNTED
-a always,exit -F dir=/boot -F perm=wa -k SYSTEM_FILE_CHANGED
-a always,exit -F dir=/root -F perm=wa -k SYSTEM_FILE_CHANGED
-a always,exit -F dir=/etc -F perm=wa -k SYSTEM_FILE_CHANGED
-a always,exit -F dir=/bin -F perm=wa -k SYSTEM_FILE_CHANGED
-a always,exit -F dir=/sbin -F perm=wa -k SYSTEM_FILE_CHANGED
-a always,exit -F dir=/lib -F perm=wa -k SYSTEM_FILE_CHANGED
-a always,exit -F dir=/lib64 -F perm=wa -k SYSTEM_FILE_CHANGED
-a always,exit -F dir=/usr -F perm=wa -k SYSTEM_FILE_CHANGED
-a always,exit -F dir=/net -F perm=wa -k SYSTEM_FILE_CHANGED
-a always,exit -F dir=/sys -F perm=wa -k SYSTEM_FILE_CHANGED
-a always,exit -F dir=/cgroup -F perm=wa -k SYSTEM_FILE_CHANGED
-a always,exit -F dir=/selinux -F perm=wa -k SYSTEM_FILE_CHANGED
-a always,exit -F dir=/var/adm -F perm=wa -k SYSTEM_FILE_CHANGED
-a always,exit -F dir=/var/lib -F perm=wa -k SYSTEM_FILE_CHANGED
-a always,exit -F dir=/var/spool/cron -F perm=wa -k SYSTEM_FILE_CHANGED
-a always,exit -F dir=/var/spool/at -F perm=wa -k SYSTEM_FILE_CHANGED
-a always,exit -F dir=/var/spool/anacron -F perm=wa -k SYSTEM_FILE_CHANGED
-a always,exit -F path=/var/log/messages -F perm=wa -F subj_type!=syslogd_t -F subj_type!=logrotate_t -k LOG_ALTERED
-a always,exit -F path=/var/log/dmesg -F perm=wa -F subj_type!=syslogd_t -F subj_type!=logrotate_t -k LOG_ALTERED
-a always,exit -F path=/var/log/secure -F perm=wa -F subj_type!=syslogd_t -F subj_type!=logrotate_t -k LOG_ALTERED
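Before loading a rules file, it is worth a quick sanity check that every rule carries a -k key, since the SIEM (and ausearch) rely on those keys for searching. A minimal sketch, run against a hypothetical test file rather than the live /etc/audit/audit.rules:

```shell
# Write a small sample rules file (hypothetical path, for illustration).
rules=/tmp/audit.rules.test
cat > "$rules" <<'EOF'
-a always,exit -F arch=b64 -S sethostname -S setdomainname -k HOSTNAME_CHANGED
-w /etc/localtime -p wa -k SYSTEM_TIME_CHANGED
-a always,exit -F arch=b64 -S mount
EOF

# List rule lines that are missing a -k key; events from these rules
# would be hard to find on the SIEM side.
grep -E '^-[aw] ' "$rules" | grep -v -- ' -k ' > /tmp/missing_keys.txt
cat /tmp/missing_keys.txt
```

Here the mount rule is flagged because it has no key; in the list above it carries -k DEVICE_MOUNTED for exactly this reason.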

In the fifth step, we configure the syslog daemon. On Linux systems, the rsyslog service is responsible for reading events and writing them to specific log files. To decide which messages are logged where, edit the /etc/rsyslog.conf file with a text editor.
More specifically, the RULES section of rsyslog.conf should be edited like below:

#### RULES ####
# Log all kernel messages to the console.
# Logging much else clutters up the screen.
#kern.*                                                 /dev/console
# Log anything (except mail) of level info or higher.
# Don't log private authentication messages!
*.info;mail.none;authpriv.none;cron.none                /var/log/messages
# The authpriv file has restricted access.
authpriv.*                                              /var/log/secure
# Log all the mail messages in one place.
mail.*                                                  /var/log/maillog
# Log cron stuff
cron.*                                                  /var/log/cron
# Everybody gets emergency messages
*.emerg                                                 *
# Save news errors of level crit and higher in a special file.
uucp,news.crit                                          /var/log/spooler
# Save boot messages also to boot.log
local7.*                                                /var/log/boot.log
# Log Management: forward everything to the collector server over TCP
*.*                                                     @@CollectorServerIP

In the configuration above, the @ sign means logs are sent over UDP port 514, and @@ means TCP port 514. In order not to lose any logs, I configured it over TCP. A colleague recently told me that this can add significant load on systems where log volume is high, but I still believe the TCP method should be given a chance before switching to UDP, if that turns out to be inevitable.
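If the load concern applies, rsyslog can buffer the forwarding action with a disk-assisted queue so that log bursts or a collector outage do not block local logging. A sketch using rsyslog's legacy action-queue directives, placed just before the forwarding rule (the queue file name and sizes are assumptions to tune for your environment):

```
# In-memory queue that spills to disk when needed; setting a file
# name enables disk assistance, and the name itself is arbitrary.
$ActionQueueType LinkedList
$ActionQueueFileName fwdq
# Cap the on-disk buffer and keep queued messages across restarts.
$ActionQueueMaxDiskSpace 1g
$ActionQueueSaveOnShutdown on
# Retry forever instead of dropping messages while the collector is down.
$ActionResumeRetryCount -1
*.* @@CollectorServerIP
```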

You may also choose to send your logs encrypted, but that configuration is beyond the scope of this article.

As a final step, we restart the auditd and rsyslog services. At this point, log messages should start arriving at the syslog software installed on the collector server.
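In RHEL 6 style service commands, with a logger test message that should show up on the collector a few seconds later:

```
[root@localhost ~]# service auditd restart
[root@localhost ~]# service rsyslog restart
[root@localhost ~]# logger -p authpriv.info "SIEM forwarding test"
```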

Friday, March 6, 2015

SIEM Deployment - Collecting Logs from Windows Servers

An important step in SIEM deployments is to collect logs from servers on which your infrastructure, applications and middleware run.

Event collection from servers is the second step in a three-step SIEM deployment approach: the first step covers network and network security devices, and the last covers applications.

Microsoft provides a well-designed log collection infrastructure with two options left to administrators. Both methods use the WinRM protocol and are safer than collecting logs over RPC.

In the first method, the collector server connects to event sources and pulls the logs with the help of a domain user added to the Event Log Readers group. This method is known as “Collector Initiated Event Forwarding” in the Microsoft world and is easier to remember as the PULL method. It is the simpler of the two, but it does not scale well in large environments and can be inefficient, since the collector must regularly poll event sources whether or not they have logs to transmit.

In the second method, which I detail in this article, event sources push logs to a collector server over HTTP or HTTPS. This method is known as “Source Initiated Subscription” and is easier to remember as the PUSH method. It scales much better to large environments and is definitely more secure when HTTPS is used as the transport. It also requires less configuration when deployed through Group Policy. However, special attention must be paid to the collector server when logs are sent over HTTPS, especially the listener configuration, which is not well covered in Microsoft's documentation and requires solid system administration skills.


First of all, on each event source the WinRM service should be activated from the command line with the winrm quickconfig command.

In the Local Group Policy Editor, under Computer Configuration -> Administrative Templates -> Windows Components -> Event Forwarding, the “Configure the server address, refresh interval, and issuer certificate authority of a target Subscription Manager” setting should be configured as below:


At this point, you can choose either HTTP or HTTPS to transmit your logs to the collector server. The only difference on the event source side is the port number: 5985 for HTTP and 5986 for HTTPS, as shown below:

Server=http://Servername.Domain:5985/wsman/Subscription Manager/WEC,Refresh=60

Server=https://Servername.Domain:5986/wsman/Subscription Manager/WEC,Refresh=60

The refresh interval is set to 60 seconds in these examples.

Once the event forwarding configuration is done, the group policy must be updated with the gpupdate /force command.

In the Local Security Policy console, configure the audit policy settings under Security Settings -> Local Policies -> Audit Policy. It is recommended to enable logging of both successful and failed attempts for all policy items except “Audit object access”, which can generate too many logs if both are enabled. I recommend configuring that setting only for failed attempts at first, and deciding later according to the criticality of the server.


On the collector server, the Windows Event Collector service and the WinRM service should first be enabled from the command line, using the wecutil qc and winrm qc commands respectively.

After this step, launch Event Viewer and create a new subscription by right-clicking the Subscriptions item and selecting Create Subscription, or by using the menu on the right.

Once the Create Subscription window opens, configure the settings for a source-initiated subscription as below. These settings largely depend on your own needs, but there are lessons from the field worth following, such as collecting logs into the “Hardware Events” destination log: events from this log are parsed correctly by ArcSight and need no further effort.

Subscription name: Event collection
Destination log: Hardware Events
Subscription type: Source computer initiated

Configuration ends here. If nothing, such as a firewall rule, is blocking the traffic, you should see the forwarded logs in your target destination log. The best way to check for problems is the "Runtime Status" option in the menu on the right.
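The same checks can also be run from an elevated command prompt on the collector: wecutil gs dumps the subscription settings and wecutil gr shows the per-source runtime status (the subscription name here comes from the example above):

```
wecutil gs "Event collection"
wecutil gr "Event collection"
```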

For more details you can read the Technet article below: