
Wednesday, July 1, 2015

SIEM Deployment - Installing HP Arcsight Software Connectors on Linux

In the HP ArcSight solution architecture, one of the components that adds the most value is the Smart Connector. With the several functions they provide, Smart Connectors really help differentiate HP ArcSight’s SIEM solution from the others.

So what exactly are ArcSight Smart Connectors? In a three-layered SIEM architecture, ArcSight Smart Connectors constitute the second layer, sitting between the log processing systems (ArcSight Logger or ArcSight ESM) and the source systems generating logs according to defined audit policies.

From a technical perspective, ArcSight Smart Connectors are Java applications which receive or fetch logs from one defined log source; that source can be several devices sending their logs in syslog format over the same protocol and port (e.g. UDP 514), or an application writing its logs to a flat file. ArcSight Smart Connectors start with a minimum memory size of 256 MB, and that memory can be raised up to 1024 MB by configuring the agent.properties file, along with other connector properties that can be tuned to your specific needs.

One physical server can host up to 8 connector processes, meaning that you can collect logs from 8 different source groups as long as your server supports that much capacity.


Below, you can find details about the basic installation of an HP ArcSight Smart Connector on a CentOS Linux server for collecting syslog messages.

REQUIREMENTS / PREREQUISITES

1. A RHEL or CentOS Linux 6.X Server installed.
2. Root or sudo rights for connector user.
3. Connector binaries downloaded. (Download the correct version for your OS, x86 or x64 !!)
4. Connector destinations (Logger and/or ESM) installed and working.
5. Define the protocol and port on which you will listen for incoming logs.
   Choose port numbers above 1024 if you are installing with a non-root user, as non-root users are not allowed to listen on ports below 1024 (a quick availability check is shown right after this list).
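
As a quick sanity check for item 5, you can verify that the chosen port is not already taken before installing. The port numbers below are only examples; substitute the ones you actually plan to use:

# Check whether anything is already listening on the syslog port you chose
netstat -ulnp | grep ':514 '     # privileged UDP 514 (root install)
netstat -ulnp | grep ':5514 '    # example unprivileged port for a non-root install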

INSTALLATION

1. Create the installation directory under /opt. In this example it is /opt/arcsight/connectors .
2. Create a receiver on the Logger for the connector to connect to.
3. Run the connector binary you previously downloaded (from the /home/arcsight directory in my installation).
# ./ArcSight-7.1.1.7348.0-Connector-Linux64.bin -i console
4. Install the connector to run standalone or as a service.
INSTALLATION_PATH/current/bin/arcsight connectors → Run standalone
INSTALLATION_PATH/current/bin/arcsight agentsvc -i -u arcsight -sn syslog_unix → Run as a service under the arcsight user, with the service name arc_syslog_unix
5. Check events on the logger.
6. Set agent.properties parameters (Optional)
7. Set agent.wrapper.conf parameters (Optional)
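
For steps 6 and 7, the most common change is raising the connector's Java heap size. A minimal sketch, assuming the default file layout of a software connector (the path below may differ in your installation):

# INSTALLATION_PATH/current/user/agent/agent.wrapper.conf  (path assumed, adjust to your install)
# Initial and maximum Java heap sizes in MB (raising the maximum towards 1024 MB)
wrapper.java.initmemory=256
wrapper.java.maxmemory=1024

After changing these values, restart the connector process or service so that they take effect.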

Saturday, June 27, 2015

SIEM Planning - Capacity Planning and Sizing


SIEM projects are well known to be demanding and greedy when it comes to resources, and your CIO/CISO will want to hear about the direct (software licensing, server investment, etc.) and indirect (archive storage, support) costs for at least the following 3 years, in addition to the benefits the project will provide.

In this article I will of course give the basic formulas for sizing. The most valuable information, however, is the real-life experience gained from a live operation: the values you actually observe and how they compare to these benchmarks.

The first unit, which constitutes the base for all our calculations, is the Events per Second (EPS) value that each source system generates. EPS depends greatly on two factors: the audit policy or rules applied on the source system, and the function of the system. A server with “Object Access” audit rules enabled and web server functionality configured will of course not generate the same number of logs as a standard server. With default configurations, the Windows family of servers also tends to generate far more logs than Linux and UNIX servers.

EPS estimations can be reached using links 1, 2 and 3.

Having calculated the EPS value for each source asset group, the next step is to calculate the Events per Day (EPD) value.

EPD = ∑ EPS × 86,400

Once the EPD value is calculated, we have to decide on an average log message size to know how much storage we will need each day. Log sources generate messages ranging from about 200 bytes on network and infrastructure devices to 10 kilobytes or more on the application and database side. The syslog standard (RFC 5424) recommends that implementations support messages of up to 2 kilobytes. In light of this, it is wise and advisable to assume an average raw log message size of 500 bytes.

With the average raw log message size set to 500 bytes, the daily raw log volume in GB is calculated as follows:

Daily Raw Log Size (GB) = EPD × 500 / 1024³

Log management appliances make some changes to the log messages to render them understandable and meaningful. This operation is called “normalization”, and it increases the log size by an amount that depends on the solution you use. In my personal experience with HP ArcSight, normalization increased the log size by about 90% to 100%; other people have seen increases of up to 200%. As a result, we obtain the following formula for the daily normalized log size:

Daily Normalized Log Size = Daily Raw Log Size * 2

The calculated value still does not represent the daily storage requirement for log management systems. Many vendors have come up with proprietary compression solutions and claim they compress logs 10 times (10:1), which is quite idealistic. It is, however, safe to use a ratio of 8:1 for calculations. So the formula becomes:

Daily Storage Requirement = Daily Normalized Log Size / 8

The annual storage need would basically be 365 times the Daily Storage Requirement, if you want your calculations to be on the safe side. Nevertheless, EPS numbers seriously fall during weekends and vacations. Watch how much your average EPS numbers decrease in such periods and do your own calculations for your annual needs.

Annual Storage Requirement = Daily Storage Requirement * 365
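
Putting the formulas above together, here is a small sketch of the whole calculation as a shell script. The 5,000 EPS figure is purely illustrative; plug in your own total.

#!/bin/bash
# Illustrative SIEM storage sizing, following the formulas above (assumed figures)
EPS=5000                                    # total average events per second across all sources (example)
EPD=$((EPS * 86400))                        # events per day
DAILY_RAW_GB=$(echo "$EPD * 500 / 1024^3" | bc -l)      # 500-byte average raw event
DAILY_NORM_GB=$(echo "$DAILY_RAW_GB * 2" | bc -l)       # ~100% normalization overhead
DAILY_STORED_GB=$(echo "$DAILY_NORM_GB / 8" | bc -l)    # 8:1 compression
ANNUAL_STORED_GB=$(echo "$DAILY_STORED_GB * 365" | bc -l)
printf "EPD: %d\nDaily raw: %.1f GB\nDaily stored: %.1f GB\nAnnual stored: %.1f GB\n" \
  "$EPD" "$DAILY_RAW_GB" "$DAILY_STORED_GB" "$ANNUAL_STORED_GB"

For 5,000 EPS this gives roughly 200 GB of raw logs per day, about 50 GB of stored (normalized and compressed) logs per day, and around 18 TB per year before any weekend or vacation adjustment.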

The last important point when planning future storage investments is the retention period. Two factors are decisive in defining the retention period: compliance requirements and security requirements.

Compliance requirements only concern log management systems (in HP's case, the Logger), and there is not much to decide really: whatever the legislation obliges, you have to configure.

For security needs, which are addressed by HP's ESM system, the decision is yours. I have seen many decision makers trying to keep themselves on the very safe side and setting retention periods unnecessarily long. According to Mandiant, the median number of days attackers were present on a victim network before being discovered was 205 days in 2014, down from 229 days in 2013 and 243 days in 2012. This brings me to the conclusion that the retention period for security alert creation, monitoring, trending and forensics should be at least 1 year and not longer than 3 years. According to the same Mandiant study, “The longest time an attacker was present before being detected in 2013 was six years and three months.” Last but not least, the retention period of course also depends on the sector of activity, defense being the longest and strictest, followed by financial institutions.

A rough estimation about Storage IOPS values can be calculated with the following formulas:

Storage IOPS Needed (Direct Attached Storage) = EPS * 1.2


Storage IOPS Needed (SAN) = EPS * 2.5

Wednesday, June 10, 2015

SIEM Deployment - Windows Local Security Policy (Audit Policy) Configuration

In my previous blog article on collecting logs from Windows Servers, I missed a control point which can be very important while getting results from your SEM installation.

Let me make it clearer. While trying to write a correlation rule for an event in which the Windows Server audit log configuration is changed, we first found the event ID for that action. We then noticed that although the main audit categories were configured correctly to log both successful and failed attempts, the source system actually did not generate any logs. You will find the reason why below.

The way auditing rules are defined changed drastically from Windows Server 2003 to Windows Server 2008 and later. In Windows Server 2008 you have the nine main audit policy categories, each with success and failure options, as the scheme below shows.


In addition to the main categories, at the bottom of the same menu there is another setting called "Advanced Audit Policy Configuration". Under that option the main categories are broken down into 53 subcategories, which provides much finer granularity (still not comparable with Linux, of course).


The important thing is to know which policy takes precedence over the other: the general policy or the more detailed one? By default, the main categories take precedence over the subcategories, even if you have configured both. How this default behavior is changed, and how it should be configured to avoid complications, can be found here. The procedure differs for domain computers and non-domain computers.


The best and safest approach is to configure the audit policy using the detailed subcategories and to force their priority over the general policy.
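
In practice the override is controlled by the security option "Audit: Force audit policy subcategory settings (Windows Vista or later) to override audit policy category settings" under Local Policies > Security Options (the exact wording may vary slightly between Windows versions), and the effective per-subcategory policy can be checked from an elevated command prompt:

REM List every audit subcategory and its current Success/Failure setting
auditpol /get /category:*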

Once it is decided that the subcategories are going to be used for auditing, it is important to know which subcategories should be chosen, and with which actions, in order to get the right amount of events. The categories concerning object access may generate more logs than you can store and process, and the volumes become significant when you have a considerable number of servers to monitor. Once again, the top-down approach to designing SEM systems shows the correct way: only generate the logs that you are actually going to use for correlation rules or compliance.

After a careful analysis of events and documentation, I ended up creating the policy below for my installations. I believe this policy covers most needs, but if you consider using it you had better spend some time adjusting it to your own requirements.

Finally, the policy differs slightly between Active Directory Domain Controllers and other servers, as the DS Access category only concerns Domain Controllers. On a non-DC server, all the subcategories under the DS Access category should be set to "No Auditing".

SYSTEM AUDIT POLICY SETTINGS
Category/Subcategory                       Suggested Settings
System
  Security System Extension                Success and Failure
  System Integrity                         Success and Failure
  IPsec Driver                             No Auditing
  Other System Events                      Failure
  Security State Change                    Success and Failure
Logon/Logoff
  Logon                                    Success and Failure
  Logoff                                   Success and Failure
  Account Lockout                          Success and Failure
  IPsec Main Mode                          No Auditing
  IPsec Quick Mode                         No Auditing
  IPsec Extended Mode                      No Auditing
  Special Logon                            Success and Failure
  Other Logon/Logoff Events                Success and Failure
  Network Policy Server                    Success and Failure
Object Access
  File System                              Success and Failure
  Registry                                 Success and Failure
  Kernel Object                            Success and Failure
  SAM                                      No Auditing
  Certification Services                   Success and Failure
  Application Generated                    Success and Failure
  Handle Manipulation                      No Auditing
  File Share                               Success and Failure
  Filtering Platform Packet Drop           No Auditing
  Filtering Platform Connection            No Auditing
  Other Object Access Events               No Auditing
  Detailed File Share                      No Auditing
Privilege Use
  Sensitive Privilege Use                  No Auditing
  Non Sensitive Privilege Use              No Auditing
  Other Privilege Use Events               No Auditing
Detailed Tracking
  Process Termination                      Success and Failure
  DPAPI Activity                           No Auditing
  RPC Events                               Success and Failure
  Process Creation                         Success and Failure
Policy Change
  Audit Policy Change                      Success and Failure
  Authentication Policy Change             Success and Failure
  Authorization Policy Change              Success and Failure
  MPSSVC Rule-Level Policy Change          No Auditing
  Filtering Platform Policy Change         No Auditing
  Other Policy Change Events               Failure
Account Management
  User Account Management                  Success and Failure
  Computer Account Management              Success and Failure
  Security Group Management                Success and Failure
  Distribution Group Management            Success and Failure
  Application Group Management             Success and Failure
  Other Account Management Events          Success and Failure
DS Access
  Directory Service Changes                Success and Failure
  Directory Service Replication            No Auditing
  Detailed Directory Service Replication   No Auditing
  Directory Service Access                 Success and Failure
Account Logon
  Kerberos Service Ticket Operations       Success and Failure
  Other Account Logon Events               Success and Failure
  Kerberos Authentication Service          Success and Failure
  Credential Validation                    Success and Failure
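
If you prefer scripting this policy rather than clicking through the GUI, it can also be applied with the built-in auditpol tool. A short sketch; the subcategory names are the English defaults and the export path is only an example:

REM Apply a few of the settings from the table above
auditpol /set /subcategory:"Logon" /success:enable /failure:enable
auditpol /set /subcategory:"Security Group Management" /success:enable /failure:enable
auditpol /set /subcategory:"IPsec Driver" /success:disable /failure:disable
REM Export the resulting policy so it can be reviewed or reused on other servers
auditpol /backup /file:C:\audit-policy.csv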

Sunday, June 7, 2015

SIEM Deployment - Creating Logs on Linux Servers with audit.rules

As I mentioned in several other blog articles, your Security Event Management infrastructure is only as effective as your source auditing capabilities. If you are not generating the necessary logs, containing useful information in an understandable and meaningful structure, then no matter how correctly you deploy your log management product, you will end up failing.

That is actually why I am spending so much time (and you should too) examining each and every source system and writing about them here in several articles. My first articles covered the bare basics you should follow to see your logs on the log management platform. In this new series of articles, I will cover which events, as a minimum, we should care about and how to log them. Let’s get started!

In Linux systems, which events get logged is controlled by the audit.rules file, usually located under the /etc/audit/ directory. The scheme below shows how the audit mechanism works in Linux.


The good point of Linux auditing compared to Windows Servers is that you can audit pretty much everything and customize the logs to your needs. You can write audit rules for any process or command you want, specifically audit the actions of a particular user, or audit only a specific action (write, read, execute, append). The key names that you can attach to your log messages can greatly simplify your job when you write correlation rules on your SEM engine.

How an audit rule is written is explained in the auditctl man page. Below I give a template which covers most general situations. You should make sure that you cover all your critical processes by adding audit rules for them.

An important point about using this file is to adapt it to your own systems. One of the first things you should do is set the arch= parameter according to whether you are using a 32-bit (b32) or a 64-bit (b64) system. Some files and commands change between versions; for example, the faillog file, which keeps failed login attempts in Red Hat Enterprise Linux 5, no longer exists in later versions, and you should configure the pam_tally process and files instead. Also, please change the key names in the rules (the text after the -k parameter) according to your needs.

Finally, there are two parameters that need to be used with caution. Do use the "-e 2" parameter at the end of the file to lock the configuration, so the rules cannot be tampered with (without logging) until a reboot. The second parameter, "-f 2", is more likely to be used in military/defense environments: it makes the system halt if auditing fails, so it should be used with extreme caution.

# First rule - delete all
-D

# Increase the buffers to survive stress events.
# Make this bigger for busy systems
-b 8096

# Feel free to add below this line. See auditctl man page
#Capture all failures to access on critical elements
-a exit,always -F arch=b64 -S open -F dir=/etc -F success=0 -k CriticalElementFailures
-a exit,always -F arch=b64 -S open -F dir=/bin -F success=0 -k CriticalElementFailures
-a exit,always -F arch=b64 -S open -F dir=/sbin -F success=0 -k CriticalElementFailures
-a exit,always -F arch=b64 -S open -F dir=/usr/bin -F success=0 -k CriticalElementFailures
-a exit,always -F arch=b64 -S open -F dir=/usr/sbin -F success=0 -k CriticalElementFailures
-a exit,always -F arch=b64 -S open -F dir=/var -F success=0 -k CriticalElementFailures
-a exit,always -F arch=b64 -S open -F dir=/home -F success=0 -k CriticalElementFailures

#Capture all successful deletions on critical elements
-a exit,always -F arch=b64 -S unlinkat -F success=1 -F dir=/etc -k CriticalElementDeletions
-a exit,always -F arch=b64 -S unlinkat -F success=1 -F dir=/bin -k CriticalElementDeletions
-a exit,always -F arch=b64 -S unlinkat -F success=1 -F dir=/sbin -k CriticalElementDeletions
-a exit,always -F arch=b64 -S unlinkat -F success=1 -F dir=/usr/bin -k CriticalElementDeletions
-a exit,always -F arch=b64 -S unlinkat -F success=1 -F dir=/usr/sbin -k CriticalElementDeletions
-a exit,always -F arch=b64 -S unlinkat -F success=1 -F dir=/var -k CriticalElementDeletions

#Capture all successful modification of owner or permissions on critical elements
-a exit,always -F arch=b64 -S fchmodat -S fchownat -F dir=/etc -F success=1 -k CriticalElementModifications
-a exit,always -F arch=b64 -S fchmodat -S fchownat -F dir=/bin -F success=1 -k CriticalElementModifications
-a exit,always -F arch=b64 -S fchmodat -S fchownat -F dir=/sbin -F success=1 -k CriticalElementModifications
-a exit,always -F arch=b64 -S fchmodat -S fchownat -F dir=/usr/bin -F success=1 -k CriticalElementModifications
-a exit,always -F arch=b64 -S fchmodat -S fchownat -F dir=/usr/sbin -F success=1 -k CriticalElementModifications
-a exit,always -F arch=b64 -S fchmodat -S fchownat -F dir=/var -F success=1 -k CriticalElementModifications
-a exit,always -F arch=b64 -S fchmodat -S fchownat -F dir=/home -F success=1 -k CriticalElementModifications
#Capture all successful modifications of content
-a exit,always -F arch=b64 -S pwrite64 -S write -S writev -S pwritev -F dir=/etc -F success=1 -k CriticalElementModifications
-a exit,always -F arch=b64 -S pwrite64 -S write -S writev -S pwritev -F dir=/bin -F success=1 -k CriticalElementModifications
-a exit,always -F arch=b64 -S pwrite64 -S write -S writev -S pwritev -F dir=/sbin -F success=1 -k CriticalElementModifications
-a exit,always -F arch=b64 -S pwrite64 -S write -S writev -S pwritev -F dir=/usr/bin -F success=1 -k CriticalElementModifications
-a exit,always -F arch=b64 -S pwrite64 -S write -S writev -S pwritev -F dir=/usr/sbin -F success=1 -k CriticalElementModifications
-a exit,always -F arch=b64 -S pwrite64 -S write -S writev -S pwritev -F dir=/var -F success=1 -k CriticalElementModifications
-a exit,always -F arch=b64 -S pwrite64 -S write -S writev -S pwritev -F dir=/home -F success=1 -k CriticalElementModifications

#Capture all successful creations
-a exit,always -F arch=b64 -S creat -F dir=/etc -F success=1 -k CriticalElementCreations
-a exit,always -F arch=b64 -S creat -F dir=/bin -F success=1 -k CriticalElementCreations
-a exit,always -F arch=b64 -S creat -F dir=/sbin -F success=1 -k CriticalElementCreations
-a exit,always -F arch=b64 -S creat -F dir=/usr/bin -F success=1 -k CriticalElementCreations
-a exit,always -F arch=b64 -S creat -F dir=/usr/sbin -F success=1 -k CriticalElementCreations
-a exit,always -F arch=b64 -S creat -F dir=/var -F success=1 -k CriticalElementCreations

#Capture all successful reads (only for High-Impact Systems)
-a exit,always -F arch=b64 -S open -F dir=/etc -F success=1 -k CriticalElementReads
-a exit,always -F arch=b64 -S open -F dir=/bin -F success=1 -k CriticalElementReads
-a exit,always -F arch=b64 -S open -F dir=/sbin -F success=1 -k CriticalElementReads
-a exit,always -F arch=b64 -S open -F dir=/usr/bin -F success=1 -k CriticalElementReads
-a exit,always -F arch=b64 -S open -F dir=/usr/sbin -F success=1 -k CriticalElementReads
-a exit,always -F arch=b64 -S open -F dir=/var -F success=1 -k CriticalElementReads

#Monitor for changes to shadow file (use of passwd command)
-w /usr/bin/passwd -p x
-w /etc/passwd -p ra
-w /etc/shadow -p ra

#Monitor for use of process ID change (switching accounts) applications
-w /bin/su -p x -k PrivilegeEscalation
-w /usr/bin/sudo -p x -k PrivilegeEscalation
-w /etc/sudoers -p rw -k PrivilegeEscalation

#Monitor for use of tools to change group identifiers
-w /usr/sbin/groupadd -p x -k GroupModification
-w /usr/sbin/groupmod -p x -k GroupModification
-w /usr/sbin/useradd -p x -k UserModification
-w /usr/sbin/usermod -p x -k UserModification

#Ensure audit log file modifications are logged.
-a exit,always -F arch=b64 -S unlink -S unlinkat -F dir=/var/log/audit -k AuditLogRemoval

# Monitor for use of audit management tools
-w /sbin/auditctl -p x -k AuditModification
-w /sbin/auditd -p x -k AuditModification

# Ensure critical apps are monitored.  List will vary by mission.
-a exit,always -F arch=b64 -F path=/sbin/init -k CriticalAppMonitoring
-a exit,always -F arch=b64 -F path=/usr/bin/Xorg -k CriticalAppMonitoring
-a exit,always -F arch=b64 -F path=/usr/sbin/sshd -k CriticalAppMonitoring
-a exit,always -F arch=b64 -F path=/sbin/rsyslogd -k CriticalAppMonitoring

#  Ensure attribute changes are audited
-a exit,always -F arch=b64 -S chmod -S chown -S fchmod -S fchown -S setuid -S setreuid -S getxattr -S setxattr -k AttributeChanges
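
A minimal sketch of how to load the rules above and confirm they are producing events, assuming they were saved as /etc/audit/audit.rules (note that reloading with auditctl will no longer work once the configuration has been locked with -e 2):

# Restart auditing so the new rules file is read (RHEL/CentOS 6 style)
service auditd restart
# Or load the rules file directly into the running daemon
auditctl -R /etc/audit/audit.rules
# List the rules currently active in the kernel
auditctl -l
# Search today's events for one of the key names defined above
ausearch -k PrivilegeEscalation --start today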

Saturday, May 23, 2015

Considerations About a Successful SIEM and Log Management Project

You may spend a lot of effort building up a SIEM and Log Management infrastructure composed of source systems, connectors, loggers, rule creation and correlation engines and management systems, and then see very little valuable output... if you do not pay attention to the first thing you should actually do in the first place: define a proper auditing policy.

A recent and very successful approach to building Log Management and SIEM capabilities actually consists of an inverse installation process. Some security service providers come and collaborate with your teams to apply defined risk scenarios to your infrastructure, to see whether your infrastructure components generate the log messages that should alert you that something unusual and odd is going on. It is at this very step that your SIEM team learns which types of events they should be collecting among a big pile of others, which in many cases make up most of your log storage without actually providing any value.

Such an approach may create huge differences in output and may trigger changes in your infrastructure. I know companies which changed some of their components just because they did not provide the essential log information that would allow security alerts to be generated.

A very important thing to keep in mind when deploying SIEM solutions is the involvement of all infrastructure and application teams. No matter how qualified the IT security people responsible for the SIEM deployment are, they do not master operating systems as well as Windows, Linux and UNIX administrators do, especially considering the different OS versions that can be in place. In my experience, three generations of Windows Server coexist in the majority of companies, without counting R2 versions. The same goes for databases and applications. A SIEM project will probably fail or underperform if the IT teams do not collaborate with the SIEM project team and instead stay isolated in their silos.

Another way of dealing with this issue would be to create a security team composed of experts in each domain. The first difficulty with that approach is bringing such talent together, which is very costly; the second is keeping them in the company and providing continuity, because such people are in high demand. This option seems applicable only in very large organizations such as multinationals, especially in the finance sector, where there really is something at stake: money, and also reputation.


There is of course a lot more to say about other aspects of SIEM and Log Management projects. But maybe the most important things are to set expectations correctly (benefits; aim: security, compliance, or both; scope; schedule; budget), be patient, provide continuous support and monitor the output closely. The technology in this market is evolving rapidly and still has much more to offer in the years to come.

Tuesday, May 19, 2015

SIEM Deployment - HP Arcsight Logger Installation

The HP ArcSight Logger product constitutes the log management part of HP's Security Event Management and Log Management product portfolio, ESM being the security event management part.

Before getting this deep into SEM and Log Management, the two meant the same thing to me, as most of the products available on the market tried to do both. Architecture-wise, ArcSight managed to distinguish its offerings for different needs and markets. That is a topic for another blog entry; however, if you are looking for a product which will let you store all your logs in a stable way and query specific patterns very quickly, then ArcSight Logger is the solution you are looking for.

As of mid-2015, the latest version of HP ArcSight Logger is 6.0 SP1, with no known security bugs. ArcSight Logger 6.0 SP1:
  • Distributes latest version of OpenSSL, 0.9.8zc, which addresses multiple vulnerabilities including CVE-2014-0224.
  • Resolves the Bourne-Again Shell (Bash) Code Injection Vulnerability, including CVE-2014-6271, CVE-2014-7169, CVE-2014-7186, CVE-2014-7187, CVE-2014-6277, and CVE-2014-6278.
  • Disables support for SSL v3.0 encryption, to address the Padding Oracle On Downgraded Legacy Encryption (POODLE) vulnerability (CVE-2014-3566).
Version 6.0 SP1 also doubles the supported local storage size: each Logger instance can now hold up to 8 TB of logs before logs are sent to archive.

In this article, I try to summarize how you can get your Logger up and running in a short amount of time. This is also the first time I include a video, which obviously makes the article more interesting.

The installation of ArcSight Logger is a two-step process: preparation and the installation itself.

For the preparation you should of course have your server equipped with the necessary resources (just like all other log management products, Logger is greedy with resources), the Logger software, the license, user accounts (root privileges are required for the installation) and the required ports.
PREPARATION
Server Requirements
OS

  • Red Hat Enterprise Linux (RHEL) versions 6.2 and 6.5 (64-bit),
  • CentOS versions 5.5 and 6.5 (64-bit)

Hardware

For the Trial Logger and VM Instances:
CPU: 1 or 2 x Intel Xeon Quad Core or equivalent
Memory: 4 - 12 GB (12 GB recommended)
Disk Space: 10 GB (minimum) in the Logger installation directory (/opt/...)
Temp directory: 1 GB

For the Enterprise Version of Software Logger:
CPU: 2 x Intel Xeon Quad Core or equivalent
Memory: 12 - 24 GB (24 GB recommended)
Disk Space: 65 GB (minimum) in the Software Logger installation directory. (/opt/...)
Root partition: 400 GB
Temp directory: 1 GB

For performance reasons, it is preferable to use dedicated hardware for Logger rather than virtual machines. For faster searches, archive connections should be over direct Fibre Channel rather than over NFS.

The Logger interface can be reached with recent versions of all well-known browsers.

Logger can be installed using root or non-root accounts, but the following points should be taken into consideration:

  • For root installs, allow access to port 443 as well as the ports for any protocol that the Logger receivers need, such as port 514 for the UDP receiver and port 515 for the TCP receiver.
  • For non-root installs, allow access to port 9000 as well as the ports for any protocol that the Logger receivers need, such as port 8514 for the UDP receiver and port 8515 for the TCP receiver.

INSTALLATION
You can follow instructions given below. The video also follows the same steps.


1. Install the Linux server (Minimal Server with GUI for trial installations). Do not use "Easy Install" when using VMware; set the partitions manually.

2. Adjust partitions as below as a minimum:

/ 10240 MB
/home 10240 MB
swap 4096  MB (Typically half of your RAM but do not exaggerate)
/opt 70000 MB (Give Minimum 65 GB, more is better)
/tmp 2048  MB

3. Create arcsight user

groupadd arcsight
useradd -c "arcsight_software_owner" -g arcsight -d /home/arcsight -m -s /bin/bash arcsight

4. Copy sources and license to /home/arcsight

5. Set hostname in /etc/hosts

#vim /etc/hosts

192.168.X.Y logger.mycompany.com

6. Make sure system time is correct

7. Create /opt/arcsight and give its ownership to the arcsight user
mkdir -p /opt/arcsight
chown arcsight:arcsight /opt/arcsight

8. Disable selinux and iptables for performance (Use Network Firewall instead !!)

Selinux can be an important performance drawback!

#chkconfig iptables off
#chkconfig ip6tables off
# vim /etc/sysconfig/selinux

SELINUX=disabled

9. Change release file if not using recommended versions
vim /etc/redhat-release
CentOS release 6.5 (Final)

10. Make OS changes specific to Logger
chmod +x /sbin
chmod +x /sbin/ifconfig
chmod +x /sbin/lspci
chmod +x /usr/sbin

#vim /etc/security/limits.d/90-nproc.conf
* soft nproc 10240
* hard nproc 10240
* soft nofile 65536
* hard nofile 65536

#reboot

11. Install Logger
#cd /home/arcsight

#./ArcSight-logger-6.0.0.7307.1.bin
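
A quick sanity check once the installer finishes; the ports are the defaults mentioned in the preparation section (443 for a root install, 9000 for a non-root install) and the hostname is the example used above:

# Confirm the Logger web interface is listening
netstat -tlnp | grep -E ':(443|9000)\b'
# Then log in with a browser:
#   https://logger.mycompany.com/        (root install)
#   https://logger.mycompany.com:9000/   (non-root install)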

Saturday, March 28, 2015

SIEM Deployment - Collecting Logs from Linux Servers

Most companies run their business-critical systems on Linux servers, which are famous for their stability, performance and security, among other qualities. Collecting logs from Linux servers is therefore an important step in any log management project.

The log management problem was thought through and taken care of on Linux and UNIX servers long before it was finally taken seriously by Microsoft; the configuration is therefore more straightforward and, in my experience, works more reliably.

There is, however, a list of items to follow carefully in order to be sure that everything works fine. Compared to the steps on Windows the list may be harder to keep in mind, so I write it down below.

  1. Check auditd daemon configuration to see if auditing service works fine (/etc/audit/auditd.conf)
  2. Check audit event dispatcher configuration (/etc/audisp/audispd.conf)
  3. Configure audit event dispatcher syslog configuration (/etc/audisp/plugins.d/syslog.conf)
  4.  Create audit rules by editing /etc/audit/audit.rules file (More detail below)
  5. Configure the syslog daemon to redirect log messages to a collector server.
  6. Restart the daemons to activate the configurations.

I used the Red Hat family of Linux systems (RHEL, CentOS, Fedora, etc.) for the configuration examples in this article; the steps and commands apply to almost all Linux distributions with small changes.

There is not much to say about the first and second steps; they are routine checks to see that the daemons are enabled, and fine tuning may be done if necessary.
At the third step, in the syslog.conf file of the audit dispatcher we configure the args parameter to specify how audit events are handed to syslog (priority and facilities). To be coherent with the configuration below, I set it as follows:

args = LOG_INFO LOG_LOCAL5 LOG_LOCAL4 LOG_LOCAL3

Then comes a very important step: configuring the audit.rules file, which is effectively your audit policy for the server. If you have come this far, your company most probably already has one and your audit.rules file should not be empty. But in case you became interested in Linux servers just for the sake of log management (like me), I would suggest that you first read and edit, and then copy, the /usr/share/doc/audit-x.y.z/stig.rules document as your audit.rules file. The stig.rules file is a really well prepared document to guide you in writing your own rules, and honestly it is very good for a starter. In my case the configuration applied was as below:

[root@localhost etc]# vi /usr/share/doc/audit-2.3.7/stig.rules
[root@localhost etc]# cp /usr/share/doc/audit-2.3.7/stig.rules /etc/audit/audit.rules
cp: overwrite `/etc/audit/audit.rules'? y

I can also suggest adding the lines below to your audit.rules file as a best practice (for 64-bit systems in this example).

-a always,exit -F arch=b64 -S sethostname -S setdomainname -k HOSTNAME_CHANGED
-a always,exit -F arch=b64 -S kill -F a1=9 -k KILL9
-a always,exit -F arch=b64 -F subj_type!=ntpd_t -S settimeofday -k SYSTEM_TIME_CHANGED
-a always,exit -F arch=b64 -F subj_type!=ntpd_t -S adjtimex -k SYSTEM_TIME_CHANGED
-a always,exit -F arch=b64 -F subj_type!=ntpd_t -S clock_settime -k SYSTEM_TIME_CHANGED
-w /etc/localtime -p wa -k SYSTEM_TIME_CHANGED
-a always,exit -F arch=b64 -S mount -k DEVICE_MOUNTED
-a always,exit -F dir=/boot -F perm=wa -k SYSTEM_FILE_CHANGED
-a always,exit -F dir=/root -F perm=wa -k SYSTEM_FILE_CHANGED
-a always,exit -F dir=/etc -F perm=wa -k SYSTEM_FILE_CHANGED
-a always,exit -F dir=/bin -F perm=wa -k SYSTEM_FILE_CHANGED
-a always,exit -F dir=/sbin -F perm=wa -k SYSTEM_FILE_CHANGED
-a always,exit -F dir=/lib -F perm=wa -k SYSTEM_FILE_CHANGED
-a always,exit -F dir=/lib64 -F perm=wa -k SYSTEM_FILE_CHANGED
-a always,exit -F dir=/usr -F perm=wa -k SYSTEM_FILE_CHANGED
-a always,exit -F dir=/net -F perm=wa -k SYSTEM_FILE_CHANGED
-a always,exit -F dir=/sys -F perm=wa -k SYSTEM_FILE_CHANGED
-a always,exit -F dir=/cgroup -F perm=wa -k SYSTEM_FILE_CHANGED
-a always,exit -F dir=/selinux -F perm=wa -k SYSTEM_FILE_CHANGED
-a always,exit -F dir=/var/adm -F perm=wa -k SYSTEM_FILE_CHANGED
-a always,exit -F dir=/var/lib -F perm=wa -k SYSTEM_FILE_CHANGED
-a always,exit -F dir=/var/spool/cron -F perm=wa -k SYSTEM_FILE_CHANGED
-a always,exit -F dir=/var/spool/at -F perm=wa -k SYSTEM_FILE_CHANGED
-a always,exit -F dir=/var/spool/anacron -F perm=wa -k SYSTEM_FILE_CHANGED
-a always,exit -F path=/var/log/messages -F perm=wa -F subj_type!=syslogd_t -F subj_type!=logrotate_t -k LOG_ALTERED
-a always,exit -F path=/var/log/dmesg -F perm=wa -F subj_type!=syslogd_t -F subj_type!=logrotate_t -k LOG_ALTERED
-a always,exit -F path=/var/log/secure -F perm=wa -F subj_type!=syslogd_t -F subj_type!=logrotate_t -k LOG_ALTERED

In the fifth step, we configure the syslog daemon. On Linux systems, the rsyslog service is responsible for reading events and writing them to specific log files. To decide where messages are written and which ones are forwarded, the /etc/rsyslog.conf file should be edited with a text editor.
More specifically, the RULES section of rsyslog.conf should be edited as below:

#### RULES ####
# Log all kernel messages to the console.
# Logging much else clutters up the screen.
#kern.*                                                                                          /dev/console
# Log anything (except mail) of level info or higher.
# Don't log private authentication messages!
*.info;mail.none;authpriv.none;cron.none                           /var/log/messages
# The authpriv file has restricted access.
authpriv.*                                                                                    /var/log/secure
# Log all the mail messages in one place.
mail.*                                                                                          /var/log/maillog
# Log cron stuff
cron.*                                                                                         /var/log/cron
# Everybody gets emergency messages
*.emerg                                                     
# Save news errors of level crit and higher in a special file.
uucp,news.crit                                                                        /var/log/spooler
# Save boot messages also to boot.log
local7.*                                                                                   /var/log/boot.log
#Log Management
syslog.info;auth.info;daemon.info;authpriv.info;cron.info;kern.info    @@CollectorServerIP
# System log information
local5.info;local4.info;local3.info                                    @@CollectorServerIP

In the above configuration, a single @ sign means logs are sent over UDP port 514 and @@ means TCP port 514. In order not to lose any logs, I configured it over TCP. A colleague recently told me that this may add a significant load on systems where log volumes are high, but I still believe the TCP method should be given a chance before switching to UDP, if that becomes inevitable.

If you want to go further, you may also choose to send your logs encrypted, but that configuration is not part of this article.

As a final step, we restart the auditd and rsyslog services. At this point we should be able to see, on the collector server, the log messages arriving at the syslog receiver software installed there.
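
A minimal sketch of that final step on a RHEL/CentOS 6 system; the test message and the local5 facility simply match the configuration shown above:

# Restart the daemons so the new configuration is active
service auditd restart
service rsyslog restart
# Generate a test message on one of the forwarded facilities and watch for it on the collector
logger -p local5.info "rsyslog forwarding test from $(hostname)"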