
Thursday, March 15, 2018

Reassessment Of SIEM Solutions On The Market


It has been a long time since I last wrote about SIEM solutions on this blog! How long exactly? Well, since the last post I have changed companies twice, held four different roles and relocated to another country; that kind of long.

SIEM and Log Management remain topics very dear to me, although my focus within IT and Information Security has widened to include Information Risk Management as well.

In this post, I am going to summarize the SIEM landscape of the last 3 years, with the main actors involved.

HP ArcSight became Micro Focus ArcSight with HP's spin-off of its software business. The decision appears to have been made long before it was announced. The ambiguity in how the merger and divestiture were handled between HP and Micro Focus pushed customers to look for alternatives. The acquisition of HP's software division, including the ArcSight product family, by Micro Focus was announced on 7 September 2016 and officially completed on 1 September 2017. But people close to the subject know that long before the announcement, product development activities had severely slowed down, if not completely stopped.

HP ArcSight already had difficulties addressing long-term storage of data on the platform itself, in addition to lacking advanced features offered by competitors. Running even simple queries on the old-fashioned Logger took ages, even when attempted on the command line with scripts.

These problems have since been addressed with the ArcSight Data Platform (I will provide a dedicated post on that later), and some advanced features such as User and Entity Behavior Analytics (UEBA) are now offered to customers, but I think the damage is done. ArcSight has lost an important part of its customer base, apart from the big accounts that have invested too much in the solution to leave it.

ArcSight's arch-rival at the time was IBM QRadar. QRadar was somewhat less customizable than ArcSight but was a strong competitor with regard to its integrations, Network Packet Flow Analysis (QFlow) being the most important. Beyond that, the platform was, and still is, capable of indexing all log fields, which compared to ArcSight's limited indexing capability can be considered a huge advantage.

Moreover, architecture-wise, QRadar supported scaling out (increasing performance and capacity by adding new devices), allowing much better online retention of logs without sending them to external storage, while until recently ArcSight's Logger only supported scaling up (increasing performance and capacity by adding system resources).

IBM QRadar's relative simplicity also helped the solution's overall stability; on ArcSight's side, this required a separate management appliance (ArcSight Management Center) and sometimes third-party appliances for connector health management.

Another advantage of QRadar is its integrated additional capabilities, such as the User Behavior Analysis module, which is not as capable as a full-blown UEBA solution (from Exabeam, for example) but still does the essentials for enriching the bare log data. Speaking of data enrichment, the ability to consume Vulnerability Management data should not be forgotten either.

IBM QRadar seems, in my humble opinion, the best option for large environments and for on-premises use.

McAfee ESM used to be the third major actor behind ArcSight and QRadar. McAfee had the advantage of being a simpler solution to configure and license, within a vendor-controlled package. Not much has changed since, other than a big-data approach in McAfee ESM's architecture and an HTML5 front-end. McAfee's SIEM was, and remains, one of the least appealing SIEM solutions for me, as it never got the attention it deserved from the organization, always lagging behind McAfee's flagship products.

It can be advised for small-to-medium-size organizations with strong relationships with McAfee.

Saturday, July 25, 2015

SIEM Deployment – Configuring Peering Between ArcSight Loggers


When deploying your SIEM infrastructure with HP ArcSight products, you may consider installing more than one Logger system for several reasons.

Without going too much into detail, let's name the two major ones: first, reaching the capacity limits of your system (RAM, CPU or the 15000 EPS level indicated in HP ArcSight documents); second, providing redundancy, for example installing an ArcSight Logger appliance in each datacenter to avoid consuming too much bandwidth sending logs across sites.

Whatever the reason for using several ArcSight Loggers, the problem of searching across several databases appears.

The solution to this problem is establishing peering between your Logger appliances. Once peering is established, the search you run is executed on all peer Loggers and the results are shown on the Logger where you initiated the search.

Below you can find the details on peer configuration between two Loggers.

For peering, two or more Loggers should first authenticate each other. Two authentication methods exist:

  • Authentication with a Logger user's credentials
  • Authentication with a Peer Authorization ID and Code

In this article, we will follow the second method to prevent any problems that may be caused by the user credentials in the first method.

Let's assume we will initiate the peering on Logger1. To do so, we should first log in to Logger2 and generate the Authorization ID and Code for it.





Once the first step is done, the generated values must be entered on Logger1. After the configuration is successfully saved, Logger peering is done and logs can be queried through either of the Loggers.

UPDATE 29/07/2015: There is something odd about the peering configuration for Loggers. The "Add Peer Logger" option must be configured on both Loggers; it is not enough to see one line for the peer Logger under the Peer Loggers menu. The Authorization ID and Code generated on Logger2 for Logger1 must be entered on Logger1, and vice versa. At the end of a successful configuration, you should see two identical lines, one for each Logger in the peering relation, under the Peer Loggers menu.




Wednesday, July 22, 2015

SIEM Planning - Reference Architecture for Midsize Deployments

After going through several websites and documents, I sadly discovered, as many of you have before, that HP has not yet published any reference architecture or certified design documents for different needs.

I decided to write a series of blog articles creating reference architectures for SIEM deployments, primarily for HP ArcSight; but since solution components are more or less similar across vendors, I believe they will be applicable to all SIEM environments.

Gartner defines a small deployment as one with around 300 log sources and 1500 EPS. A midsize deployment is considered to have up to 1000 log sources and 7000 EPS. Finally, a large deployment generally covers more than 1000 log sources with approximately 15000 EPS. There can of course be deployments with over 15000 EPS, but architecture-wise they can be considered very "large" deployments.
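As a rough illustration of these tiers, a small helper like the one below can classify a planned deployment. This is only a sketch; the thresholds are the Gartner-style figures quoted above, and the function name is mine.

```python
def deployment_size(log_sources: int, eps: int) -> str:
    """Classify a SIEM deployment using the rough tiers quoted above:
    ~300 sources / 1500 EPS = small, up to 1000 sources / 7000 EPS = midsize,
    beyond that = large, and over 15000 EPS = very large."""
    if eps > 15000:
        return "very large"
    if log_sources > 1000 or eps > 7000:
        return "large"
    if log_sources > 300 or eps > 1500:
        return "midsize"
    return "small"

print(deployment_size(800, 5000))  # → midsize
```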

In this article, I will give the details of a midsize deployment, covering components both for a primary datacenter and a disaster recovery center, working in an active-passive setup.

The reference architecture for midsize deployment is for a scenario where the company needs both a long term log storage solution (ArcSight Logger) and Security Event Management and SOC capabilities (ArcSight ESM).

The scheme below shows how different components of the architecture are set up.

  • In this setup, software SmartConnectors are used to collect the logs. Up to 8 software connectors can be configured on a server, and 1 GB of memory should be allocated for each connector instance, in addition to what the server needs for its own operation.
  • If appliances are not used, pay attention to using built-for-purpose hardware servers where resources are not shared: like other big data solutions, these systems are greedy in terms of resources (CPU, memory, IOPS) and do not perform well in virtual environments.
  • Sources send logs to one SmartConnector only. SmartConnector-level redundancy is possible only for Syslog connectors, and only when the connectors are put behind a load balancer. This also provides load sharing and scalability and is a best practice. DB and File connectors do not have such options, as they pull the logs from the sources.
  • When a DB or File connector is down, no log is lost until the connector comes back, as the logs continue to be written to local resources at the source.
  • For log storage and searching, the SmartConnectors in each datacenter send their logs to the Logger appliance hosted in the same location, providing important bandwidth savings. Each Logger appliance backs up the other using the failover destinations option configured on the SmartConnectors. Thanks to the peering configuration between the Loggers, logs can be queried through any of the Logger appliances without having to connect to each device.
  • The DC ESM is the primary ESM for both datacenters. The DRC ESM is only used in case of disaster recovery.
  • Logs and Alerts are archived daily both on ESM and Logger.
  • In DR case, there is no RPO. Configurations for ESM and Logger are planned to be synchronized manually. ESM and Logger are expected to be operational instantly.
  • Configuration backups for SmartConnectors and Loggers are collected using Arcsight Management Center (Arc MC).
  • SmartConnector statistics and status can easily be followed using Arc MC as well. Performing SmartConnector updates through the Arc MC GUI is also recommended.
  • SmartConnector-level configuration options (aggregation, filtering out, batching, etc.) are easier to configure using Arc MC.
  • Finally, it is strongly recommended to use a test ESM system to try out all filters, rules, active lists and other configuration objects before applying them on production systems, as a misconfiguration in these settings may crash your ESM and make you lose very valuable data.

Tuesday, July 21, 2015

SIEM Deployment - ArcSight Logger 6.0 P2 is out

Logger 6.0 P2 is now available for download from HP Software support download page. Note that it is referred to as Logger 6.02 on the download site.

Logger 6.0 P2 includes:
  • Important security updates (honestly, I could not find what those updates are in the release notes, even though I went through the document multiple times)
  • A fix to peer search (LOG-13574).
  • Modifications to SOAP APIs:
    • SOAP API login events are now recorded in the Audit logs
    • The SOAP login API now uses the authentication method configured in Logger, which can be an external authentication method such as RADIUS. Clients using the SOAP login API must now pass the login credentials for the authentication method configured in Logger (e.g. RADIUS credentials) instead of the credentials of a local Logger user.

The full release notes can be found on the HP Protect website. You may also find the Logger support matrix useful.

Some important notes:
  • The data migration tool was updated.
  • Migration from older Logger versions should first be done to Logger 6.0 P1, followed by an upgrade to Logger 6.0 P2.
To summarize, the migration path for most systems is 5.0 GA (L5139) > 5.0 Patch 2 (L5355) > 5.1 GA (L5887) > 5.2 Patch 1 (L6307) > 5.3 GA (L6684) > 5.3 SP1 (L6838) > 5.5 (L7049) > 5.5 Patch 1 (L7067) > 6.0 (L7285) > 6.0 Patch 1 (L7307) > 6.0 Patch 2 (L7334).
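The stepwise nature of this path can be captured in a few lines. A hypothetical helper (the version strings are taken from the path above; the function name is mine) lists the upgrades still ahead of a given installation:

```python
# Upgrade path quoted above, in order.
MIGRATION_PATH = [
    "5.0 GA", "5.0 Patch 2", "5.1 GA", "5.2 Patch 1", "5.3 GA",
    "5.3 SP1", "5.5", "5.5 Patch 1", "6.0", "6.0 Patch 1", "6.0 Patch 2",
]

def remaining_steps(current: str) -> list:
    """Return the upgrades still needed from `current` to the latest patch."""
    idx = MIGRATION_PATH.index(current)  # raises ValueError for unknown versions
    return MIGRATION_PATH[idx + 1:]

print(remaining_steps("6.0"))  # → ['6.0 Patch 1', '6.0 Patch 2']
```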

The Logger trial version is not updated and remains at Logger 6.0 P1.

Regarding supported OS versions, there are also no changes: RHEL 6.5 and 6.2 as well as CentOS 5.5 and 6.5 are supported.

Saturday, July 11, 2015

SIEM Deployment - Configuring Failover Destinations on HP ArcSight SmartConnectors

So far, SIEM solutions seem to focus too much on the security features they offer, and do not provide solid redundancy and disaster recovery options.

From an architectural perspective, there should be a redundancy option at every layer of the solution architecture. SmartConnectors, being the first layer of SIEM interaction with source systems, provide a nice redundancy option with the "Failover Destination" configuration setting, available both for HP ArcSight Logger and HP ArcSight ESM.

For each log processing system, the SmartConnector provides a primary destination and a failover destination. As soon as the SmartConnector process discovers that logs are not successfully received by the primary destination, they are redirected to the failover destination. Preemption also exists, meaning that from the moment the primary destination comes back online, logs are redirected back to it.

More detail about the configuration can be seen in the video below.


SIEM Deployment - Installing HP Arcsight SmartConnector on Windows Servers

SIEM system administrators mostly come from the Linux world and prefer using Linux for HP ArcSight component installations. I also agree with that choice, as the performance and security Linux provides are superior compared to Windows Servers.

However, there are some situations where you have to use Windows Servers for SmartConnector installations, such as when you want to use WinRM for Windows log collection. Your company may also lack experienced Linux admins, which could be a second reason to use Windows-based SmartConnectors. Last but not least, as you can see from the installation video, it is much easier to install Windows-based SmartConnectors than Linux-based ones.


Below, you can find details about the basic installation of an HP ArcSight SmartConnector on a Windows Server for collecting log messages.

REQUIREMENTS / PREREQUISITES
  1. A Windows 2008 Server installed.
  2. A user with sufficient rights to install the software.
  3. A user added to the "Event Log Readers" group to read the logs on the server. (OPTIONAL)
  4. Connector binaries downloaded. (Download the correct version for your OS, x86 or x64!)
  5. Connector destinations (ArcSight Logger and/or ArcSight ESM) installed and working.
  6. Create a receiver on the logger to connect the connector.
  7. Create a subscription on Event Viewer to get logs.
  8. Check the configuration of log receiving folders and increase size.
  9. Define the protocol and port on which you will listen the incoming logs.
  10. Firewall permissions given for incoming log collection.
INSTALLATION
  1. Create installation directory preferably under your second partition E:\SmartConnectors\Microsoft.
  2. Run the setup file.
  3. Install the connector to run standalone or as a service.
  4. Start the connector service from services.msc.
  5. Check events on the logger.
  6. Set agent.properties parameters.(OPTIONAL)
  7. Set agent.wrapper.conf parameters.(OPTIONAL)

Saturday, July 4, 2015

SIEM Deployment - Securing HP ArcSight Web Interfaces

2014 and 2015 have been years full of discoveries of cryptographic and protocol vulnerabilities, starting with Heartbleed and followed by POODLE and several others. These vulnerabilities pushed many administrators to patch their webservers, disable vulnerable protocols (SSLv2, SSLv3 and even TLS 1.0) and drop cipher suites containing weak algorithms (RC4, SHA1, MD5 and others).

ArcSight systems, operated over web interfaces, are also subject to these vulnerabilities and possible attacks. The Apache web server hosting the Logger and Management Center interfaces should be configured to eliminate cryptographic algorithm and protocol threats.

First of all, to learn the current status of the webserver, we will use the sslscan tool with the parameters shown below.



From this output we can see that the SSLv2 and SSLv3 protocols are already disabled, but weak ciphers and key exchanges such as RC4, DES and anonymous Diffie-Hellman are still accepted.

In order to force the webserver to use secure algorithms and protocols, we will modify the Apache configuration file httpd.conf under the <LOGGER_INSTALLATION_DIRECTORY>/local/apache/conf directory, which in my own installation is /opt/arcsight/current/local/apache/conf/.

It is wise to take a backup of the httpd.conf file before making any changes.

# cp httpd.conf httpd.conf.backup

Then we should edit this file with a text editor such as nano or vi.


We should modify the line starting with SSLProtocol and SSLCipherSuite as follows and save the file.

SSLProtocol ALL -SSLv2 -SSLv3

SSLCipherSuite !RC4:!DH:!MD5:!aNULL:!eNULL:!MEDIUM:!LOW:HIGH

After this operation, for the changes to become active, we should restart the Logger services under the <LOGGER_INSTALLATION_DIRECTORY>/arcsight/logger/bin directory with the ./loggerd restart command.

When we recheck with sslscan, we see that vulnerable options are no longer supported.




Wednesday, July 1, 2015

SIEM Deployment - Installing HP Arcsight Software Connectors on Linux

In the HP ArcSight solution architecture, one of the components adding the most value is the SmartConnectors. With the several functions they provide, SmartConnectors really help differentiate HP ArcSight's SIEM solution from others.

So what exactly are ArcSight SmartConnectors? In a 3-layered SIEM architecture, ArcSight SmartConnectors constitute the second layer, between the log processing systems (ArcSight Logger or ArcSight ESM) and the source systems generating logs according to defined audit policies.

From a technical SIEM perspective, ArcSight SmartConnectors are Java applications that receive or fetch logs from one defined log source, which can be several devices sending their logs in syslog format over the same protocol and port (e.g. UDP 514), or an application writing its logs to a flat file. ArcSight SmartConnectors come with a 256 MB minimum memory size, adjustable up to 1024 MB by configuring the agent.properties file, among other connector properties that can be changed according to your specific needs.
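As an illustration of such a tweak: when a connector is installed as a service, the heap size typically lives in the Java Service Wrapper configuration file rather than agent.properties. The property names below come from the standard Java Service Wrapper; treat the path and values as a sketch and verify them against your own installation before relying on them.

```
# <CONNECTOR_HOME>/current/user/agent/agent.wrapper.conf (service mode)
# Initial and maximum Java heap size, in MB
wrapper.java.initmemory=256
wrapper.java.maxmemory=1024
```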

One physical server can host up to 8 connector processes, meaning that you can collect logs from 8 different source groups, as long as your server supports that much capacity.


Below, you can find details about the basic installation of an HP ArcSight Smart Connector on a CentOS Linux server for collecting syslog messages.

REQUIREMENTS / PREREQUISITES

1. A RHEL or CentOS Linux 6.X Server installed.
2. Root or sudo rights for connector user.
3. Connector binaries downloaded. (Download the correct version for your OS, x86 or x64 !!)
4. Connector destinations (Logger and/or ESM) installed and working.
5. Define the protocol and port on which you will listen the incoming logs.
   Choose port numbers over 1024 if you are installing with a non-root user as non-root users are not allowed to listen ports below 1024.

INSTALLATION

1. Create installation directory under /opt path. In this example it is /opt/arcsight/connectors .
2. Create a receiver on the logger to connect the connector.
3. Run the connector binary you previously downloaded. (From /home/arcsight directory in my installation).
# ./ArcSight-7.1.1.7348.0-Connector-Linux64.bin -i console
4. Install the connector to run standalone or as a service.
INSTALLATION_PATH/current/bin/arcsight connectors → run standalone
INSTALLATION_PATH/current/bin/arcsight agentsvc -i -u arcsight -sn syslog_unix → run as a service under the arcsight user, with the arc_syslog_unix service name
5. Check events on the logger.
6. Set agent.properties parameters (Optional)
7. Set agent.wrapper.conf parameters (Optional)

Saturday, June 27, 2015

SIEM Planning - Capacity Planning and Sizing


SIEM projects are well known to be demanding and greedy when it comes to resources, and your CIO/CISO will want to hear about your direct (software licensing, server investment, etc.) and indirect (archiving storage, support) costs for at least the following 3 years, in addition to the benefits the project will provide.

In this article, I will of course give the basic formulas for sizing. More importantly, though, I will share real-life values from a live operation and compare them to the benchmarks.

The first unit, which constitutes the base for all our calculations, is the Events per Second (EPS) value that each source system generates. The EPS value greatly depends on 2 factors: the audit policy or rules applied on the source system, and the business function of the system. A server with "Object Access" audit rules enabled and a Web Server role configured will of course not generate the same number of logs as a standard server. Windows servers also tend to generate many more logs than Linux and UNIX servers, all with standard default configurations.

EPS estimations can be reached using links 1, 2 and 3.

Having calculated the EPS for each source asset group, the next step is calculating the Events per Day (EPD) value.

EPD = ∑ EPS * 86400

Once the EPD value is calculated, we have to decide on an average log message size to know how much storage we will need each day. Log sources generate logs ranging from around 200 bytes for network and infrastructure devices to 10 kilobytes or more on the application and database side. The syslog standard (RFC 5424) expects receivers to handle message contents of up to 2 kilobytes. In light of this information, it is wise and advisable to assume a raw log message size of 500 bytes.

Average raw log message size being set to 500 bytes, the amount of Daily log messages in GB is calculated as follows:

Daily Raw Log Size (GB) = EPD * 500 / 1024³

Log management appliances make some changes to the log messages to render them understandable and meaningful. This operation is called "normalization" and it increases the log size by an amount that depends on the solution you use. In my personal experience with HP ArcSight, normalization increased the log size by about 90% to 100%. Some other people have seen increases of up to 200%. As a result, we obtain the formula below for the daily normalized log size:

Daily Normalized Log Size = Daily Raw Log Size * 2

The calculated value still does not represent the daily storage need for the log management system. Many vendors have come up with proprietary compression solutions and claim they compress logs 10 times (10:1), which is quite optimistic. It is, however, safe to assume a ratio of 8:1 for calculations. So the formula becomes:

Daily Storage Requirement = Daily Normalized Log Size / 8

The annual storage need would basically be 365 times the Daily Storage Requirement, if you want your calculations to be on the safe side. Nevertheless, EPS numbers fall considerably during weekends and vacations. Watch how much your average EPS decreases in such periods and do your own calculations for your annual needs.

Annual Storage Requirement = Daily Storage Requirement * 365

The last important point when planning future storage investments is the retention period. 2 factors are decisive in defining the retention period: compliance requirements and security requirements.

Compliance requirements only concern log management systems, in HP's case the Logger, and there is not much to decide really: whatever the legislation obliges, you have to configure.

For security needs, which are addressed by HP's ESM system, the decision is yours. I have seen many decision makers trying to stay on the very safe side and choosing unnecessarily long retention periods. According to Mandiant, the median number of days attackers were present on a victim network before being discovered was 205 days in 2014, down from 229 days in 2013 and 243 days in 2012. This brings me to the conclusion that the retention period for security alert creation, monitoring, trending and forensics should be at least 1 year and not longer than 3 years. According to the same Mandiant study, "The longest time an attacker was present before being detected in 2013 was six years and three months." Last but not least, the retention period of course depends on the sector of activity, defense being the longest and strictest, followed by financial institutions.

A rough estimation about Storage IOPS values can be calculated with the following formulas:

Storage IOPS Needed (Direct Attached Storage) = EPS * 1.2


Storage IOPS Needed (SAN) = EPS * 2.5
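Putting these formulas together, the whole chain from EPS to annual storage and IOPS can be sketched in a few lines. The 500-byte message size, 2x normalization factor, 8:1 compression and the IOPS multipliers are the working assumptions discussed above; the function itself is mine.

```python
def sizing(eps: int, san: bool = False) -> dict:
    """Capacity estimates from a total EPS figure, using the assumptions
    above: 500-byte raw events, ~2x normalization, 8:1 compression,
    and an IOPS factor of 1.2 (direct attached storage) or 2.5 (SAN)."""
    epd = eps * 86400                        # Events per Day
    daily_raw_gb = epd * 500 / 1024**3       # raw log volume, GB/day
    daily_norm_gb = daily_raw_gb * 2         # after normalization
    daily_storage_gb = daily_norm_gb / 8     # after 8:1 compression
    return {
        "EPD": epd,
        "daily_storage_GB": round(daily_storage_gb, 1),
        "annual_storage_GB": round(daily_storage_gb * 365, 1),
        "IOPS": eps * (2.5 if san else 1.2),
    }

print(sizing(7000))       # midsize deployment, direct attached storage
print(sizing(7000, san=True))
```

For a 7000 EPS midsize deployment this yields roughly 70 GB of compressed storage per day, i.e. around 25 TB per year before any weekend/vacation correction.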

Wednesday, June 10, 2015

SIEM Deployment - Windows Local Security Policy (Audit Policy) Configuration

In my previous blog article on collecting logs from Windows Servers, I missed a control point which can be very important for getting results from your SEM installation.

Let me make it clearer. While trying to write a correlation rule for an event in which the Windows Server log configuration is changed, we first found the event ID for that action. After finding the event ID, we noticed that although the main audit categories were configured correctly to log both successful and failed attempts, the source system did not actually generate any logs. You will find the reason why below.

The way auditing rules are defined changed drastically from Windows Server 2003 to Windows Server 2008 and later. In Windows Server 2008, you have the 9 main audit policy categories, each with success and failure options, as the scheme below shows.


In addition to the main categories, at the bottom of the same screen, another setting called "Advanced Audit Policy Configuration" exists. Under that option, the 9 main categories are detailed in 53 subcategories, which provides advanced granularity (though not comparable with Linux, of course).


The important thing is to know which policy has precedence over the other: the general policy or the more detailed one? By default, the main categories have precedence over the subcategories, even if you configure both. How this default behavior is changed, and how it should be configured to avoid complications, can be found here. The operation is different for domain computers and non-domain computers.


The best and safest approach is to configure the audit policy using the detailed options and to force their precedence over the general policy.

Once it is decided that subcategories are going to be used for auditing, it is important to know which subcategories should be chosen, and with which actions, in order to get the right amount of events. Audit subcategories concerning object access may generate too many logs to store and process, and can produce significant volumes when you have a considerable number of servers to monitor. Once again, the top-down approach to designing SEM systems shows the correct way: only generate the logs that you are going to need for correlation rules or compliance.

After a careful analysis of events and documentation, I ended up creating the policy below for my installations. I believe this policy covers most needs, but if you consider using it, you'd better spend some time adjusting it to your own requirements.

Finally, the policy differs slightly between Active Directory Domain Controllers and other servers, as the DS Access category only concerns Domain Controllers. For a non-DC server, all the subcategories under the DS Access category should be set to "No Auditing".

SYSTEM AUDIT POLICY SETTINGS
Category/Subcategory                       Suggested Settings
System
  Security System Extension                Success and Failure
  System Integrity                         Success and Failure
  IPsec Driver                             No Auditing
  Other System Events                      Failure
  Security State Change                    Success and Failure
Logon/Logoff
  Logon                                    Success and Failure
  Logoff                                   Success and Failure
  Account Lockout                          Success and Failure
  IPsec Main Mode                          No Auditing
  IPsec Quick Mode                         No Auditing
  IPsec Extended Mode                      No Auditing
  Special Logon                            Success and Failure
  Other Logon/Logoff Events                Success and Failure
  Network Policy Server                    Success and Failure
Object Access
  File System                              Success and Failure
  Registry                                 Success and Failure
  Kernel Object                            Success and Failure
  SAM                                      No Auditing
  Certification Services                   Success and Failure
  Application Generated                    Success and Failure
  Handle Manipulation                      No Auditing
  File Share                               Success and Failure
  Filtering Platform Packet Drop           No Auditing
  Filtering Platform Connection            No Auditing
  Other Object Access Events               No Auditing
  Detailed File Share                      No Auditing
Privilege Use
  Sensitive Privilege Use                  No Auditing
  Non Sensitive Privilege Use              No Auditing
  Other Privilege Use Events               No Auditing
Detailed Tracking
  Process Termination                      Success and Failure
  DPAPI Activity                           No Auditing
  RPC Events                               Success and Failure
  Process Creation                         Success and Failure
Policy Change
  Audit Policy Change                      Success and Failure
  Authentication Policy Change             Success and Failure
  Authorization Policy Change              Success and Failure
  MPSSVC Rule-Level Policy Change          No Auditing
  Filtering Platform Policy Change         No Auditing
  Other Policy Change Events               Failure
Account Management
  User Account Management                  Success and Failure
  Computer Account Management              Success and Failure
  Security Group Management                Success and Failure
  Distribution Group Management            Success and Failure
  Application Group Management             Success and Failure
  Other Account Management Events          Success and Failure
DS Access
  Directory Service Changes                Success and Failure
  Directory Service Replication            No Auditing
  Detailed Directory Service Replication   No Auditing
  Directory Service Access                 Success and Failure
Account Logon
  Kerberos Service Ticket Operations       Success and Failure
  Other Account Logon Events               Success and Failure
  Kerberos Authentication Service          Success and Failure
  Credential Validation                    Success and Failure
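On a server where subcategory-level auditing is enforced, individual rows of the table above can be applied with the built-in auditpol.exe tool. A sketch, run from an elevated prompt; the subcategory names below are the English ones and must be adjusted on localized systems, and the full policy would repeat this for every row:

```
:: Apply a few of the suggested settings from the table above
auditpol /set /subcategory:"Security State Change" /success:enable /failure:enable
auditpol /set /subcategory:"Logon" /success:enable /failure:enable
auditpol /set /subcategory:"IPsec Driver" /success:disable /failure:disable

:: Verify the resulting effective policy for a whole category
auditpol /get /category:"Logon/Logoff"
```

Scripting the table this way also makes it easy to keep DC and non-DC variants of the policy consistent across a server fleet.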

Sunday, June 7, 2015

SIEM Deployment - Creating Logs on Linux Servers with audit.rules

As I mentioned in several other blog articles, your Security Event Management infrastructure is only as effective as your source auditing capabilities. If you are not generating the necessary logs, containing useful information in an understandable and meaningful structure, then no matter how correctly you deploy your log management product, you end up failing.

That is actually why I am spending so much time (and you should too) examining each and every source system and writing about them here in several articles. My first articles covered the mere basics you should follow to see your logs on the log management platform. In this new series of articles, I will cover which events, as a minimum, we should care about and how to log them. Let's get started!

In Linux systems, which events are logged is managed by the audit.rules file, most of the time located under the /etc/audit/ directory. In the scheme below you can see how the audit mechanism works in Linux.


The good point concerning the auditing features of Linux, compared to Windows Servers, is that you can audit pretty much everything and customize the logs according to your needs. You can write audit rules for any process or command you want, specifically audit the actions of a particular user, or audit only a specific action (write, read, execute, append). The key tags that you can add to your log messages may greatly simplify your job when writing correlation rules on your SEM engine.

How an audit rule is written is explained in the auditctl man page. Below I give a template which covers most general situations. You should make sure that you cover all your critical processes by adding audit rules for them.

An important thing to know about using this file is that you must adapt it to your own systems. One of the first things to do is set the arch= parameter according to whether you are running a 32-bit (b32) or a 64-bit (b64) system. Some files and commands also change between versions; e.g. the faillog file, which kept failed login attempts in Red Hat Enterprise Linux 5, no longer exists in later versions, so you should configure auditing for the pam_tally process and files instead. Also, please change the key tags in the rules (the text after the -k parameter) according to your needs.
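If a host runs both 32-bit and 64-bit binaries, syscall rules need to exist for both syscall tables. One way to avoid duplicating every line by hand is a small generator script (a sketch; the emit_rule helper is my own naming, not part of auditd):

```shell
#!/bin/sh
# Print the same syscall rule once per architecture, so both the
# 32-bit and 64-bit syscall tables are covered in audit.rules.
emit_rule() {
    for arch in b32 b64; do
        printf -- '-a exit,always -F arch=%s %s\n' "$arch" "$*"
    done
}

# Example: generate the b32 and b64 variants of one failure-capture rule
emit_rule '-S open -F dir=/etc -F success=0 -k CriticalElementFailures'
```

Redirect the output of such a script into your rules file rather than editing each pair of lines separately.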

Finally, there are two parameters that need to be used with caution. Do put the "-e 2" parameter at the end of the file: it makes the audit configuration immutable until reboot, preventing tampering without logging. The second parameter, "-f 2", is more likely to be used in military/defense environments: it causes the system to halt if auditing fails, so it should be used with caution.

# First rule - delete all
-D

# Increase the buffers to survive stress events.
# Make this bigger for busy systems
-b 8096

# Feel free to add below this line. See auditctl man page
#Capture all failures to access on critical elements
-a exit,always -F arch=b64 -S open -F dir=/etc -F success=0 -k CriticalElementFailures
-a exit,always -F arch=b64 -S open -F dir=/bin -F success=0 -k CriticalElementFailures
-a exit,always -F arch=b64 -S open -F dir=/sbin -F success=0 -k CriticalElementFailures
-a exit,always -F arch=b64 -S open -F dir=/usr/bin -F success=0 -k CriticalElementFailures
-a exit,always -F arch=b64 -S open -F dir=/usr/sbin -F success=0 -k CriticalElementFailures
-a exit,always -F arch=b64 -S open -F dir=/var -F success=0 -k CriticalElementFailures
-a exit,always -F arch=b64 -S open -F dir=/home -F success=0 -k CriticalElementFailures

#Capture all successful deletions on critical elements
-a exit,always -F arch=b64 -S unlinkat -F success=1 -F dir=/etc -k CriticalElementDeletions
-a exit,always -F arch=b64 -S unlinkat -F success=1 -F dir=/bin -k CriticalElementDeletions
-a exit,always -F arch=b64 -S unlinkat -F success=1 -F dir=/sbin -k CriticalElementDeletions
-a exit,always -F arch=b64 -S unlinkat -F success=1 -F dir=/usr/bin -k CriticalElementDeletions
-a exit,always -F arch=b64 -S unlinkat -F success=1 -F dir=/usr/sbin -k CriticalElementDeletions
-a exit,always -F arch=b64 -S unlinkat -F success=1 -F dir=/var -k CriticalElementDeletions

#Capture all successful modification of owner or permissions on critical elements
-a exit,always -F arch=b64 -S fchmodat -S fchownat -F dir=/etc -F success=1 -k CriticalElementModifications
-a exit,always -F arch=b64 -S fchmodat -S fchownat -F dir=/bin -F success=1 -k CriticalElementModifications
-a exit,always -F arch=b64 -S fchmodat -S fchownat -F dir=/sbin -F success=1 -k CriticalElementModifications
-a exit,always -F arch=b64 -S fchmodat -S fchownat -F dir=/usr/bin -F success=1 -k CriticalElementModifications
-a exit,always -F arch=b64 -S fchmodat -S fchownat -F dir=/usr/sbin -F success=1 -k CriticalElementModifications
-a exit,always -F arch=b64 -S fchmodat -S fchownat -F dir=/var -F success=1 -k CriticalElementModifications
-a exit,always -F arch=b64 -S fchmodat -S fchownat -F dir=/home -F success=1 -k CriticalElementModifications
#Capture all successful modifications of content
-a exit,always -F arch=b64 -S pwrite64 -S write -S writev -S pwritev -F dir=/etc -F success=1 -k CriticalElementModifications
-a exit,always -F arch=b64 -S pwrite64 -S write -S writev -S pwritev -F dir=/bin -F success=1 -k CriticalElementModifications
-a exit,always -F arch=b64 -S pwrite64 -S write -S writev -S pwritev -F dir=/sbin -F success=1 -k CriticalElementModifications
-a exit,always -F arch=b64 -S pwrite64 -S write -S writev -S pwritev -F dir=/usr/bin -F success=1 -k CriticalElementModifications
-a exit,always -F arch=b64 -S pwrite64 -S write -S writev -S pwritev -F dir=/usr/sbin -F success=1 -k CriticalElementModifications
-a exit,always -F arch=b64 -S pwrite64 -S write -S writev -S pwritev -F dir=/var -F success=1 -k CriticalElementModifications
-a exit,always -F arch=b64 -S pwrite64 -S write -S writev -S pwritev -F dir=/home -F success=1 -k CriticalElementModifications

#Capture all successful creations
-a exit,always -F arch=b64 -S creat -F dir=/etc -F success=1 -k CriticalElementCreations
-a exit,always -F arch=b64 -S creat -F dir=/bin -F success=1 -k CriticalElementCreations
-a exit,always -F arch=b64 -S creat -F dir=/sbin -F success=1 -k CriticalElementCreations
-a exit,always -F arch=b64 -S creat -F dir=/usr/bin -F success=1 -k CriticalElementCreations
-a exit,always -F arch=b64 -S creat -F dir=/usr/sbin -F success=1 -k CriticalElementCreations
-a exit,always -F arch=b64 -S creat -F dir=/var -F success=1 -k CriticalElementCreations

#Capture all successful reads (only for High-Impact Systems)
-a exit,always -F arch=b64 -S open -F dir=/etc -F success=1 -k CriticalElementReads
-a exit,always -F arch=b64 -S open -F dir=/bin -F success=1 -k CriticalElementReads
-a exit,always -F arch=b64 -S open -F dir=/sbin -F success=1 -k CriticalElementReads
-a exit,always -F arch=b64 -S open -F dir=/usr/bin -F success=1 -k CriticalElementReads
-a exit,always -F arch=b64 -S open -F dir=/usr/sbin -F success=1 -k CriticalElementReads
-a exit,always -F arch=b64 -S open -F dir=/var -F success=1 -k CriticalElementReads

#Monitor for changes to shadow file (use of passwd command)
-w /usr/bin/passwd -p x -k PasswordChange
-w /etc/passwd -p ra -k PasswordChange
-w /etc/shadow -p ra -k PasswordChange

#Monitor for use of process ID change (switching accounts) applications
-w /bin/su -p x -k PrivilegeEscalation
-w /usr/bin/sudo -p x -k PrivilegeEscalation
-w /etc/sudoers -p rw -k PrivilegeEscalation

#Monitor for use of tools to change group identifiers
-w /usr/sbin/groupadd -p x -k GroupModification
-w /usr/sbin/groupmod -p x -k GroupModification
-w /usr/sbin/useradd -p x -k UserModification
-w /usr/sbin/usermod -p x -k UserModification

#Ensure audit log file deletions are logged.
-a exit,always -F arch=b64 -S unlink -S unlinkat -F dir=/var/log/audit -k AuditLogRemoval

# Monitor for use of audit management tools
-w /sbin/auditctl -p x -k AuditModification
-w /sbin/auditd -p x -k AuditModification

# Ensure critical apps are monitored.  List will vary by mission.
-a exit,always -F arch=b64 -F path=/sbin/init -k CriticalAppMonitoring
-a exit,always -F arch=b64 -F path=/usr/bin/Xorg -k CriticalAppMonitoring
-a exit,always -F arch=b64 -F path=/usr/sbin/sshd -k CriticalAppMonitoring
-a exit,always -F arch=b64 -F path=/sbin/rsyslogd -k CriticalAppMonitoring

#  Ensure attribute changes are audited
-a exit,always -F arch=b64 -S chmod -S chown -S fchmod -S fchown -S setuid -S setreuid -S getxattr -S setxattr -k AttributeChanges

# Make the audit configuration immutable until the next reboot (see above)
-e 2
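Once the file is in place, one sanity check I would suggest (a sketch; the check_keys helper and its output format are my own, and the default rules path may differ on your distribution) is to flag every watch or syscall rule that carries no -k key tag, since unkeyed events are much harder to pick up in correlation rules later:

```shell
#!/bin/sh
# check_keys: print every watch (-w) or syscall (-a) rule in the given
# audit rules file that has no -k key tag attached.
check_keys() {
    awk '/^-[wa] / && !/-k /{print "missing key: " $0}' "$1"
}
```

For example, `check_keys /etc/audit/audit.rules` would list any rules you still need to tag before relying on key-based searches in your SEM engine.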