SIEM systems can easily be considered “Big Data” systems, as the resources they use and the amount of information they process are on a Big Data scale. Running SIEM components requires serious computing resources: not only RAM, CPU and hard disk space, but also a high IOPS rate.
For those reasons, if you do not have the chance to set up a test environment at work or at home (my home server with an i7 CPU (8 threads), 16 GB of RAM and a 256 GB SSD honestly did not satisfy me), you should look for another solution. After considering my options for buying a better home server, I found out that cloud solutions such as Amazon’s EC2 offer a far better TCO and ROI compared to owning a home server.
In this article, I’ll guide you through setting up your own Red Hat Enterprise Linux 6.5 server, on which you can install ArcSight ESM, ArcSight Logger or ArcSight Management Center.
For those who are new to Amazon’s Elastic Compute Cloud (EC2) service, it is basically an Infrastructure as a Service (IaaS) offering where you can build a server with the resources you like and pay as you go. Nothing is charged while you keep your system shut down. What’s even better is that Amazon offers entry-level machines for free if you create an account and share your credit card information. As long as you use free tier servers, not even a dime is charged.
Actually, these free tier servers come with the OS of your choice pre-installed, and they are the best option to use as SmartConnector servers or as log sources. They are ready to use in 10 minutes or less.
First, we should log into the EC2 console at http://aws.amazon.com/ec2 (registration required).
From the next screen, we choose the EC2 option under Compute.
Once successfully logged into the EC2 Management Console, we launch an instance, either via the “Launch Instance” link on the home page or via the Instances option in the left-side menu.
On the next screen, we start configuring our server. The first step is to choose the OS (RHEL 6.5) among the Community AMIs proposed by Amazon.
In Step 2, we choose an instance type among the many pre-set instance types. This is a very important step, as how much we are going to pay and the performance we are going to get largely depend on the instance we choose. For ArcSight Logger, an m4.large instance is sufficient for test purposes, while for ESM, starting with m4.2xlarge or higher is a wise choice.
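If you prefer scripting the launch instead of clicking through the console, the AWS CLI can do the same job. Below is only a minimal sketch, assuming the CLI is installed and configured; the AMI ID and key pair name in angle brackets are placeholders you would replace with your own values:
$ aws ec2 run-instances --image-id <rhel-6.5-ami-id> \
    --instance-type m4.large \
    --key-name <your-key-pair> \
    --count 1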
In Step 3, almost all the other details not covered in Step 2 are chosen. At this step, I highly recommend choosing a /28 subnet for your environment and enabling the “Auto-assign Public IP” option. You may choose to use dedicated hardware if you have some bucks to spend; that would surely give you better performance (I have never used dedicated hardware on EC2). Finally, you may create more than one instance if you plan to install more than one component (ESM, Logger, ArcMC), which will definitely save you some time.
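The same subnet settings can also be prepared from the AWS CLI. This is a sketch only: the VPC and subnet IDs are placeholders, and the 10.0.0.0/28 block is just an example CIDR.
$ aws ec2 create-subnet --vpc-id <your-vpc-id> --cidr-block 10.0.0.0/28
$ aws ec2 modify-subnet-attribute --subnet-id <new-subnet-id> --map-public-ip-on-launch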
In Step 4, we set up the storage options. For both ESM and Logger, we need at least 50 GB of space for the /opt/arcsight directory and at least 5 GB of free space for the /tmp directory. Because of that, we add two new volumes with slightly more space than the bare minimum demanded, which we will configure later.
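For reference, the same two extra volumes can be declared at launch time with the CLI’s --block-device-mappings option. This is only a sketch matching the sizes used later in this article (52 GB and 8 GB); the “...” stands for the launch options shown earlier, and the device names follow the common EC2 convention where /dev/sdb and /dev/sdc show up inside the guest as xvdb and xvdc:
$ aws ec2 run-instances ... \
    --block-device-mappings \
    '[{"DeviceName":"/dev/sdb","Ebs":{"VolumeSize":52}},{"DeviceName":"/dev/sdc","Ebs":{"VolumeSize":8}}]'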
After Step 4, we jump to Step 6 and set the access rules for our system. Granting inbound permission for TCP ports 9000, 8443 and 443 (in addition to the initial TCP 22 for SSH) in a single security group allows us to apply that security group to both ESM and Logger. You should, of course, limit access to your system to your own public IP, in case you have a static IP.
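If you would rather add these rules from the command line than from the console, a small loop like the one below would do it. It is only a sketch; the security group ID and your public IP are placeholders:
$ for p in 22 443 8443 9000; do
    aws ec2 authorize-security-group-ingress --group-id <your-sg-id> \
      --protocol tcp --port $p --cidr <your-public-ip>/32
  done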
After Step 6, our configuration is ready to launch. Accessing Linux devices on EC2 may be a new thing for some readers, which I will cover in a short article.
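In short, the connection is made over SSH with the key pair chosen at launch time and the ec2-user account; the key file name and public IP below are placeholders:
$ ssh -i <your-key-pair>.pem ec2-user@<instance-public-ip>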
Once we log in to our newly launched server with the ec2-user account, we should format and mount the partitions following the instructions below (after running the sudo su command, of course).
1. First, we check our partitions with the lsblk command and see output like the one below:
$ lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  10G  0 disk
└─xvda1 202:1    0   6G  0 part /
xvdb    202:16   0  52G  0 disk
xvdc    202:17   0   8G  0 disk
2. Then we format the xvdb and xvdc drives with the ext4 filesystem as below:
$ mkfs -t ext4 /dev/xvdb
$ mkfs -t ext4 /dev/xvdc
3. In the third step, we mount the partitions on the directories we need as follows (create /opt/arcsight first with mkdir -p /opt/arcsight if it does not exist yet):
$ mount /dev/xvdb /opt/arcsight
$ mount /dev/xvdc /tmp
4. Basically, after this step we are ready to follow the same steps we would at work or at home to install ArcSight systems. However, if we skip the following step, the configuration we have done will not persist, and we may not see the additional partitions mounted after the next reboot. For that reason, we apply the following commands:
$ cp /etc/fstab /etc/fstab.orig
$ vi /etc/fstab
Insert the lines below at the bottom of the /etc/fstab file and save it.
/dev/xvdb /opt/arcsight ext4 defaults,nofail 0 2
/dev/xvdc /tmp ext4 defaults,nofail 0 2
You can check the AWS documentation for more detail about this configuration.
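Before relying on the new fstab entries, you can sanity-check them without a reboot: mount -a attempts to mount everything listed in /etc/fstab, and df confirms that the volumes are where we expect them.
$ mount -a
$ df -h /opt/arcsight /tmp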
Once you have finished all these steps, you have a consistent environment for your ArcSight systems.