
Thursday, August 1, 2019

AWS S3 Access Control Options


Understanding which access control mechanism to employ to control and audit access to your S3 buckets and objects is tricky, because the right choice depends on how you intend to use the buckets and on the way you work within your organization.

I went through several blogs, forums and Amazon's own documentation to make the topic understandable and easy to remember. Doing so clarified many points for me, and I hope it helps you as well.

There are mainly 3 ways of regulating access to buckets and objects in S3, namely:
  • Bucket Policies
  • Bucket ACLs
  • IAM Policies

Bucket Policies: A “Bucket Policy” is an internal regulation structure specific to S3, which means that bucket policies can only be employed within S3 and nowhere else. They are applied at the bucket level, which also means that the same policy must be applied manually to each and every bucket that needs the same controls.

A bucket policy allows AWS admins to allow or deny specific actions (put, delete, read, etc.) for given users or groups (principals).
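
A minimal sketch of such a policy, granting another account read access to objects (the bucket name and account ID are made up for illustration), saved as policy.json:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "AllowCrossAccountRead",
          "Effect": "Allow",
          "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
          "Action": "s3:GetObject",
          "Resource": "arn:aws:s3:::example-bucket/*"
        }
      ]
    }

It can then be applied with the AWS CLI:

    $ aws s3api put-bucket-policy --bucket example-bucket --policy file://policy.json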

Typical Use Cases
  • When granting cross-account access to S3 resources in a simple way, without using IAM.

You can use ACLs to grant cross-account permissions to other accounts, but ACLs support only a finite set of permissions (List, Read, Write), which does not include all Amazon S3 permissions. For example, you cannot grant permissions on bucket sub-resources using an ACL. Although both bucket and user policies support granting permission for all Amazon S3 operations, IAM policies manage permissions ONLY for users in your own account. For cross-account permissions to other AWS accounts or users in another account, you must use a bucket policy.
  • When there is a need to write bigger policies: bucket policies can be up to 20 KB, while IAM policies can be up to 2 KB for users, 5 KB for groups and 10 KB for roles.
  • When you prefer keeping the access controls within S3.

IAM Policies: An IAM policy is the de facto way of regulating access control for all resources in AWS, which makes IAM policies more general-purpose.

An interesting difference between S3 bucket policies and IAM policies is that a bucket policy's JSON document contains a “Principal” field specifying the user or group to which the statement applies. The Principal field does not exist in IAM policies because, to be functional, they already have to be attached to a user or a group.
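
As a rough counterpart to the bucket policy sketch above (the user name, policy name and file name are illustrative), the identity-based version of the same statement omits Principal and is attached directly to the user, here saved as iam-policy.json:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": "s3:GetObject",
          "Resource": "arn:aws:s3:::example-bucket/*"
        }
      ]
    }

    $ aws iam put-user-policy --user-name alice --policy-name S3ReadExample --policy-document file://iam-policy.json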

Typical Use Cases
  • Creating centrally managed, user-based access policies and controlling everything from IAM.
  • Managing a larger number of buckets.

Bucket ACLs: Bucket ACLs are the legacy way of controlling access to buckets and objects in S3. They are more granular than bucket policies, since an ACL can be applied per object rather than per bucket.

Bucket ACLs use an Amazon S3–specific XML schema and look nothing like bucket policies or IAM policies, which are JSON documents.

Bucket ACLs currently support only 3 permissions, namely List, Read and Write. Fine-grained permissions like those in bucket policies or IAM policies are not possible with Bucket ACLs.

There are limits to managing permissions using ACLs. For example:
  • You can grant permissions only to other AWS accounts; you cannot grant permissions to users in your account.
  • You cannot grant conditional permissions, nor can you explicitly deny permissions.

ACLs are suitable for specific scenarios. For example, if a bucket owner allows other AWS accounts to upload objects, the permissions on those objects can be managed only via the object ACL, by the AWS account that owns the object.

Typical Use Cases
  • Cross-account access.
  • Object level permission setting requirements within a bucket.
  • The only recommended use case for the bucket ACL is to grant write permission to the Amazon S3 Log Delivery group to write access log objects to your bucket.
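
For that last use case, a minimal AWS CLI sketch (the bucket name is made up; note that put-bucket-acl replaces the bucket's existing ACL):

    $ aws s3api put-bucket-acl --bucket example-log-bucket \
          --grant-write URI=http://acs.amazonaws.com/groups/s3/LogDelivery \
          --grant-read-acp URI=http://acs.amazonaws.com/groups/s3/LogDelivery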

IAM policies are identity-based (attached to a user, group or role), while bucket policies and bucket ACLs are resource-based (attached to the bucket or object itself).



If you’re still unsure of which to use, consider which audit question is most important to you:
  • If you’re more interested in “What can this user do in AWS?” then IAM policies are probably the way to go. You can easily answer this by looking up an IAM user and then examining their IAM policies to see what rights they have. 
  • If you’re more interested in “Who can access this S3 bucket?” then S3 bucket policies will likely suit you better. You can easily answer this by looking up a bucket and examining the bucket policy.
Avoid using Bucket ACLs except for the specific cases mentioned above.


Tuesday, July 16, 2019

Account compromise incident response in AWS


In case of account compromise, the suggested actions to take are:
  • Change the root password and delete root access keys if you haven’t done that before.
  • Add MFA to the root account if you haven’t done that before.
  • Change all user account passwords (I have strong doubts about this one, but the documentation says so; for certification exam purposes, consider it true).
  • Delete or rotate potentially compromised access keys (a CLI sketch follows below).
  • Delete unrecognized/unauthorized instances and IAM users with the help of AWS Config and CloudTrail.
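
As a minimal sketch of the key-rotation step (the user name and key ID are made up), the AWS CLI flow could look like this:

    $ aws iam list-access-keys --user-name alice      # find the user's key IDs
    $ aws iam update-access-key --user-name alice --access-key-id AKIAIOSFODNN7EXAMPLE --status Inactive
    $ aws iam create-access-key --user-name alice     # issue a replacement key
    $ aws iam delete-access-key --user-name alice --access-key-id AKIAIOSFODNN7EXAMPLE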

Performing Security Assessments in AWS


I will try to summarize very briefly which AWS services can be assessed for security, and under which conditions. Security assessments cover application and infrastructure penetration tests, DDoS tests and other network stress tests.
Pentesting is only allowed for the following 8 AWS services:
  • EC2 instances, NAT Gateways, ELBs
  • RDS
  • CloudFront
  • Aurora
  • API Gateways
  • Lambda and Lambda@Edge
  • Lightsail
  • Elastic Beanstalk
Prior to the pentest, pen-test-nda@amazon.com should be contacted for a private preview and NDA.
The following activities are prohibited:
  • DNS zone walking via Route 53 Hosted Zones
  • DoS, Simulated DoS and DDoS
  • Port Flooding
  • Protocol Flooding
  • Request Flooding
Scans are suggested to be limited to 1 Gbps or 10,000 requests per second.
The instance types given below are recommended to be excluded from security assessments:
  • t3.nano
  • t2.nano
  • t1.micro
  • m1.small
IP addresses to be used during the security assessment should be sent to aws-security-simulated-event@amazon.com.
The following events are considered simulated events:
  • Security simulations or security game days
  • Support simulations or support game days
  • War game simulations
  • White cards
  • Red team and blue team testing
  • Disaster recovery simulations.
  • Other simulated events
AWS must be informed about these events through aws-security-simulated-event@amazon.com and a detailed examination takes place before approval.
For network stress testing such as DDoS tests, customers are supported via AWS's pre-approved vendors.
For more information, you can consult the https://aws.amazon.com/security/penetration-testing page.

Sunday, August 2, 2015

SIEM Deployment - Installing ArcSight Systems on Amazon EC2

SIEM systems can easily be considered “Big Data” systems, as the resources they use and the amount of information they process are at Big Data scale. Running SIEM components requires serious computing resources: not only RAM, CPU and hard disk space, but also a high IOPS rate.

For those reasons, if you do not have the chance to run a test environment at work or at home (my home server with an i7 CPU (8 threads), 16 GB of RAM and a 256 GB SSD honestly did not satisfy me), you should look for another solution. After considering my options for buying a better home server, I found out that cloud solutions such as Amazon's EC2 offer a much better TCO and ROI compared to owning a home server.

In this article, I'll guide you through setting up your own Red Hat Enterprise Linux 6.5 server, on which you can install ArcSight ESM, ArcSight Logger or ArcSight Management Center.

For those who are new to Amazon's Elastic Compute Cloud (EC2) service, it is basically an Infrastructure as a Service (IaaS) offering where you can build a server with the resources you like and pay as you go. Nothing is charged while you keep your system shut down. What's even better is that Amazon offers entry-level machines for free if you create an account and share your credit card information. As long as you use free-tier servers, not even a dime is charged.

These free-tier servers actually come with the OS you want preinstalled, and they are the best option to use as SmartConnector servers or as log sources. They are ready to use in 10 minutes or less.

First, we should log into the EC2 console via the http://aws.amazon.com/ec2 link (registration required).



From the next screen, we choose EC2 option under Compute options.


Once successfully logged into the EC2 Management Console, we should launch an instance, either via the “Launch Instance” link on the home page or via the Instances option in the left-side menu.


On the next screen, we start configuring our server. The first step is to choose the OS (RHEL 6.5) among the Community AMIs proposed by Amazon.


In Step 2, we should choose an Instance Type among many pre-set instances. This is a very important step, as both how much we are going to pay and the performance we are going to get largely depend on the instance we choose. For ArcSight Logger, an m4.large instance is sufficient for test purposes, while for ESM, starting with m4.2xlarge or higher is a wise choice.


In Step 3, almost all the details not covered in Step 2 are configured. At this step, I highly recommend choosing a /28 subnet for your environment and enabling the “Auto-assign Public IP” option. You may choose to use dedicated hardware if you have some bucks to spend; that would surely give you better performance (I never used dedicated hardware on EC2). Finally, you may create more than one instance if you plan to install more than one component (ESM, Logger, ArcMC); that would definitely save you some time.


In Step 4, we set up the storage options. For both ESM and Logger, we need at least 50 GB of space for the /opt/arcsight directory and at least 5 GB of free space for the /tmp directory. Because of that, we add 2 new volumes with slightly more space than the bare minimum demanded, which we will configure later.
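
For the record, Steps 2–4 can also be scripted. A rough AWS CLI equivalent (the AMI ID, subnet ID and key pair name are placeholders):

    $ aws ec2 run-instances --image-id ami-00000000 --instance-type m4.large \
          --subnet-id subnet-00000000 --associate-public-ip-address --key-name my-keypair \
          --block-device-mappings '[{"DeviceName":"/dev/sdb","Ebs":{"VolumeSize":52}},{"DeviceName":"/dev/sdc","Ebs":{"VolumeSize":8}}]'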


After Step 4, we jump to Step 6 and set the access rules for our system. Granting inbound permission for TCP ports 9000, 8443 and 443 (in addition to the initial TCP 22 for SSH) in a single security group allows us to apply this security group to both ESM and Logger. You should of course limit access to your system to your own public IP, in case you have a static IP.
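
The same rules can be added from the CLI. A minimal sketch (the security group ID and source IP are placeholders):

    $ for port in 22 443 8443 9000; do
          aws ec2 authorize-security-group-ingress --group-id sg-00000000 \
              --protocol tcp --port "$port" --cidr 203.0.113.10/32
      done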


After Step 6, our configuration is ready to launch. Accessing Linux instances on EC2 may be a new topic for some readers; I will cover it in a short article.

Once we log in to our newly installed server with the ec2-user account, we should format and mount the partitions following the instructions below (after running the sudo su command, of course).

1. First we check our partitions with the lsblk command and see output like below:

    $ lsblk
    NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
    xvda    202:0    0  10G  0 disk
    └─xvda1 202:1    0   6G  0 part /
    xvdb    202:16   0  52G  0 disk
    xvdc    202:17   0   8G  0 disk

2. Then we format the xvdb and xvdc drives with the ext4 filesystem:

   $ mkfs -t ext4 /dev/xvdb
   $ mkfs -t ext4 /dev/xvdc

3. In the third step, we mount the partitions on the directories we need as follows:

   $ mkdir -p /opt/arcsight     # create the mount point if it does not exist yet
   $ mount /dev/xvdb /opt/arcsight
   $ mount /dev/xvdc /tmp

4. After this step, we are basically ready to follow the same steps we would at work or at home to install ArcSight systems. However, if we skip the following step, the configuration will not persist and the additional partitions may not be mounted after the next reboot. For that reason, we apply the following commands:

   $ cp /etc/fstab  /etc/fstab.orig
   $ vi /etc/fstab

Insert the lines below at the bottom of the /etc/fstab file and save it.

/dev/xvdb   /opt/arcsight   ext4   defaults,nofail   0   2
/dev/xvdc   /tmp            ext4   defaults,nofail   0   2
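
To sanity-check the new entries without rebooting, you can remount everything from /etc/fstab and confirm the result:

   $ umount /opt/arcsight /tmp     # release the volumes we mounted by hand
   $ mount -a                      # remount everything listed in /etc/fstab
   $ df -h /opt/arcsight /tmp      # confirm both mount points use the new volumes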

You can follow the AWS documentation for more details about this configuration.

Once you have finished all these steps, you will have a consistent environment for your ArcSight systems.