Monday, November 7, 2016

Secure your AWS account using CloudFormation


The very first thing you need to do while building your AWS infrastructure is to enable and configure all the AWS account-level security features, such as CloudTrail, AWS Config, CloudWatch, IAM, etc.
To do this, you can use my Amazon AWS Account level security checklist and how-to or any other source.
To avoid manual steps and to stay aligned with the Security-as-Code concept, I use a set of CloudFormation templates, a simplified version of which I would like to share:


Global Security stack template structure:


security.global.json - the parent template that links all nested templates together and controls the dependencies between the nested stacks.
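For illustration, here is a minimal sketch of how such a parent template can wire two nested stacks together (the resource names, bucket URL and Environment parameter are placeholders, not the actual template contents):

"Resources": {
  "CloudTrailStack": {
    "Type": "AWS::CloudFormation::Stack",
    "Properties": {
      "TemplateURL": "https://s3.amazonaws.com/com.ChangeMe.prod.cloudform/cloudtrail.global.json",
      "Parameters": {"Environment": {"Ref": "Environment"}}
    }
  },
  "CloudTrailAlarmsStack": {
    "Type": "AWS::CloudFormation::Stack",
    "DependsOn": "CloudTrailStack",
    "Properties": {
      "TemplateURL": "https://s3.amazonaws.com/com.ChangeMe.prod.cloudform/cloudtrailalarms.global.json",
      "Parameters": {"Environment": {"Ref": "Environment"}}
    }
  }
}

The DependsOn attribute is what controls the creation order between the nested stacks.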


cloudtrail.global.json - nested template for the Global configuration of CloudTrail:

  • creates an S3 bucket for the logs
  • creates CloudTrail-related IAM roles and policies
  • creates the CloudTrail CloudWatch Logs log group
  • enables CloudTrail in the default region, including global events and the multi-region feature
  • creates the SNS and SQS configuration for easy integration with the Splunk AWS app

cloudtrailalarms.global.json - nested template for the Global CloudWatch Logs alarms and security metrics creation. It uses a FilterMap to create different security-related filters for the CloudTrail log group, with corresponding metrics and notifications for suspicious or dangerous events (see the boto3 sketch after the topic list below). You can customise the filters on a per-environment basis.

Predefined filters are:
  • rds-change: RDS related changes
  • iam-change: IAM changes
  • srt-instance: Start, Reboot, Terminate instance
  • large-instance: launching large instances
  • massive-operations: massive operations, more than 10 in 5 min
  • massive-terminations: massive terminations, more than 10 in 5 min
  • detach-force-ebs: force detachment of the EBS volume from the instance
  • change-critical-ebs: any changes related to the critical EBS volumes
  • change-secgroup: any changes related to the security group
  • create-delete-secgroup: creation and deletion of the security group 
  • secgroup-instance: attaching security group to the instance
  • route-change: routing changes
  • create-delete-vpc: creation and deletion of a VPC
  • netacl-change: changes at Network ACL
  • cloudtrail-change: changes in the CloudTrail configuration
  • cloudformation-change: changes related to the CloudFormation
  • root-access: any root access events
  • unauthorised: failed and unauthorised operations
  • igw-change: Internet Gateway related changes
  • vpc-flow-logs: Delete or Create VPC flow logs
  • critical-instance: any operation on the critical instances
  • eip-change: Elastic IP changes
  • net-access: any access from outside the predefined known IP ranges

Four preconfigured notification topics:
  • InfosecEmailTopic
  • DevOpsEmailTopic
  • InfosecSMSTopic
  • DevOpsSMSTopic
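For illustration, here is a minimal boto3 sketch of what a single FilterMap entry roughly expands to: a metric filter on the CloudTrail log group plus an alarm that notifies one of the topics. The log group name, namespace, filter pattern and topic ARN are placeholder assumptions, not values taken from the template:

import boto3

logs = boto3.client('logs')
cloudwatch = boto3.client('cloudwatch')

# metric filter: count unauthorised/failed API calls in the CloudTrail log group
logs.put_metric_filter(
    logGroupName='CloudTrail/DefaultLogGroup',
    filterName='unauthorised',
    filterPattern='{ ($.errorCode = "*UnauthorizedOperation") || ($.errorCode = "AccessDenied*") }',
    metricTransformations=[{'metricName': 'UnauthorisedOperations',
                            'metricNamespace': 'Security',
                            'metricValue': '1'}]
)

# alarm: notify the infosec topic when the metric fires
cloudwatch.put_metric_alarm(
    AlarmName='unauthorised-operations',
    MetricName='UnauthorisedOperations',
    Namespace='Security',
    Statistic='Sum',
    Period=300,
    EvaluationPeriods=1,
    Threshold=1.0,
    ComparisonOperator='GreaterThanOrEqualToThreshold',
    AlarmActions=['arn:aws:sns:eu-west-1:123456789012:InfosecEmailTopic']
)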


awsconfig.global.json - nested template for the Global AWS Config service configuration (see the boto3 sketch after this list):
  • creates an S3 bucket for the config dumps
  • creates AWS Config-related IAM roles and policies
  • creates the AWS Config delivery channel and schedules config dumps (hourly)
  • creates and enables the AWS Config recorder
  • creates the SNS and SQS configuration for easy integration with the Splunk AWS app
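A minimal boto3 sketch of the same AWS Config setup (the role ARN and bucket name are placeholders; the template does this in CloudFormation, not in Python):

import boto3

config = boto3.client('config')

# recorder: record all supported resource types, including global ones (IAM)
config.put_configuration_recorder(ConfigurationRecorder={
    'name': 'default',
    'roleARN': 'arn:aws:iam::123456789012:role/config-role',
    'recordingGroup': {'allSupported': True, 'includeGlobalResourceTypes': True}
})

# delivery channel: where to dump the snapshots, and how often
config.put_delivery_channel(DeliveryChannel={
    'name': 'default',
    's3BucketName': 'com-changeme-prod-config',
    'configSnapshotDeliveryProperties': {'deliveryFrequency': 'One_Hour'}
})

config.start_configuration_recorder(ConfigurationRecorderName='default')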

cloudwatchsubs.global.json - nested template for configuring an AWS CloudWatch Logs subscription filter to extract and analyse the most severe CloudTrail events using a custom Lambda function:
  • creates the Lambda function and all required roles and permissions
  • creates the subscription filter as a compilation of the filters from the FilterMap
      
It currently uses the following filters and aggregates them into one, due to the AWS CloudWatch Logs subscription limitation (only one filter per log group is supported; see the boto3 sketch after this list):
  • critical-instance
  • iam-change
  • srt-instance
  • cloudtrail-change
  • root-access
  • net-access
  • detach-force-ebs
  • unauthorised
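A sketch of how such an aggregated subscription filter might be created with boto3; the log group, function ARN and the exact pattern are illustrative placeholders (individual patterns are simply OR-ed together with ||):

import boto3

logs = boto3.client('logs')

# only one subscription filter per log group is allowed, so patterns get OR-ed
pattern = ('{ ($.userIdentity.type = "Root") || '
           '($.eventName = "ModifyInstanceAttribute") || '
           '($.errorCode = "*UnauthorizedOperation") }')

# note: the Lambda function must also grant logs.amazonaws.com invoke permission
logs.put_subscription_filter(
    logGroupName='CloudTrail/DefaultLogGroup',
    filterName='LogCritical',
    filterPattern=pattern,
    destinationArn='arn:aws:lambda:eu-west-1:123456789012:function:LogCritical'
)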

iam.global.json - nested template for the IAM Global configuration:
  • creates Infosec Team IAM Group and managed policy
  • creates DevOps Team IAM Group and managed policy
  • creates DBA Team IAM Group and managed policy
  • creates a Self Service Policy for users to manage their own API keys and MFA
  • creates ProtectProdEnviroment to protect the production environment from destructive actions
  • creates EnforceMFAPolicy to enforce MFA for sensitive operations (the idea is sketched after this list)
  • creates EnforceAccessFromOfficePolicy to restrict some operations to office source IPs
  • creates the DomainJoin role and all required policies to perform an automated domain join
  • creates the SaltMasterPolicy and Role for the Configuration Management Tool (in this case, Salt)
  • creates the SQLDataBaseInstancePolicy and an instance profile example policy
  • creates a SIEM system example policy
  • creates the VPC flow log role
  • creates and manages the SIEM user and API keys
  • creates and manages the SMTP user (for the AWS SES service) and API keys
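To illustrate the idea behind EnforceMFAPolicy (this is a sketch of the well-known generic pattern, with example actions, not the template's actual statement), a deny that kicks in when the request was not MFA-authenticated:

{
  "Effect": "Deny",
  "Action": ["ec2:TerminateInstances", "iam:*"],
  "Resource": "*",
  "Condition": {
    "BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}
  }
}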


cloudwatchsubs_kinesis.global.json - PoC template (not linked as nested to security.global.json) for configuring an AWS CloudWatch Logs subscription filter that sends the most severe CloudTrail events to an AWS Kinesis stream, using a subscription filter similar to the one in cloudwatchsubs.global.json.

Supported features:


Environments and regions: the stack supports an unlimited number of environments, with four predefined (staging, dev, prod, and dr), and uses a one-account, one-region-per-environment concept to reduce the blast radius if an account becomes compromised.

AWS services used by the stack: CloudTrail, AWS Config, CloudWatch, CloudWatch Logs and Events, IAM, Lambda, Kinesis.

To deploy:

  1. Create a bucket using the following naming convention: com.ChangeMe.EnvironmentName.cloudform, replacing ChangeMe and EnvironmentName with your values to make it look like this: com.it-security.prod.cloudform
  2. Enable bucket versioning.
  3. In the templates security.global.json and cloudwatchsubs.global.json, replace "ChangeMe" with the name used in the bucket creation.
  4. In the template cloudtrailalarms.global.json, modify the SNS endpoints for email notification, infosec@ChangeMe.com and devops@ChangeMe.com; add endpoints with mobile phone numbers for SMS notification to the appropriate SNS topics if needed.
  5. Modify the iam.global.json template to address your SQL database bucket location (com-ChangeMe-", {"Ref": "Environment"} , "-sqldb/) and modify any permissions if needed, according to your organisation's structure, roles, responsibilities and services.
  6. Modify the FilterMap in the cloudtrailalarms.global.json and cloudwatchsubs.global.json templates to make the filters work for your infrastructure (critical instance IDs, critical volume IDs, your office IP range, your NAT gateways, etc).
  7. Zip the example Lambda function LogCritical_lambda_security_global.py as LogCritical_lambda_security_global.zip.
  8. Upload this function into the S3 bucket created at step 1, copy the object version (GUI: show versions, object properties) and insert it into the cloudwatchsubs.global.json template in the "LogCriticalLambdaCodeVer" mapping at the appropriate environment (prod, staging, ...).
  9. Modify the "regions" Environments mapping in the iam.global.json and cloudwatchsubs.global.json templates to specify the correct AWS region you are using for the deployment.
  10. Upload all *.global.json templates into the S3 bucket created at step 1.
  11. Create a new CloudFormation stack using the parent security template security.global.json and your bucket name (example: https://s3.amazonaws.com/com.it-security.prod.cloudform/security.global.json), call it "Security" and specify the environment name you are going to deploy (or script the creation; see the boto3 sketch below).
  12. Done!
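If you prefer to script step 11, here is a minimal boto3 sketch (the bucket URL and environment value are examples; the Environment parameter name follows step 5):

import boto3

cloudformation = boto3.client('cloudformation')

cloudformation.create_stack(
    StackName='Security',
    TemplateURL='https://s3.amazonaws.com/com.it-security.prod.cloudform/security.global.json',
    Parameters=[{'ParameterKey': 'Environment', 'ParameterValue': 'prod'}],
    # the nested templates create IAM resources, so an IAM capability is required
    # (CAPABILITY_NAMED_IAM if your IAM resources carry explicit names)
    Capabilities=['CAPABILITY_IAM']
)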

Tuesday, October 18, 2016

Self-Defending Cloud PoC or Amazon CloudWatch Events usage

Problem: a malicious attacker gets privileged access to your AWS account and destroys your production infrastructure in a matter of seconds.

With cloud-based infrastructure you can't rely on classic SIEM solutions: by the time your SIEM detects the attack, your infrastructure will be gone. We need a "near real time" way to detect and mitigate the attack.

Attack scenario: using a compromised AWS API key (no MFA) and the CLI/SDK to perform destructive actions.

Attack detection and mitigation strategy:
All destructive actions start with EC2 instance termination. To prevent this scenario, you should always have the "TerminationProtection" feature enabled on your production instances (a minimal boto3 sketch follows below). Based on this, an attacker must disable termination protection before demolishing your environment. For the PoC, I will use the disabling of "TerminationProtection" as the attack detector (sure thing, real attack detection is a way more complicated process).
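For reference, a minimal boto3 sketch of enabling termination protection on an instance (the instance ID is a placeholder):

import boto3

ec2 = boto3.client('ec2')

# enable termination protection on a production instance
ec2.modify_instance_attribute(
    InstanceId='i-0123456789abcdef0',
    DisableApiTermination={'Value': True}
)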

Starting point: an AWS API key with an admin policy attached; all production instances protected using AWS Termination Protection.




Possible solutions and their attack mitigation delays:
  • SIEM
  • CloudWatch Logs and alarms
  • CloudWatch Subscriptions
  • CloudWatch Events




Implementation: 

Design:
Based on the implementation scenarios listed above and their performance (tested during the PoC), the fastest way is to leverage AWS CloudWatch Events and trigger a Lambda function.

Speaking AWS technical language, we need to:

  1. Choose what type of AWS event we are looking for. Based on our attack detection strategy, we are looking for the EC2 call modify-instance-attribute: exactly this API call is used to enable/disable TerminationProtection. So let's look for the "AWS API Call" type of CloudWatch events.
  2. Create an event rule: "match incoming events and route them to one or more targets for processing"; in our case the target is a Lambda function.
  3. Create all the required policies and the Lambda function role in IAM.
  4. Build the Lambda function itself.


Setting up the event rule:

I used the following event pattern inside the CloudWatch Events rule:

{
  "detail-type": [ "AWS API Call via CloudTrail" ],
  "detail": {
    "eventSource": [ "ec2.amazonaws.com" ],
    "eventName": [ "ModifyInstanceAttribute" ]
  }
}


Getting sample event:

To start writing our event-detection-and-mitigation Lambda function, we need an example of the AWS event for the API call we are monitoring.
We can achieve this with the following simple Lambda function:

def lambda_handler(event, context):
    # dump the raw event into the function's CloudWatch Logs stream
    print event


or, if you need nicely formatted JSON to test your Lambda function offline:

import json

def lambda_handler(event, context):
    print json.dumps(event, indent=4, sort_keys=False)

You will find the output of your Lambda function (the result of the print statement) in the corresponding CloudWatch Logs stream (named after your Lambda function).


Challenges with event format:

During the first tests, I found that Amazon AWS does not follow any JSON contract (defined format), even for one and the same API call. Making the same API call in three different ways produced three different event formats:

Disabling TerminationProtection from GUI with MFA:

{u'account': u'150905', u'region': u'eu-west-1', u'detail': {u'eventVersion': u'1.05', u'eventID': u'b3c4d3b4-353e-44bf-8973-37abccd085b5', u'eventTime': u'2016-10-14T17:31:37Z', u'requestParameters': {u'instanceId': u'i-d8916d57', u'disableApiTermination': {u'value': False}}, u'eventType': u'AwsApiCall', u'responseElements': {u'_return': True}, u'awsRegion': u'eu-west-1', u'eventName': u'ModifyInstanceAttribute', u'userIdentity': {u'userName': u'ihork', u'principalId': u'AIDAI3UNW', u'accessKeyId': u'ASIAIN2', u'invokedBy': u'signin.amazonaws.com', u'sessionContext': {u'attributes': {u'creationDate': u'2016-10-14T16:48:44Z', u'mfaAuthenticated': u'true'}}, u'type': u'IAMUser', u'arn': u'arn:aws:iam::150905:user/igor', u'accountId': u'150905'}, u'eventSource': u'ec2.amazonaws.com', u'requestID': u'e7b585e-af38-49d0-88a8-979ef5052f', u'userAgent': u'signin.amazonaws.com', u'sourceIPAddress': u'174.231.5.2'}, u'detail-type': u'AWS API Call via CloudTrail', u'source': u'aws.ec2', u'version': u'0', u'time': u'2016-10-14T17:31:37Z', u'id': u'55084ea-e4bc-45e6-a7a6-0c8e7d16b32', u'resources': []}

Disabling TerminationProtection from aws cli (no MFA):

command:
$ aws ec2 modify-instance-attribute --no-disable-api-termination --instance-id i-378579b8 

event:
{u'account': u'150905', u'region': u'eu-west-1', u'detail': {u'eventVersion': u'1.05', u'eventID': u'f8ae9323-91b0-4100-b27b-dce348641a5c', u'eventTime': u'2016-10-14T17:31:46Z', u'requestParameters': {u'instanceId': u'i-d8916d57', u'disableApiTermination': {u'value': True}}, u'eventType': u'AwsApiCall', u'responseElements': {u'_return': True}, u'awsRegion': u'eu-west-1', u'eventName': u'ModifyInstanceAttribute', u'userIdentity': {u'userName': u'ihork', u'principalId': u'AIDAI3UNW', u'accessKeyId': u'ASIAIN2', u'invokedBy': u'signin.amazonaws.com', u'sessionContext': {u'attributes': {u'creationDate': u'2016-10-14T16:48:44Z', u'mfaAuthenticated': u'true'}}, u'type': u'IAMUser', u'arn': u'arn:aws:iam::150905:user/igor', u'accountId': u'150905'}, u'eventSource': u'ec2.amazonaws.com', u'requestID': u'cd889e-039e-4f8f-bfe9-4d293012335', u'userAgent': u'signin.amazonaws.com', u'sourceIPAddress': u'174.231.5.2'}, u'detail-type': u'AWS API Call via CloudTrail', u'source': u'aws.ec2', u'version': u'0', u'time': u'2016-10-14T17:31:46Z', u'id': u'cd32fc6-39ae-4237-b46d-62d237d4d89', u'resources': []}

Disabling TerminationProtection from the AWS CLI, 2nd variant:

command:
$ aws ec2 modify-instance-attribute --attribute disableApiTermination --value false --instance-id i-199a6696 

event:
{u'account': u'150905', u'region': u'eu-west-1', u'detail': {u'eventVersion': u'1.05', u'eventID': u'75fe4852-d3d9-4c9c-a702-7f025e0c4c50', u'eventTime': u'2016-10-14T17:29:24Z', u'requestParameters': {u'instanceId': u'i-d8916d57', u'attribute': u'disableApiTermination', u'value': u'false'}, u'eventType': u'AwsApiCall', u'responseElements': {u'_return': True}, u'awsRegion': u'eu-west-1', u'eventName': u'ModifyInstanceAttribute', u'userIdentity': {u'userName': u'ihork', u'principalId': u'AIDAI3U7LBIY', u'accessKeyId': u'AKIAJL7', u'type': u'IAMUser', u'arn': u'arn:aws:iam::150905:user/igor', u'accountId': u'150905'}, u'eventSource': u'ec2.amazonaws.com', u'requestID': u'd3c46-2def-4450-b3c1-4827d9f78', u'userAgent': u'aws-cli/1.10.45 Python/2.7.11 Linux/4.7.3-100.fc23.x86_64 botocore/1.4.60', u'sourceIPAddress': u'174.231.5.2'}, u'detail-type': u'AWS API Call via CloudTrail', u'source': u'aws.ec2', u'version': u'0', u'time': u'2016-10-14T17:29:24Z', u'id': u'9fb88a9e-025b-4859-9b98-6180cd14a9b', u'resources': []}


Take a close look at:

'sessionContext': {u'attributes': {u'creationDate': u'2016-10-14T16:48:44Z', u'mfaAuthenticated': u'true'}} - do not expect this part of the JSON if you are not using MFA.

u'disableApiTermination': {u'value': True}} and 'attribute': u'disableApiTermination', u'value': u'false'} - the same API call, made with different AWS CLI options but serving the same purpose, produces two different events.

How can we disable a user in AWS?

You just can't disable a user in AWS. You can delete one, but you need to remove it from its groups first; that takes time, lines of code and API calls. The solution: attach an inline user policy with an explicit deny (which overrides all allows) for all the actions you need to block.

Lambda function:

Here is my PoC Lambda function: really "dirty" and serving only one simple use case:

import boto3

def lambda_handler(event, context):
    print event
    # analyzing the event
    if event['detail']['requestParameters'].get('disableApiTermination') != None:
        protection_status = event['detail']['requestParameters']['disableApiTermination']['value']
        UserName = event['detail']['userIdentity']['userName']
        UserID = event['detail']['userIdentity']['principalId']
        if event['detail']['userIdentity'].get('sessionContext') != None:
            mfa = event['detail']['userIdentity']['sessionContext']['attributes']['mfaAuthenticated']
        else:
            mfa = "false"
        print protection_status, UserName, UserID, mfa
        # disabling the user with an inline user policy if no MFA was used
        if mfa != "true" and not protection_status:
            iam = boto3.resource('iam')
            user_policy = iam.UserPolicy(UserName, 'disable_user')
            response = user_policy.put(PolicyDocument='{ "Version": "2012-10-17", "Statement": [{"Sid": "Disableuser01","Effect": "Deny","Action": ["ec2:StopInstances", "ec2:TerminateInstances"],"Resource": ["*"]}]}')
            print response



How near is this "near real time"?
My tests showed about a 40-second delay. IMHO, too much for "near real time". I'm still looking into the potential bottlenecks and delays that may be caused by the Lambda function itself or by the event type I used.


Conclusions:
- not "near real time" enough to react quickly and mitigate an attack without additional protective measures
- could work if you are able to detect the attack 40 seconds earlier
- could reduce the overall damage
- definitely very, very promising if the reaction delay gets smaller (let's say 5-10 sec)

Update:

Feel free to pull from GitHub the AWS CloudFormation template for the PoC above.

To deploy you need:

1. selfdefence.infosec.vpc.json - the template itself.

2. selfdefence_infosec.py - the Lambda function. You will need to zip it and upload it to an S3 bucket with versioning enabled.

3. Edit the template (selfdefence.infosec.vpc.json) and specify: the S3 bucket name in the format you.bucket.name.env.cloudform (where env is your environment name: prod, test, staging, etc.) and the S3 version of the selfdefence_infosec.zip file.

4. Upload the template to the same S3 bucket.

5. Create a stack using this template and specify the corresponding environment name at creation time.

Enjoy! 

Wednesday, September 21, 2016

S3 bucket policies for sensitive security logs storage

Inspired by this AWS blog post: How to Restrict Amazon S3 Bucket Access to a Specific IAM Role

Goal:
Build a storage for sensitive security logs using S3 bucket.

Restrictions:

  • EC2 instances can only upload logs.
  • The infosec team can only download logs and (just for this particular case) delete them with MFA.
  • All other users must not have any access, regardless of whatever is mentioned in their IAM policies.

Solution:
a custom bucket policy


        "PolicyDocument": {
          "Version": "2012-10-17",
          "Statement": [
            {
              "Sid": "OnlyForInfosecEyes",
              "Effect": "Deny",
              "Principal":"*",
              "Action": ["s3:GetObject*", "s3:Delete*", "s3:PutObjectAcl", "s3:PutObjectVersionAcl"],
              "Resource": "s3-top-secret-bucket/*",
              "Condition": {
                "StringNotLike": {
                  "aws:userId":  "InfosecGroupUserIDs"
                }
              }
            },
            {
              "Sid": "OnlyServerAllowToPut",
              "Effect": "Deny",
              "Principal":"*",
              "Action": ["s3:PutObject"],
              "Resource": "s3-top-secret-bucket/*",
              "Condition": {
                "StringNotLike": {
                  "aws:userId":  "SeverIAMRoleID:*"
                }
              }
            },
            {
              "Sid": "EnforceEncryption",
              "Effect": "Deny",
              "Principal":"*",
              "Action": ["s3:PutObject"],
              "Resource": "s3-top-secret-bucket/*",
              "Condition": {
                "Null": {
                  "s3:x-amz-server-side-encryption": "true"
                }
              }
            },
            {
              "Sid": "EnforceMFADelete",
              "Effect": "Deny",
              "Principal":"*",
              "Action": ["s3:Delete*"],
              "Resource": "s3-top-secret-bucket/*",
              "Condition": {
                "Null": {
                  "aws:MultiFactorAuthAge": true
                }
              }
            }
          ]
        }

Where:

InfosecGroupUserIDs - the list of the IAM infosec users' IDs (aws iam get-user --user-name USER-NAME)

ServerIAMRoleID:* - the ID of the IAM role used by your EC2 server instances, with ":*" added to cover all instances in this role (aws iam get-role --role-name ROLE-NAME)
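A small boto3 sketch for collecting these IDs (the group and role names are placeholders):

import boto3

iam = boto3.client('iam')

# unique IDs of all users in the infosec group, for the first policy condition
group = iam.get_group(GroupName='infosec')
infosec_ids = [user['UserId'] for user in group['Users']]

# unique ID of the server role; append ":*" to match all instances assuming it
role_id = iam.get_role(RoleName='server-role')['Role']['RoleId']

print infosec_ids
print role_id + ":*"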

Thursday, August 4, 2016

AWS EC2 status check alarms using python and boto3

An important part of security that we (infosec guys) often delegate :-) to the operations teams (NOC) is availability.
For IaaS, the service provider (Amazon AWS) is responsible for infrastructure availability, but we must design all the layers above (Availability Zones, VPCs, networks, instances and LBs) for high availability, or at least fault tolerance. One of the most important steps in this process is actually detecting an IaaS failure.
From AWS:
"With instance status monitoring, you can quickly determine whether Amazon EC2 has detected any problems that might prevent your instances from running applications. Amazon EC2 performs automated checks on every running EC2 instance to identify hardware and software issues. You can view the results of these status checks to identify specific and detectable problems."

Below is a simple Python script that will help you configure status check alarms for all your running instances:

#!/usr/bin/python

import boto3
import pprint

boto3.setup_default_session(profile_name='staging', region_name='eu-west-1')
ec2 = boto3.resource('ec2')
cloudwatch=boto3.resource('cloudwatch')

# Getting all running instances
instance_iterator = ec2.instances.all()
for instance in instance_iterator:
    instance_name = "unnamed"
    for tag in instance.tags:
        if tag['Key'] == "Name":
            instance_name = tag['Value']
    print instance_name, instance.id
    if instance.state["Name"] == "running":
        metric = cloudwatch.Metric("AWS/EC2", "StatusCheckFailed")
        response = metric.put_alarm(
            AlarmName=instance.id + "/" + instance_name + "-status-alarm",
            AlarmDescription='status check for %s %s' % (instance.id, instance_name),
            ActionsEnabled=True,
            OKActions=["arn:aws:sns:eu-west-1:your_account_id:YOUR_SNS-EmailSMS-Notification"],
            AlarmActions=["arn:aws:sns:eu-west-1:your_account_id:YOUR_SNS-EmailSMS-Notification"],
            Statistic="Maximum",
            Dimensions=[{'Name': 'InstanceId', 'Value': instance.id}],
            Period=60,
            EvaluationPeriods=2,
            Threshold=1.0,
            ComparisonOperator="GreaterThanOrEqualToThreshold"
        )
        pprint.pprint(response)

Thursday, July 14, 2016

AWS s3 bucket encryption audit (Updated)

The tool mentioned in my previous blog post got some new functionality:

https://github.com/IhorKravchuk/it-security/blob/master/s3_enc_check.py

1. Batch mode.

$ python s3_enc_check.py --bucket com-company-prod-data-backup --profile prod-read

This checks the specified bucket and gives you the option to save the report to a file or print it on screen.

$ s3_enc_check.py --bucket com-company-prod-data-backup --profile prod-read --file test_results.txt

This checks the specified bucket and saves the report to a file. Very useful for large buckets with thousands of objects.

2. Interactive mode.

Run the tool specifying just the AWS profile name, and it will scan your account for the available S3 buckets and let you choose one for a detailed audit.

$ python s3_enc_check.py --profile staging

3. Ability to check whether encryption is enforced at the bucket level using an AWS bucket policy.

Whichever way you start the tool, it will verify whether the bucket/buckets have S3 server-side encryption enforced.

Thursday, June 9, 2016

Amazon AWS Account level security checklist and how-to

Disclaimer :-):
There are a bunch of Amazon AWS security checklists and recommendations online. Definitely the best one is https://d0.awsstatic.com/whitepapers/compliance/AWS_CIS_Foundations_Benchmark.pdf
I'm not trying to reinvent the wheel, but to integrate and summarize the lessons I learned and the advice given to me by other AWS experts.

This checklist starts from the moment when you begin AWS account creation.


  1. Create a dedicated email address for the AWS account registration. This email will become your root account login name, so please do not use your everyday email or one published online.
  2. Enable MFA (Multi-Factor Authentication) on the root account. Details: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa.html
  3. Remove or DO NOT create any API key associated with the root account. API keys have no MFA: anyone who has the root API keys gets full access to your account, and unintentional leaking of an API key is quite a common security incident.
  4. Copy/bookmark/save the IAM sign-in URL. You will need it to access your AWS web GUI.
  5. Create an IAM user with the AdministratorAccess policy attached. It will be your new "root"-like account.
  6. Create the other IAM users required. Minimize their permissions using built-in AWS managed policies like PowerUserAccess, ReadOnlyAccess, AmazonEC2FullAccess, etc.
  7. Enable MFA on all users created. Details: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa.html
  8. Enforce a strict password policy. Details: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_passwords_account-policy.html
  9. Generate API keys for the users who need them. For "high-power" users, make these keys inactive; they will activate the keys through the MFA-protected AWS web GUI only when needed (see the boto3 sketch after this list).
  10. Do not use API keys in applications running inside AWS. Use IAM roles instead. Details: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
  11. Enable and configure CloudTrail for all regions + an S3 bucket for the CloudTrail logs. Details: http://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-create-a-trail-using-the-console-first-time.html
  12. Send CloudTrail events to CloudWatch Logs. Details: http://docs.aws.amazon.com/awscloudtrail/latest/userguide/send-cloudtrail-events-to-cloudwatch-logs.html
  13. Configure monitoring of the CloudTrail log files using Amazon CloudWatch Logs metric filters and alarms. Details: http://docs.aws.amazon.com/awscloudtrail/latest/userguide/monitor-cloudtrail-log-files-with-cloudwatch-logs.html
  14. Configure near-real-time log data processing using subscriptions and/or a Lambda function. Details: http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/Subscriptions.html
  15. Using #13 and #14, configure notifications for suspicious events.
  16. Enable the AWS Config service to get AWS configuration snapshots and change notifications. Details: http://docs.aws.amazon.com/config/latest/developerguide/gs-console.html
  17. Enable and configure AWS VPC flow logs to get visibility at the network level. Details: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/flow-logs.html
  18. Enforce server-side encryption on your S3 buckets. Details: http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html
  19. Enable encryption on your EBS volumes. Details: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html
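For step 9, a minimal boto3 sketch of deactivating a user's keys (the user name is a placeholder):

import boto3

iam = boto3.client('iam')

# make a "high-power" user's API keys inactive until they are really needed
for key in iam.list_access_keys(UserName='poweruser')['AccessKeyMetadata']:
    iam.update_access_key(UserName='poweruser',
                          AccessKeyId=key['AccessKeyId'],
                          Status='Inactive')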



Almost all of the steps covered above can and must be automated. I have already published some automation examples in this blog and will publish more.


Check your resulting account security status, and do this periodically.



Checklists and Best Practices:

AWS CIS Foundations Benchmark (must read document)
https://d0.awsstatic.com/whitepapers/compliance/AWS_CIS_Foundations_Benchmark.pdf

AWS Auditing Security Checklist
https://d0.awsstatic.com/whitepapers/compliance/AWS_Auditing_Security_Checklist.pdf

PS. I would like to thank Liem aka Pimpon for his advice in preparing this checklist.



Wednesday, June 8, 2016

AWS "one-liners": Configure AWS password policy in one shot

"As soon as you have passwords you need a password policy" - © captain obvious

Limitations:
AWS allows you to have only one password policy for the whole AWS account.

You can configure it using the web GUI or, if you prefer to have all your infrastructure and security as code, using boto3 and Python:

#!/usr/bin/python

import boto3
import pprint

boto3.setup_default_session(profile_name='staging')
iam=boto3.resource('iam')
account_password_policy = iam.AccountPasswordPolicy()
response = account_password_policy.update(
    MinimumPasswordLength=12,
    RequireSymbols=True,
    RequireNumbers=True,
    RequireUppercaseCharacters=True,
    RequireLowercaseCharacters=True,
    AllowUsersToChangePassword=True,
    MaxPasswordAge=90,
    PasswordReusePrevention=12,
    HardExpiry=False
)

pprint.pprint(response)


You can find more details about particular password policy parameters here:

http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_passwords_account-policy.html

Tuesday, March 22, 2016

Scary false positive or story about Best practice to secure your root AWS account



What is the best practice for securing the AWS root account? Not using it at all!

Let's clean it up first:


  1. Remove any API key associated with the root account.
  2. Reset the root password and change the associated email.
  3. Enable MFA (or deactivate the previous one and create a new one) on the root account.


Start using IAM:


  1. Copy/bookmark/save the IAM sign-in URL.
  2. Create the required users, including one with the AdministratorAccess policy attached.
  3. Enable MFA on all users created.


Secure root account:

  1. Print your root account credentials.
  2. Log in using the printed credentials to ensure that they work.
  3. Put them in a tamper-evident envelope.
  4. Add some signatures, stamps or voodoo to the envelope.
  5. Hide it in a safe box.
  6. Use it only in case of emergency :-)

Now let's add some monitoring, just in case:

  1. Enable and configure CloudTrail + a bucket for the logs.
  2. Configure CloudWatch Logs (CloudWatch) to process the CloudTrail logs.
  3. Add metric filters to detect root-user-related events.
  4. Set up alarms and notifications (SNS) for the metrics.


For root user events, the CloudWatch Logs metric filter looks like this:

Filter Name:
Security-CloudWatchAlarms-RootAccessMetricFilter
Filter Pattern:
{$.userIdentity.type = "Root"}
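A sketch of creating the same filter with boto3 (the log group, metric name and namespace are placeholders):

import boto3

logs = boto3.client('logs')

logs.put_metric_filter(
    logGroupName='CloudTrail/DefaultLogGroup',
    filterName='Security-CloudWatchAlarms-RootAccessMetricFilter',
    filterPattern='{$.userIdentity.type = "Root"}',
    metricTransformations=[{'metricName': 'RootAccessCount',
                            'metricNamespace': 'Security',
                            'metricValue': '1'}]
)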



I did everything mentioned above and was, let's say, "surprised" to get, months later, a notification saying "Root log-in detected". I checked CloudTrail looking for the root user: nothing... Hmm. I started looking into the CloudTrail logs content for the detailed raw events and found this:

"eventVersion": "1.02", "userIdentity": { "type": "Root", "principalId": "577343344455", "arn": "arn:aws:iam::577343344455:root", "accountId": "5577343344455", "userName": "my_company", "invokedBy": "support.amazonaws.com" }, "eventTime": "2016-03-22T19:22:23Z", "eventSource": "iam.amazonaws.com", "eventName": "GetAccountSummary", "awsRegion": "us-east-1", "sourceIPAddress": "support.amazonaws.com", "userAgent": "support.amazonaws.com", "requestParameters": null, "responseElements": null, "requestID": "675d-fxx3-1x5-9xxd-4768xxx17", "eventID": "b9xxxxfcaf-3xx7-4xxd-a220-exxxx8", "eventType": "AwsApiCall" "recipientAccountId": "577343344455"

Dear AWS support - you got me :-))

Sunday, March 13, 2016

AWS s3 bucket encryption audit

Storing sensitive information in AWS S3? It's a must to encrypt your data at rest.
How?

  • do it yourself (client-side encryption) and transfer the data to S3 already encrypted
  • ask AWS to do it for you (server-side encryption). In this case you have two options: S3-managed encryption keys or KMS-managed encryption keys (see the upload sketch after this list)
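For reference, a sketch of requesting server-side encryption at upload time with boto3 (the bucket and key names are placeholders):

import boto3

s3 = boto3.client('s3')

# request S3-managed server-side encryption (SSE-S3) at upload time
s3.put_object(
    Bucket='com-company-prod-data-backup',
    Key='backup/db.dump',
    Body=open('db.dump', 'rb'),
    ServerSideEncryption='AES256'   # or 'aws:kms' for KMS-managed keys
)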

If you create a new bucket for sensitive data, NEVER create it without an AWS bucket policy enforcing encryption: encryption is an object-level attribute in S3, and the user specifies (technically, requests) encryption during the upload process. The policy will block all uploads if encryption is not requested. Simple and easy... except:

You have an existing S3 bucket with data uploaded before you enabled this policy, you have mixed (encrypted and non-encrypted) objects, or you are just doing a security audit. In this case you need to scan the bucket to find the unencrypted objects. How? Quite easily, using the few Python lines below:


import boto3
import sys

boto3.setup_default_session(profile_name='prod')
s3 = boto3.resource('s3')
if len(sys.argv) < 2:
    print "Missing bucket name"
    sys.exit(1)
bucket = s3.Bucket(sys.argv[1])
# check the server-side encryption attribute of every object in the bucket
for obj in bucket.objects.all():
    key = s3.Object(bucket.name, obj.key)
    if key.server_side_encryption is None:
        print "Not encrypted object found:", obj.key

Nice, yep. But it will take almost forever to scan a bucket that contains thousands or tens of thousands of objects. In that case it would be nice to have some counters, a progress bar, an ETA, a summary, etc. So, voilà:

https://github.com/IhorKravchuk/it-security/blob/master/s3_enc_check.py


A small program providing all the features mentioned. Feel free to use it or to request reasonable changes/modifications.

Friday, February 12, 2016

Videowall for SOC. v2

Sure thing, you need one for security event visibility. It could be an LCD, a plasma or just a projector.

Usually you have more than five different security management programs (SIEM, IDS management, system logs, cloud monitoring, etc.), so you need a method to show all of them on a display. You can't tile one display with all these windows: there is not enough resolution for such a huge amount of information.
Recently I rewrote a quite useful script from my previous post to do just one simple thing: activate the Chrome browser and switch its tabs. New days, guys: all our security dashboards are now in the browser.

Set WshShell = WScript.CreateObject("WScript.Shell")
ex = True
WshShell.AppActivate("Google Chrome")
Do
  WshShell.AppActivate("Google Chrome")
  WshShell.SendKeys "^{TAB}"   ' Ctrl+Tab: switch to the next browser tab
  WScript.Sleep 10000          ' show each tab for 10 seconds
  ' opening Notepad is the kill switch for the script
  If WshShell.AppActivate("Untitled - Notepad") Then ex = False
Loop While ex = True

It gives you the ability to see and read all the security information on the video wall and to adjust the visibility interval between tabs.

PS. You must run notepad.exe to kill the script.

Wednesday, February 3, 2016

AWS CloudFormation template security group viewer

Almost any AWS CloudFormation template is more than long enough. That's OK when you are dealing with relatively "static" resources, but it becomes a big problem for something way more dynamic like a security group.
This kind of resource you need to modify and review a lot, especially if you are a cloud security professional. Reading AWS CloudFormation template JSON manually makes your life miserable, and you can easily miss a bunch of security problems and holes.
My small aws_secgroup_viewer Python program helps you to quickly review and analyse all the security groups in your template.

https://github.com/IhorKravchuk/it-security/blob/master/aws_secgroup_viewer.py

It supports both security group notations used by CloudFormation: firewall rules inside the security group, or as separate resources linked to the group (both sketched below).
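A minimal sketch of the two notations (resource names, ports and CIDRs are illustrative only):

"WebSecGroup": {
  "Type": "AWS::EC2::SecurityGroup",
  "Properties": {
    "GroupDescription": "inline notation: rules live inside the group",
    "SecurityGroupIngress": [
      {"IpProtocol": "tcp", "FromPort": "443", "ToPort": "443", "CidrIp": "0.0.0.0/0"}
    ]
  }
},
"SshIngress": {
  "Type": "AWS::EC2::SecurityGroupIngress",
  "Properties": {
    "GroupId": {"Ref": "WebSecGroup"},
    "IpProtocol": "tcp", "FromPort": "22", "ToPort": "22", "CidrIp": "10.0.0.0/8"
  }
}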

Saturday, January 23, 2016

Remote access to the car or practical aspects of the ELM 327 security

What would you say if I told you that many car owners grant open remote wireless access to their cars? Moreover, many of them are accessible on a 24/7 basis. You would probably tell me that it's impossible, or you might think I discovered a new security bug in the new cars with built-in WiFi or Bluetooth... Nope! I'm talking about your 5-7-10 year old cars! How come?
Many of you probably know about the OBD-II connector installed in your car.


It is used for diagnostics, but at the same time for clearing car error codes or even for car firmware upgrades. A nice and quite useful interface that gives you access to the CAN bus and is widely used by car owners to check their cars.
Currently, the cheapest ($10, thanks to our Chinese friends) and most widely used adapters are based on the ELM 327 chip plus, guess what, a Bluetooth or even WiFi interface.


Sure thing, they definitely need a wireless interface so you can do car diagnostics using your smartphone or tablet :-)
To collect more data and to simplify their lives (finding the port and connecting the device while sitting in the driver's seat is definitely a gymnastic trick :-) ), many car owners just leave the device always connected!
Not only connected, but in most cases (depending on the adapter version) always powered ON.

So, we have a wireless adapter always connected to your car, using as its security measure..... a default, unchangeable PIN (1234 for Bluetooth and 12345678 for WiFi). What a gift!

What can you do knowing all of the above? Scan for Bluetooth or WiFi devices broadcasting the OBD-II name and...

Have a Good Hack Luck and stay secure!

PS.

Useful links related to CAN bus security: