Tuesday, January 30, 2018

AWS Route53 DNS records backup/change using aws cli

Challenge:
You need to change a lot of DNS records inside an AWS Route53 hosted zone. In prod...
Let's skip the obvious question of why these DNS records are not managed as Infrastructure-as-Code..
Naturally, you need to back up all these records prior to the change, for rollback purposes.

Solution: 
1. Create a list of the DNS names to change:
cat multisitest.it-security.ca.list 
test1.it-security.ca.
test2.it-security.ca.
test3.it-security.ca.

2. Get the hosted zone ID from the AWS CLI:
aws route53 list-hosted-zones
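
If you already know the zone name, you can grab the ID directly (a small shortcut, assuming the zone is it-security.ca. and the same it-sec profile used below):
aws route53 list-hosted-zones --query "HostedZones[?Name == 'it-security.ca.'].Id" --output text --profile it-sec
The value comes back as /hostedzone/&lt;ID&gt;; only the &lt;ID&gt; part is needed for the commands below.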

3. Normally, aws route53 list-resource-record-sets --hosted-zone-id Z1YS
will give you JSON, but unfortunately it's not directly usable for a quick restore: its format differs from the change-batch JSON file that change-resource-record-sets expects when you change or restore records.

4. With some quick and quite dirty bash we can get better-formatted JSON:
while read site; do echo '{ "Action": "UPSERT","ResourceRecordSet":';  aws route53 list-resource-record-sets --hosted-zone-id Z1YS --query "ResourceRecordSets[?Name == '$site']" --profile it-sec | jq .[] ; echo "},"; done < multisitest.it-security.ca.list > multisitest.it-security.ca.back.json
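
Each pass through the loop emits one entry like this (the record values below are purely illustrative, not real output):
{ "Action": "UPSERT","ResourceRecordSet":
{
  "Name": "test1.it-security.ca.",
  "Type": "CNAME",
  "TTL": 300,
  "ResourceRecords": [
    {
      "Value": "origin.example.com"
    }
  ]
}
},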

This file has almost everything needed to build a change-batch file for the AWS CLI: https://docs.aws.amazon.com/cli/latest/reference/route53/change-resource-record-sets.html
Almost.. We need to add
{
  "Comment": "Point some Test TLS1.2 environments to Incapsula",
  "Changes": [
at the beginning of the change set, and
remove the trailing "," and add
  ]
}
at the end.
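
If you prefer to skip the manual editing, a single jq filter over the full zone dump can produce a ready-to-use change batch in one shot - a sketch using the same zone ID and profile (adjust the record names and the comment to your case, and note it assumes the zone fits in a single list-resource-record-sets response):
aws route53 list-resource-record-sets --hosted-zone-id Z1YS --profile it-sec | jq '{Comment: "Backup of selected records", Changes: [ .ResourceRecordSets[] | select(.Name == "test1.it-security.ca." or .Name == "test2.it-security.ca." or .Name == "test3.it-security.ca.") | {Action: "UPSERT", ResourceRecordSet: .} ]}' > multisitest.it-security.ca.back.json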

5. Now you have your Route53 DNS records backed up and ready to restore.
The next step is to create a copy of your backup file and modify it to reflect the changes you need to make.
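
For example, using the file names from step 4 and step 6:
cp multisitest.it-security.ca.back.json multisitest.it-security.ca.json
Then edit the ResourceRecordSet values (and the Comment) in the copy; the UPSERT action can stay as it is.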

6. Final step: apply your changes:
aws route53 change-resource-record-sets --hosted-zone-id Z1YS  --change-batch file://multisitest.it-security.ca.json --profile it-sec
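
The command returns a ChangeInfo block with an Id; if you want to confirm that the change has fully propagated, you can poll it (the change ID below is a placeholder taken from that output):
aws route53 get-change --id &lt;change-id&gt; --profile it-sec
The Status switches from PENDING to INSYNC once the change is live on all Route53 name servers.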

7. And, in case of disaster, use the same command to roll back quickly, specifying the backup file:

aws route53 change-resource-record-sets --hosted-zone-id Z1YS  --change-batch file://multisitest.it-security.ca.back.json --profile it-sec





Saturday, January 20, 2018

Secure your AWS account using Terraform and CloudFormation

This is a heavily updated version of the blog post: http://blog.it-security.ca/2016/11/secure-your-aws-account-using.html

As I mentioned before:
The very first thing you need to do while building your AWS infrastructure is to enable and configure all AWS account level security features such as: CloudTrail, CloudConfig, CloudWatch, IAM, etc.

Time flies when you're having fun, and it flies even faster in the infosec world. My templates have become outdated, so I'm presenting an updated version of the AWS security automation with the following new features:

  1. integrated with Terraform (use the Terraform templates in the tf folder)
  2. creates the prerequisites for Splunk integration (user, key, SNS, and SQS)
  3. configures cross-account access (for multi-account organizations, adding the ITOrganizationAccountAccessRole with MFA enforced)
  4. implements Section 3 (Monitoring) of the CIS Amazon Web Services Foundations Benchmark
  5. configures CloudTrail according to the new best practices (KMS encryption, log file validation, etc.)
  6. configures a basic set of CloudConfig rules to monitor best practices
First, my security framework now consists of two main parts: cf (CloudFormation) and tf (Terraform), with the Terraform template acting as a bootstrapper for the whole deployment.

You can use Terraform, you can use CloudFormation, but why both?
Terraform evolves very quickly, has cross-cloud support and implements some features missing in CloudFormation (like account-level password policy configuration, etc.); CloudFormation is native to AWS, well supported, and, most importantly, AWS provides a lot of best practices and solutions in the form of CloudFormation templates.

Using both (tf and cf) gives me (and you) the ability to reuse the solutions suggested and provided by AWS without rewriting the code, while keeping the flexibility and power of Terraform and a single interface for the whole cloud automation.
No more bucket pre-creation or specific CloudFormation deployment sequence - just terraform apply. It will take care of all the CloudFormation prerequisites, version control and template updates.
But, if you wish, in the current state you can use only my CloudFormation templates - cf still does all the heavy lifting.

The main trick of the Terraform - CloudFormation integration was to tell Terraform when a CloudFormation template has been updated, to ensure that Terraform triggers a cf stack update.
I achieved this using an S3 bucket with versioning enabled and by always updating security.global.yaml (just bumping the template version).

This code takes care of the Terraform and CloudFormation integration:
# creating Security CloudFormation stack

resource "aws_cloudformation_stack" "Security" {
  name = "Security"
  depends_on = ["aws_s3_bucket_object.iam_global", "aws_s3_bucket_object.cloudtrailalarms_global", "aws_s3_bucket_object.awsconfig_global", "aws_s3_bucket_object.cloudtrail_global", "aws_s3_bucket_object.security_global"]
  parameters {
    AccountNickname = "${var.enviroment_name}",
    CompanyName = "${var.company_name}",
    MasterAccount = "${var.master_account}"
  }
  template_url = "https://s3.amazonaws.com/${aws_s3_bucket.CFbucket.bucket}/${var.security_global}?versionId=${aws_s3_bucket_object.security_global.version_id}"
  capabilities = [ "CAPABILITY_NAMED_IAM" ]
  tags { "owner" = "infosec"}
}
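
For completeness, the referenced bucket and template object look roughly like this (a sketch only - the bucket naming, local file paths and the etag trick are my assumptions here, so check the repo for the exact resources):

resource "aws_s3_bucket" "CFbucket" {
  bucket = "${var.company_name}-${var.enviroment_name}-cf-templates"
  acl    = "private"

  # versioning is what makes the ?versionId= reference above possible
  versioning {
    enabled = true
  }
}

resource "aws_s3_bucket_object" "security_global" {
  bucket = "${aws_s3_bucket.CFbucket.bucket}"
  key    = "${var.security_global}"
  source = "../cf/${var.security_global}"

  # the etag changes whenever the template file changes, forcing a new object version
  etag   = "${md5(file("../cf/${var.security_global}"))}"
}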

And finally, the deployment steps are:

  1. Get the code from my git repo: https://github.com/IhorKravchuk/it-security
  2. Switch to the tf folder and update terraform.tfvars, specifying: your AWS profile name (configured for the aws cli using aws configure --profile profile_name); the environment name (prod, test, dev ..); the company (or division) name; the region and the AWS master account ID - see the example after this list.
  3. terraform init to have Terraform download the AWS provider
  4. terraform plan
  5. terraform apply
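
A terraform.tfvars for a test environment could look something like this (the profile and region variable names are my guess - check variables.tf in the repo; the account ID is obviously a fake):

aws_profile     = "it-sec"
region          = "us-east-1"
enviroment_name = "test"
company_name    = "it-security"
master_account  = "123456789012"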