Saturday, May 4, 2019

Awesome list of Native AWS logging capabilities

While looking into the centralized logging capabilities of AWS and going through a bunch of documentation, I noticed the lack of a single "big table" where I could find all native AWS logging capabilities per service, along with up-to-date service coverage for AWS CloudWatch Logs.
Building a big table is not really version-control friendly, so please welcome:

Awesome list of Native AWS logging capabilities:

While I was building this list, some services had already changed their capabilities, causing some information in the list to be out of sync.
I'll try my best to regularly review existing services and keep adding new ones, but if you find a mistake or would like to contribute, feel free to contact me or create a PR.

Friday, April 12, 2019

Using Terraform to create project and users required in GCP and GSuite

This article is more of a quick HOWTO/QuickNote page on starting to use Terraform with GCP: granting the required permissions, connecting Terraform to GSuite, and creating users and projects using Terraform.

Connect Terraform to GCP:

1. Download and install Google Cloud SDK:

2. Initialize the SDK: gcloud init
 This will launch a browser-based authorization flow.

3. Use the browser to create a project and a service account and download the credentials. Note: you need to have a GCP billing account and payment method configured first. You can use the cli as well:

gcloud projects list
gcloud beta billing accounts list
gcloud beta billing projects link infosec-gcp --billing-account 01122-74525-1222
gcloud config list
gcloud iam service-accounts create infosec-terraform --display-name "Infosec Terraform admin account"
gcloud iam service-accounts keys create ~/.config/gcloud/infosec-terraform-admin.json --iam-account infosec-terraform@infosec-gcp.iam.gserviceaccount.com

4. Give appropriate permissions to the Terraform service account:
Get your organization ID:
gcloud organizations list

Enable the IAM API (yes, you need to enable each API set you are planning to use with GCP; they are disabled by default). You can check which services are available using gcloud services list --available
gcloud services enable iam.googleapis.com

Check existing IAM policies in your org:
gcloud organizations get-iam-policy ORGANIZATION_ID

Grant all required permissions (example):
gcloud organizations add-iam-policy-binding ORGANIZATION_ID --member serviceAccount:infosec-terraform@infosec-gcp.iam.gserviceaccount.com --role roles/resourcemanager.projectCreator

gcloud organizations add-iam-policy-binding ORGANIZATION_ID --member serviceAccount:infosec-terraform@infosec-gcp.iam.gserviceaccount.com --role roles/billing.user

gcloud organizations add-iam-policy-binding ORGANIZATION_ID --member serviceAccount:infosec-terraform@infosec-gcp.iam.gserviceaccount.com --role roles/owner

5. Start using Terraform from my example to create a project and grant access to it.

The only missing part is actually users.
Connecting Terraform to GSuite:

Why do we need GSuite at all? GCP does not provide any built-in identities and relies on user identities from Gmail, GSuite, or Google Cloud Identity (plus service accounts).

As an AWS user, I really love having user/group management and infra/project creation in the same automation tool. Unfortunately, user/GSuite functionality is not provided by the GCP Terraform provider. Luckily, there is a pretty nice open-sourced Terraform provider for GSuite written by DeviaVir:
At the moment I tested it, some group membership functionality still lacked idempotency, but using the approach from my example everything worked like a charm.

So the code finally:

Way more details and examples are in the articles below:

Tuesday, February 26, 2019

Revamping AWS APIs' security review and SCP policy generation process.

The AWS cloud provides an endless number of capabilities and services. Unleashing all this power without a proper security review process is extremely risky.
Each service, and quite often even each API call, should be reviewed and evaluated according to organizational security standards and compliance requirements. Yes, but... currently AWS has about 170 services and an endless number of APIs. AWS constantly evolves, introducing new services and APIs and modifying existing ones.
One of the biggest challenges for me was finding a way to automatically fetch an up-to-date, annotated list of the services and APIs provided by AWS. Luckily, Matt Weagle suggested using the AWS Go SDK as a source of truth. This SDK provides well-documented lists of the AWS APIs (docs-2.json).

I crafted a small Python program that builds/updates the following YAML files (one per service), using the SDK's JSON files as a source:

  description: Assess, monitor, manage, and remediate security issues across your
    AWS infrastructure, applications, and data.
  links: []
  security_risk: Cloud IDS
  Allowed_on:
  - Prod_en
  Denied_on:
  - none
  description: Accepts the invitation to be monitored by a master GuardDuty account.
  links: []
  security_risk: should be allowed only from trusted accounts
  Allowed_on:
  - none
  Denied_on:
  - none
  description: Archives Amazon GuardDuty findings specified by the list of finding
    IDs.
  links: []
  security_risk: Not defined
  Allowed_on:
  - none
  Denied_on:
  - none

The structure of this file is quite self-explanatory and simplifies the security review (still a manual process) of the AWS APIs. During the security review, you specify which services/APIs are enabled or disabled, and in which environments, by adding the environment name to the Allowed_on and Denied_on lists. The files are stored in a git repo.
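Since the tools themselves can't be open-sourced, here is a minimal, hypothetical sketch of the build step, assuming the docs-2.json layout used by the AWS Go SDK (an "operations" map of API name to an HTML doc string). The function and field defaults shown here are illustrative, not the actual tool:

```python
import json
import re

def strip_tags(html):
    """Crudely remove the <p>...</p> markup used in docs-2.json doc strings."""
    return re.sub(r"<[^>]+>", "", html or "").strip()

def build_entries(docs):
    """Build one review entry per API operation found in a docs-2.json dict.

    security_risk, Allowed_on, and Denied_on start with placeholder values
    and are filled in later during the manual security review.
    """
    entries = {}
    for name, doc in sorted(docs.get("operations", {}).items()):
        entries[name] = {
            "description": strip_tags(doc),
            "links": [],
            "security_risk": "Not defined",
            "Allowed_on": ["none"],
            "Denied_on": ["none"],
        }
    return entries

if __name__ == "__main__":
    sample = {"operations": {"ArchiveFindings": "<p>Archives findings.</p>"}}
    print(json.dumps(build_entries(sample), indent=2))
```

An update run would merge freshly generated entries with the reviewed YAML files, so manual review results survive re-generation.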

After the review, using these files as a source of truth, I (actually, another Python program) generate an SCP (Service Control Policy) for AWS Organizations accounts, IAM policies, and permission boundaries (depending on the case).
Due to the very strict SCP size restrictions, generating this policy with automation allows you to:

  • aggregate APIs using wildcards to reduce the SCP size
  • validate API wildcards, preventing unintentional service exposure/blockage
  • cross-check APIs to avoid whitelisting/blacklisting conflicts
  • re-generate/validate the SCP when AWS introduces new API calls/services
Everything mentioned above applies not only to SCPs, but also to the IAM policy/permission boundary generation process.
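The wildcard-validation idea can be sketched in a few lines: a candidate pattern may be used in the policy only if every real API name it matches was explicitly allowed during review. This is an illustrative simplification (names and helper are hypothetical), not the actual generator:

```python
from fnmatch import fnmatch

def wildcard_is_safe(pattern, allowed, all_apis):
    """Return True if the wildcard matches only explicitly allowed API names."""
    matched = {api for api in all_apis if fnmatch(api, pattern)}
    return matched <= set(allowed)

all_apis = ["guardduty:GetDetector", "guardduty:GetFindings",
            "guardduty:GetMasterAccount"]

# Safe: both calls matched by the wildcard were allowed during review.
print(wildcard_is_safe("guardduty:GetD*",
                       ["guardduty:GetDetector"], all_apis))        # True
# Unsafe: "guardduty:Get*" would also expose GetMasterAccount.
print(wildcard_is_safe("guardduty:Get*",
                       ["guardduty:GetDetector", "guardduty:GetFindings"],
                       all_apis))                                   # False
```

Running the same check against the full, freshly fetched API list catches the case where AWS adds a new call that an old wildcard suddenly starts matching.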

This automated approach opens another possibility - automated compliance validation for AWS: using the same YAML files as a source of truth, perform API calls against AWS to ensure that these calls fail. This step could be done after deployment (to validate the deployment) or on a regular basis (audit).
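The audit step can be sketched as follows: from the same review entries, derive which API calls are expected to fail in a given environment, then probe them (the actual probing, e.g. via an SDK, is left out here). This is a hypothetical illustration of the idea; entry names and the "deny unless explicitly allowed" default are assumptions:

```python
def expected_denied(entries, environment):
    """List API calls that should fail in the given environment.

    A call is expected to be denied if the environment is on its Denied_on
    list, or if it is not explicitly on its Allowed_on list.
    """
    denied = []
    for api, entry in entries.items():
        if environment in entry.get("Denied_on", []):
            denied.append(api)
        elif environment not in entry.get("Allowed_on", []):
            denied.append(api)
    return sorted(denied)

entries = {
    "guardduty:AcceptInvitation": {"Allowed_on": ["none"], "Denied_on": ["Prod"]},
    "guardduty:GetFindings": {"Allowed_on": ["Prod"], "Denied_on": ["none"]},
}
print(expected_denied(entries, "Prod"))  # ['guardduty:AcceptInvitation']
```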

PS. Unfortunately, the code of these tools can't be open-sourced as of now.

Tuesday, November 27, 2018

AWS Landing Zones current docs

It took me quite some time to find the latest official AWS Landing Zones docs.
To save you time, here they are (November 2018):

Deployment Guide:

User Guide:

Developer Guide:

Please be aware that before deploying the AWS Landing Zones solution in your account, you need to contact AWS Support to get the default AWS account limits extended.

Wednesday, September 26, 2018

Happy to present a new self-scan service -

What does it do? A TCP scan of your external IP.
What does it scan for? The 100 most used TCP ports. Actually, a bit more than 100 - I'm slowly adding more ports.
How to use it: simply curl it from your console/terminal or open it in a browser.
How fast is it? The whole scan takes about a second. Results for each requester IP are cached for 1 hour to reduce load and prevent abuse.

Why? I needed a quick way to check open ports on a server/gateway/fw/router while being inside the console.

New features? Coming...
Feature requests, bugs, service down? Let me know!

Tuesday, January 30, 2018

AWS Route53 DNS records backup/change using aws cli

Imagine: you need to change a lot of DNS records inside an AWS Route53 hosted zone. In prod...
Let's skip the obvious question of why these DNS records are not managed as Infra-as-Code...
Sure thing, you need to back up all these records prior to the change, for rollback purposes.

1. Create a list of the DNS names to change.

2. Get the zone ID from the aws cli:
aws route53 list-hosted-zones

3. Normally, aws route53 list-resource-record-sets --hosted-zone-id Z1YS
will give you JSON, but unfortunately it's not useful for a quick restore, due to the format difference from the change-resource-record-sets.json file you need in order to change/restore records.

4. With some quick and quite dirty bash we can get better-formatted JSON (here dns_names.txt stands for the list from step 1, and backup.json for the backup file):
while read site; do echo '{ "Action": "UPSERT","ResourceRecordSet":'; aws route53 list-resource-record-sets --hosted-zone-id Z1YS --query "ResourceRecordSets[?Name == '$site']" --profile it-sec | jq .[]; echo "},"; done < dns_names.txt > backup.json

This file has almost everything needed to build a change-batch file for the aws cli.
Almost... We need to add
{
  "Comment": "Point some Test TLS1.2 environments to the Incapsula",
  "Changes": [
at the beginning of the change set, then remove the trailing "," and add
  ]
}
at the end.
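If you'd rather not hand-edit the JSON, the same wrapping can be done with a few lines of Python. This is just an illustrative sketch (function and file names are hypothetical, and the sample record is made up); it assumes you already have the backed-up ResourceRecordSet objects as a list:

```python
import json

def build_change_batch(record_sets, comment):
    """Wrap ResourceRecordSet objects into the change-batch format
    expected by `aws route53 change-resource-record-sets`."""
    return {
        "Comment": comment,
        "Changes": [{"Action": "UPSERT", "ResourceRecordSet": rrs}
                    for rrs in record_sets],
    }

record_sets = [{"Name": "www.example.com.", "Type": "CNAME", "TTL": 300,
                "ResourceRecords": [{"Value": "target.incapdns.net."}]}]
print(json.dumps(build_change_batch(record_sets, "Point to Incapsula"),
                 indent=2))
```

Write the result to a file and pass it to the cli via --change-batch file://...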

5. Now you have the Route53 DNS records backed up and ready to restore.
The next step is to create a copy of your backup file and modify it to reflect the changes you need to make.

6. Final step: apply your changes (change-batch.json here stands for your modified copy):
aws route53 change-resource-record-sets --hosted-zone-id Z1YS --change-batch file://change-batch.json --profile it-sec

7. And, in case of disaster, use the same command with the backup file to roll back quickly:

aws route53 change-resource-record-sets --hosted-zone-id Z1YS --change-batch file://backup.json --profile it-sec

Saturday, January 20, 2018

Secure your AWS account using Terraform and CloudFormation

This is a heavily updated version of the blog post:

As I mentioned before:
The very first thing you need to do while building your AWS infrastructure is to enable and configure all account-level AWS security features, such as CloudTrail, CloudConfig, CloudWatch, IAM, etc.

Time flies when you're having fun, and it flies even faster in the infosec world. My templates became outdated, and now I'm presenting an updated version of the AWS security automation with the following new features:

  1. integrates with Terraform (use the terraform templates in the tf folder)
  2. creates prerequisites for Splunk integration (user, key, SNS, and SQS)
  3. configures cross-account access (for multi-account organizations, adding ITOrganizationAccountAccessRole with MFA enforced)
  4. implements Section 3 (Monitoring) of the CIS Amazon Web Services Foundations benchmark
  5. configures CloudTrail according to the new best practices (KMS encryption, validation, etc.)
  6. configures a basic set of CloudConfig rules to monitor best practices
First, my security framework now consists of two main parts: cf (CloudFormation) and tf (Terraform), with a Terraform template as the bootstrapper of the whole deployment.

You can use Terraform, you can use CloudFormation, but why both?
Terraform evolves very quickly, has cross-cloud support, and implements some features missing in CloudFormation (like account-level password policy configuration, etc.); CloudFormation is native to AWS, well supported, and, most importantly, AWS provides a lot of best practices and solutions in the form of CloudFormation templates.

Using both (tf and cf) gives me (and you) the ability to reuse solutions suggested and provided by AWS without rewriting the code, while keeping the flexibility and power of Terraform and a single interface for the whole cloud automation.
No more bucket pre-creation or a specific sequence of CloudFormation deployments - just terraform apply. It will take care of all CloudFormation prerequisites, version control, and template updates.
But, if you wish, at the current state you can use only my CloudFormation templates - cf still does all the heavy lifting.

The main trick of the Terraform-CloudFormation integration was to tell Terraform when a CloudFormation template is updated, to ensure that Terraform triggers a cf stack update.
I achieved this using an S3 bucket with versioning enabled and always updating the template object (just bumping the template version).

This code takes care of Terraform and CloudFormation integration:
# creating the Security CloudFormation stack

resource "aws_cloudformation_stack" "Security" {
  name       = "Security"
  depends_on = ["aws_s3_bucket_object.iam_global", "aws_s3_bucket_object.cloudtrailalarms_global", "aws_s3_bucket_object.awsconfig_global", "aws_s3_bucket_object.cloudtrail_global", "aws_s3_bucket_object.security_global"]

  parameters {
    AccountNickname = "${var.enviroment_name}"
    CompanyName     = "${var.company_name}"
    MasterAccount   = "${var.master_account}"
  }

  template_url = "https://${aws_s3_bucket.CFbucket.bucket_domain_name}/${var.security_global}?versionId=${aws_s3_bucket_object.security_global.version_id}"
  capabilities = ["CAPABILITY_NAMED_IAM"]

  tags {
    "owner" = "infosec"
  }
}

And finally, the deployment steps are:

  1. Get code from my git repo:
  2. Switch to the tf folder and update terraform.tfvars, specifying: your AWS profile name (configured for the aws cli using aws configure --profile profile_name); a name for the environment (prod, test, dev, ...); the company (or division) name; the region; and the AWS master account ID.
  3. terraform init to get aws provider downloaded by terraform
  4. terraform plan
  5. terraform apply