Wednesday, September 26, 2018

nmap.me

Happy to present new self-scan service - nmap.me:

What does it do? A TCP scan of your external IP.
What does it scan for: the 100 most used TCP ports. Actually a bit more than 100 - I'm slowly adding more ports.
How to use: simply curl nmap.me from your console/terminal or open it in a browser (see the example below).
How fast: the whole scan takes about a second. Results for each requester IP are cached for 1 hour to reduce load and prevent abuse.
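
For example, straight from a terminal (the gateway host below is just a placeholder):

# scan the public IP you are connecting from
curl nmap.me

# or check what is exposed behind a gateway/jump host you are logged into
ssh user@gateway 'curl -s nmap.me'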

Why? I needed a quick way to check open ports on a server/gateway/fw/router while working from the console.

New features? Coming...
Feature request, bug, service down? Let me know!

Tuesday, January 30, 2018

AWS Route53 DNS records backup/change using aws cli

Challenge:
You need to change a lot of DNS records inside an AWS Route53 hosted zone. In prod...
Let's skip the obvious question of why these DNS records are not managed as Infrastructure-as-Code...
Naturally, you need to back up all these records prior to the change, for rollback purposes.

Solution: 
1. create a list of the DNS names to change
cat multisitest.it-security.ca.list 
test1.it-security.ca.
test2.it-security.ca.
test3.it-security.ca.

2. get the zone ID from the aws cli:
aws route53 list-hosted-zones
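
If the account has many zones, a --query filter narrows the output down to the ID you need (zone name taken from this example, adjust to yours):

aws route53 list-hosted-zones --query "HostedZones[?Name=='it-security.ca.'].Id" --profile it-sec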

3. Normally, aws route53 list-resource-record-sets --hosted-zone-id Z1YS
will give you JSON, but unfortunately it's not directly usable for a quick restore, because its format differs from the change-batch JSON file that change-resource-record-sets expects for changing/restoring records.

4. With a quick and quite dirty bash loop we can get better-formatted JSON:
while read site; do
  echo '{ "Action": "UPSERT", "ResourceRecordSet":'
  aws route53 list-resource-record-sets --hosted-zone-id Z1YS \
    --query "ResourceRecordSets[?Name == '$site']" --profile it-sec | jq .[]
  echo "},"
done < multisitest.it-security.ca.list > multisitest.it-security.ca.back.json

This file has almost everything needed to build a change-batch file for the aws cli: https://docs.aws.amazon.com/cli/latest/reference/route53/change-resource-record-sets.html
Almost... We need to add
{
  "Comment": "Point some Test TLS1.2 environments to Incapsula",
  "Changes": [
at the beginning of the change set, then remove the trailing "," after the last element and add
  ]
}
at the end.
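
Put together, the resulting change-batch file looks roughly like this (the record type, TTL and value below are placeholders, not taken from the real zone):

{
  "Comment": "Point some Test TLS1.2 environments to Incapsula",
  "Changes": [
    { "Action": "UPSERT", "ResourceRecordSet":
      {
        "Name": "test1.it-security.ca.",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [ { "Value": "target.example.net" } ]
      }
    }
  ]
}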

5. Now you have your Route53 DNS records backed up and ready to restore.
The next step is to create a copy of your backup file and modify it to reflect the changes you need to make.

6. Final step: apply your changes:
aws route53 change-resource-record-sets --hosted-zone-id Z1YS  --change-batch file://multisitest.it-security.ca.json --profile it-sec

7. And, in case of disaster, use the same command to roll back quickly, specifying the backup file:

aws route53 change-resource-record-sets --hosted-zone-id Z1YS  --change-batch file://multisitest.it-security.ca.back.json --profile it-sec

Saturday, January 20, 2018

Secure your AWS account using Terraform and CloudFormation

This is a heavily updated version of the blog post: http://blog.it-security.ca/2016/11/secure-your-aws-account-using.html

As I mentioned before:
The very first thing you need to do while building your AWS infrastructure is to enable and configure all the account-level AWS security features such as CloudTrail, AWS Config, CloudWatch, IAM, etc.

Time flies when you're having fun, and even faster in the infosec world. My templates have become outdated, so I'm presenting an updated version of the AWS security automation with the following new features:

  1. integrated with Terraform (use the Terraform templates in the tf folder)
  2. creates the prerequisites for Splunk integration (user, key, SNS, and SQS)
  3. configures cross-account access (for multi-account organizations, adding ITOrganizationAccountAccessRole with MFA enforced)
  4. implements Section 3 (Monitoring) of the CIS Amazon Web Services Foundations Benchmark
  5. configures CloudTrail according to the new best practices (KMS encryption, log file validation, etc.)
  6. configures a basic set of AWS Config rules to monitor best practices
First, my security framework now consists of two main parts: cf (CloudFormation) and tf (Terraform), with the Terraform template acting as the bootstrapper of the whole deployment.

You can use Terraform, you can use CloudFormation, but why both?
Terraform evolves very quickly, has cross-cloud support and implements some features missing in CloudFormation (like account-level password policy configuration); CloudFormation is native to AWS, well supported and, most importantly, AWS provides a lot of best practices and solutions in the form of CloudFormation templates.

Using both (tf and cf) gives me (and you) the ability to reuse the solutions suggested and provided by AWS without rewriting the code, while keeping the flexibility and power of Terraform and a single interface for the whole cloud automation.
No more bucket pre-creation or a specific sequence of CloudFormation deployments - just terraform apply. It will take care of all the CloudFormation prerequisites, version control and template updates.
But, if you wish, in the current state you can use only my CloudFormation templates - cf still does all the heavy lifting.

The main trick of the Terraform - CloudFormation integration was to let Terraform know when a CloudFormation template has been updated, to ensure that Terraform triggers a cf stack update.
I achieved this using an S3 bucket with versioning enabled and by always updating security.global.yaml (just bumping the template version).

This code takes care of Terraform and CloudFormation integration:
# creating the Security CloudFormation stack

resource "aws_cloudformation_stack" "Security" {
  name = "Security"
  # wait until all nested templates are uploaded to S3
  depends_on = ["aws_s3_bucket_object.iam_global", "aws_s3_bucket_object.cloudtrailalarms_global", "aws_s3_bucket_object.awsconfig_global", "aws_s3_bucket_object.cloudtrail_global", "aws_s3_bucket_object.security_global"]
  parameters {
    AccountNickname = "${var.enviroment_name}",
    CompanyName = "${var.company_name}",
    MasterAccount = "${var.master_account}"
  }
  # pinning the exact S3 object version makes Terraform update the stack whenever the template changes
  template_url = "https://s3.amazonaws.com/${aws_s3_bucket.CFbucket.bucket}/${var.security_global}?versionId=${aws_s3_bucket_object.security_global.version_id}"
  capabilities = [ "CAPABILITY_NAMED_IAM" ]
  tags { "owner" = "infosec"}
}

And finally, the deployment steps are:

  1. Get code from my git repo:  https://github.com/IhorKravchuk/it-security
  2. Switch to the tf folder and update terraform.tfvars, specifying: your AWS profile name (configured for the aws cli using aws configure --profile profile_name); a name for the environment (prod, test, dev ...); the company (or division) name; the region and the AWS master account ID (see the sketch after this list).
  3. terraform init to get the AWS provider downloaded by Terraform
  4. terraform plan
  5. terraform apply
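
A terraform.tfvars for this setup would look roughly like the sketch below. The enviroment_name, company_name and master_account variables match what the stack resource above references; the profile and region variable names are my guesses - check variables.tf in the repo for the exact names:

# terraform.tfvars - all values are placeholders
aws_profile     = "it-sec"         # assumed variable name; profile created with aws configure --profile it-sec
region          = "us-east-1"      # assumed variable name
enviroment_name = "prod"
company_name    = "it-security"
master_account  = "123456789012"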


Monday, July 24, 2017

S3 buckets audit: check bucket's public access level, etc. - updated with authorised audit support

In a previous post, S3 buckets audit: check bucket existence, public access level, etc - without having access to target AWS account, I described and released a tool to audit S3 buckets even without access to the AWS account these buckets belong to.

But what if I do have access to the bucket's account, or I would like to audit all the buckets in my AWS account?

These features have been addressed in the new release of the S3 audit tool:

https://github.com/IhorKravchuk/it-security/tree/master/scripts.aws

$python aws_test_bucket.py --profile prod-read --bucket bucket2test

$python aws_test_bucket.py --profile prod-read --file aws

$python aws_test_bucket.py --profile prod-read --file buckets.list

  -P AWS_PROFILE, --profile=AWS_PROFILE
                        Please specify AWS CLI profile
  -B BUCKET, --bucket=BUCKET
                        Please provide bucket name
  -F FILE, --file=FILE  Optional: file with buckets list to check or aws to check all buckets in your account


Note:
--profile=AWS_PROFILE - your AWS access profile (from the aws cli). This profile might or might not have access to the audited bucket (we need it just to become an Authenticated User from AWS's point of view).

If AWS_PROFILE allows authorised access to the bucket being audited, the tool will fetch the bucket's ACLs, policies and S3 static website settings and perform an authorised audit.

If AWS_PROFILE does not allow authorised access, the tool will work in pentester mode.

You can specify:
  •  one bucket to check, using the --bucket option
  •  a file with a list of buckets (one bucket name per line), using the --file option (see the one-liner after this list)
  •  all buckets in your AWS account (accessible using AWS_PROFILE), using the --file=aws option
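
If you just need a starting list of every bucket visible to your profile, a one-liner like this will build it (profile name taken from the examples above):

# dump all bucket names the profile can list, one per line
aws s3api list-buckets --query 'Buckets[].Name' --output text --profile prod-read | tr '\t' '\n' > buckets.list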

Based on your AWS profile limitations the tool will provide you with:
  • indirect scan results (AWS_PROFILE has no API access to the bucket being audited)
  • validated scan results based on your S3 bucket settings like the ACL, bucket policy and S3 website config (AWS_PROFILE has API access to the bucket being audited)
Enjoy and stay secure.

PS. Currently the tool does not support bucket checks for the Frankfurt region (AWS Signature Version 4). Working on it.

Wednesday, July 19, 2017

S3 buckets audit: check bucket existence, public access level, etc - without having access to target AWS account

Currently, publicly accessible buckets have become a big deal and the root cause of many recent data leaks.
All of these events even drove Amazon AWS to proactively send out emails to customers who have such S3 configurations. Let's become a bit more proactive as well and audit our S3 buckets.

First, let's take a look at why a bucket might become publicly available:
- Configured for public access intentionally (S3 static web hosting or just a public resource) or by mistake
- Configured for access by Authenticated Users (an option misinterpreted by many as "users from your account", which is wrong - it's any authenticated AWS user from any account)
     
Auditing an AWS account you have full access to is quite easy - just list the buckets and check their ACLs, users and bucket policies via the aws cli or the web GUI.
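
For a single bucket you own, that manual check boils down to a few aws cli calls (bucket and profile names below are placeholders):

# ACL grants - look for AllUsers / AuthenticatedUsers grantees
aws s3api get-bucket-acl --bucket my-bucket --profile prod-read

# bucket policy - look for statements with "Principal": "*"
aws s3api get-bucket-policy --bucket my-bucket --profile prod-read

# is S3 static website hosting enabled
aws s3api get-bucket-website --bucket my-bucket --profile prod-read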

What about the cases when you:
- have many accounts and buckets (it will take forever to audit them manually)
- do not have enough permissions in the target AWS account to check bucket access
- do not have any permissions at all in that account (pentester mode)

To address all of the above, I've created a small tool that does all the dirty work for you (updated to v2):
https://github.com/IhorKravchuk/it-security/tree/master/scripts.aws


$python aws_test_bucket.py --profile prod-read --bucket test.bucket

  -P AWS_PROFILE, --profile=AWS_PROFILE
                        Please specify AWS CLI profile
  -B BUCKET, --bucket=BUCKET
                        Please provide bucket name
  -F FILE, --file=FILE  Optional: file with buckets list to check

Note: --profile=AWS_PROFILE - any of your AWS access profiles (from the aws cli). This profile must NOT have access to the audited bucket (we need it just to become an Authenticated User from AWS's point of view).

You can specify one bucket to check using the --bucket option, or a file with a list of buckets (one bucket name per line) using the --file option.


Depending on the bucket's access status, the tool will give you one of the following responses:

Bucket: test.bucket - The specified bucket does not exist
Bucket: test.bucket - Bucket exists, but Access Denied
Bucket: test.bucket - Found index.html, most probably S3 static web hosting is enabled
Bucket: test.bucket - Bucket exists, publicly available and no S3 static web hosting, most probably misconfigured!

Enjoy!

PS. Moreover, you can create a list of buckets (even using some DNS/name alterations and permutations) in a file and loop through it, checking each one.

Stay secure.


Thursday, March 30, 2017

Trailing dot in DNS name, incorrect S3 website endpoint work and possible back-end information leak

I discovered that the AWS S3 website endpoint incorrectly interprets a trailing dot (which is actually an essential part of an FQDN according to RFC 1034) in the website FQDN.
Instead of referring to the correct bucket endpoint, it returns a "No such bucket" error, revealing information about the web site's back-end.
I initially considered this not so much a security issue as a misconfiguration, or even expected undocumented behaviour, but I found one case that could lead to others:

If a web site uses a 3rd-party DDoS and WAF protection service like CloudFlare, this technique (adding a trailing dot) could reveal and expose the web site's origin.

An example of the possible information disclosure is below:

DNS name resolution pointing to CloudFlare:

Trailing dot error pointing to the S3 bucket back-end, with the rest of the information pointing to CloudFlare:
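
To reproduce the check yourself, something along these lines works (example.com stands in for the protected site; output not shown here):

# normal lookup - everything points to CloudFlare
dig +short www.example.com

# the same site requested with a trailing dot in the host name: if the origin
# is an S3 website endpoint, the response body is an S3 "NoSuchBucket" style
# error instead of the protected site
curl -s "http://www.example.com./"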

PS. One possible use of the S3 back-end information leak could be S3 bucket name squatting to block possible sub-domain usage, due to the uniqueness of S3 bucket names.

Wednesday, February 1, 2017

MediaWiki as a static website and content sharing

Using a wiki for knowledge management, in a team or individually, is easy and often an obvious choice.
Challenges appear when you need to share information stored in the wiki,
namely: hardening the MediaWiki installation for public access, and sharing only part of the wiki content.

If your main goal is just to publish content, you can extract the wiki pages as static HTML pages using a relatively simple wget one-liner. After extracting, you can publish your wiki using AWS S3 static web hosting.

To share only part of the information available in the wiki, you can leverage Categories and restrict user access to specified categories using a special extension. Afterwards, you can use this restricted user access to grab the wiki content.
Another simple way is to use the Category special wiki page as a starting point for the crawler, grabbing only the pages related to a specific category, let's say the Public category.
The code is way shorter than all the description above:

# get the wiki content, starting from the Public category page
wget --recursive --level=1 --page-requisites --html-extension --no-directories --convert-links --no-parent -R "*Special*" -R "*action=*" -R "*printable=*" -R "*oldid=*" -R "*title=Talk:*" -R "*limit=*" "http://mywikiprivate:80/wiki/index.php/Category:Public"
# replace links to non-public (sensitive) pages with a link to the stub page
sed -i -E 's/http:\/\/mywikiprivate[^"]*/http:\/\/wiki.your_website.ca\/404.html/g' *.html
# remove the sensitive file
rm Category\:Public.1.html
# rename the Public category page to serve as the list of published pages
mv Category:Public.html Public.html
# sync the content to AWS S3
aws s3 sync ./ s3://your_bucket/
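
On the hosting side, the target bucket needs static website hosting enabled (plus a bucket policy allowing public read, not shown here). A minimal sketch with the aws cli, using Public.html as the index page per the rename above and a placeholder bucket name:

aws s3 website s3://your_bucket/ --index-document Public.html --error-document 404.html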


The result of running such a script, along with some public notes from my wiki, can be found here:
http://wiki.it-security.ca


Disclaimer: the current wiki publication contains only a small part of the information available and will be updated on an almost daily basis to add more content cleared for publishing. The main purpose of this wiki is to keep technical notes and references in a structured way. Some of them are obvious, outdated or incomplete.

The goal of establishing a public publishing process is to keep the wiki information up-to-date and to have the ability to publish small useful notes which do not fit the blog format and style.