Monday, July 24, 2017

S3 buckets audit: check bucket's public access level, etc. - updated with authorised audit support

In my previous post, S3 buckets audit: check bucket existence, public access level, etc - without having access to target AWS account, I described and released a tool to audit S3 buckets even without access to the AWS account these buckets belong to.

But what if I do have access to the bucket's account, or would like to audit all buckets in my own AWS account?

These features have been addressed in the new release of the s3 audit tool:

https://github.com/IhorKravchuk/it-security/tree/master/scripts.aws

$python aws_test_bucket.py --profile prod-read --bucket bucket2test

$python aws_test_bucket.py --profile prod-read --file aws

$python aws_test_bucket.py --profile prod-read --file buckets.list

  -P AWS_PROFILE, --profile=AWS_PROFILE
                        Please specify AWS CLI profile
  -B BUCKET, --bucket=BUCKET
                        Please provide bucket name
  -F FILE, --file=FILE  Optional: file with buckets list to check or aws to check all buckets in your account


Note:
--profile=AWS_PROFILE - your AWS access profile (from the AWS CLI). This profile may or may not have access to the audited bucket (we need it just to appear as an Authenticated User from AWS's point of view).

If AWS_PROFILE allows authorised access to the bucket being audited, the tool will fetch the bucket's ACLs, policies and S3 static website settings and perform an authorised audit.

If AWS_PROFILE does not allow authorised access, the tool will work in pentester mode.
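
For the authorised path, the checks boil down to a few S3 API calls. A minimal boto3 sketch of the idea (not the tool's actual code; the profile and bucket names are just the examples from above):

import boto3
from botocore.exceptions import ClientError

session = boto3.Session(profile_name='prod-read')
s3 = session.client('s3')

def authorised_audit(bucket):
    # Each call raises ClientError if access is denied or the
    # corresponding setting (policy, website config) is absent.
    for name, call in (('ACL', s3.get_bucket_acl),
                       ('Policy', s3.get_bucket_policy),
                       ('Website', s3.get_bucket_website)):
        try:
            print('%s: %s' % (name, call(Bucket=bucket)))
        except ClientError as e:
            print('%s: %s' % (name, e.response['Error']['Code']))

authorised_audit('bucket2test')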

You can specify:
  •  one bucket to check, using the --bucket option
  •  a file with a list of buckets (one bucket name per line), using the --file option
  •  all buckets in your AWS account (accessible using AWS_PROFILE), using the --file=aws option

Based on your AWS profile's permissions, the tool will provide you with:
  • indirect scan results (AWS_PROFILE has no API access to the bucket being audited)
  • validated scan results based on your S3 bucket settings such as the ACL, bucket policy and S3 website config (AWS_PROFILE has API access to the bucket being audited)
Enjoy and stay secure.

PS. Currently the tool does not support bucket checks for the Frankfurt region (AWS Signature Version 4). I'm working on it.
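
If you want to experiment yourself, boto3 can be forced to sign requests with Signature Version 4 through a client config; this is a sketch of a likely direction, not the tool's implemented fix:

import boto3
from botocore.client import Config

# Frankfurt (eu-central-1) accepts only Signature Version 4
s3 = boto3.client('s3', region_name='eu-central-1',
                  config=Config(signature_version='s3v4'))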

Wednesday, July 19, 2017

S3 buckets audit: check bucket existence, public access level, etc - without having access to target AWS account

      Publicly accessible buckets have become a big deal and the root cause of many recent data leaks.
These events have even driven Amazon AWS to proactively send out emails to customers who have such S3 configurations. Let's become a bit more proactive as well and audit our S3 buckets.

        First, let's take a look at why a bucket might become publicly available:
- configured for public access intentionally (S3 static web hosting or just a public resource) or by mistake
- configured for access by Authenticated Users (an option many misinterpret as meaning users from your own account, which is wrong: it means any authenticated AWS user, from any account)
     
         Auditing an AWS account you have full access to is quite easy: just list the buckets and check their ACLs, users and bucket policies via the AWS CLI or web GUI.

         What about the cases when you:
- have many accounts and buckets (it would take forever to audit them manually)
- do not have enough permissions in the target AWS account to check bucket access
- have no permissions at all in this account (pentester mode)

To address all of the above, I've created a small tool to do the dirty work for you (updated to v2):
https://github.com/IhorKravchuk/it-security/tree/master/scripts.aws


$python aws_test_bucket.py --profile prod-read --bucket test.bcuket

  -P AWS_PROFILE, --profile=AWS_PROFILE
                        Please specify AWS CLI profile
  -B BUCKET, --bucket=BUCKET
                        Please provide bucket name
  -F FILE, --file=FILE  Optional: file with buckets list to check

Note: --profile=AWS_PROFILE - any of your AWS access profiles (from the AWS CLI). This profile must NOT have access to the audited bucket (we need it just to become an Authenticated User from AWS's point of view).

You can specify one bucket to check using the --bucket option, or a file with a list of buckets (one bucket name per line) using the --file option.


Based on the bucket's access status, the tool will give you one of the following responses:

Bucket: test.bucktet - The specified bucket does not exist
Bucket: test.bucktet -  Bucket exists, but Access Denied
Bucket: test.bucktet -  Found index.html, most probably S3 static web hosting is enabled
Bucket: test.bucktet - Bucket exists, publicly available and no S3 static web hosting, most probably misconfigured! 
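
Under the hood, checks like these come down to interpreting the S3 API error codes. A minimal sketch of the technique (simplified; the real tool does more, e.g. the index.html probe):

import boto3
from botocore.exceptions import ClientError

s3 = boto3.Session(profile_name='prod-read').client('s3')

def bucket_status(name):
    try:
        s3.head_bucket(Bucket=name)
        return 'Bucket exists and is accessible'
    except ClientError as e:
        code = e.response['Error']['Code']
        if code == '404':
            return 'The specified bucket does not exist'
        if code == '403':
            return 'Bucket exists, but Access Denied'
        return 'Unexpected error: %s' % code

print('Bucket: test.bucktet - %s' % bucket_status('test.bucktet'))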

Enjoy!

PS. Moreover, you can create a list of buckets to test in a file (even using some DNS/name alterations and permutations) and loop through it, checking each one.
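
For example, here is a tiny generator of such a list from a base name and a few common suffixes (all names are made up):

base = 'example'
words = ['backup', 'logs', 'dev', 'staging', 'prod', 'static', 'www']

# write one candidate bucket name per line, as the --file option expects
with open('buckets.list', 'w') as f:
    f.write(base + '\n')
    for w in words:
        for name in ('%s.%s' % (base, w), '%s.%s' % (w, base),
                     '%s-%s' % (base, w), '%s-%s' % (w, base)):
            f.write(name + '\n')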

Stay secure.


Thursday, March 30, 2017

Trailing dot in DNS name, incorrect S3 website endpoint work and possible back-end information leak

I discovered that the AWS S3 website endpoint incorrectly interprets a trailing dot (which is actually an essential part of an FQDN according to RFC 1034) in the website FQDN.
Instead of referring to the correct bucket endpoint, it returns a "No such bucket" error, revealing information about the website's back end.
Initially I considered this not a security issue but rather a misconfiguration, or even expected undocumented behaviour, but then I found one case that could lead to others:

If a website uses a third-party DDoS and WAF protection service like CloudFlare, this technique (adding a trailing dot) could reveal and expose the website's origin.

An example of the possible information disclosure is below:

[Screenshot: DNS name resolution pointing to CloudFlare]

[Screenshot: trailing-dot error pointing to the S3 bucket back end, with the rest of the information pointing to CloudFlare]
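
To check your own sites for this behaviour, comparing the responses with and without the trailing dot is enough. A rough sketch, assuming the requests library and a made-up domain:

import requests

site = 'www.example.com'
for host in (site, site + '.'):
    r = requests.get('http://%s/' % host, timeout=10)
    # S3 error pages mention NoSuchBucket and carry the AmazonS3 Server header
    s3_leak = 'NoSuchBucket' in r.text or 'AmazonS3' in r.headers.get('Server', '')
    print('%s -> %s, S3 back end revealed: %s' % (host, r.status_code, s3_leak))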

PS. One possible use of this S3 back-end information leak could be S3 bucket name squatting: because bucket names are globally unique, squatting a name can block the victim's possible sub-domain usage.

Wednesday, February 1, 2017

MediaWiki as a static website and content sharing

Using a wiki for knowledge management, in a team or individually, is easy and often an obvious choice.
       Challenges appear when you need to share information stored in the wiki.
The challenges are: hardening the MediaWiki installation for public access, and sharing only part of the wiki content.

If your main goal is just to publish content, you can extract wiki pages as static HTML pages using a relatively simple wget one-liner. After extracting, you can publish your wiki using AWS S3 static website hosting.

To share only part of the information available in the wiki, you can leverage Categories and restrict user access to specified categories using a special extension. Afterwards, you can use this restricted user access to grab the wiki content.
Another simple way is to use the Category special wiki page as a starting point for a crawler to grab the pages related to a specific category, let's say the Public category.
The code is way shorter than all the description above:

# get the wiki content
wget --recursive --level=1 --page-requisites --html-extension --no-directories --convert-links --no-parent -R "*Special*" -R "*action=*" -R "*printable=*"  -R "*oldid=*" -R "*title=Talk:*" -R "*limit=*" "http://mywikiprivate:80/wiki/index.php/Category:Public"
# replace links to sensitive internal pages with a link to a stub page
sed -i -E 's/http:\/\/mywikiprivate[^"]*/http:\/\/wiki.your_website.ca\/404.html/g' *.html
# remove the sensitive file
rm Category\:Public.1.html
# rename the Public category page to serve as a list of published pages
mv Category:Public.html Public.html
# sync the content to AWS
aws s3 sync ./ s3://you_bucket/


The result of running such a script, along with some public notes from my wiki, can be found here:
http://wiki.it-security.ca


Disclaimer: the current wiki publication contains only a small part of the information available and will be updated on an almost daily basis to add more content cleared for publishing. The main purpose of this wiki is to keep technical notes and references in a structured way. Some of them are obvious, outdated or incomplete.

The goal of establishing a public publishing process is to keep the wiki information up to date and to have the ability to publish small useful notes that don't fit the blog's format and style.

Monday, November 7, 2016

Secure your AWS account using CloudFormation


      The very first thing you need to do while building your AWS infrastructure is to enable and configure all AWS account-level security features, such as CloudTrail, AWS Config, CloudWatch, IAM, etc.
       To do this, you can use my Amazon AWS Account level security checklist and how-to, or any other source.
        To avoid manual steps and to align with the SecurityAsCode concept, I use a set of CloudFormation templates, a simplified version of which I would like to share:


Global Security stack template structure:


security.global.json - the parent template for all nested templates; it links them together and controls the dependencies between nested stacks.


cloudtrail.global.json - nested template for the global configuration of CloudTrail:

  • creates an S3 bucket for the logs
  • creates CloudTrail-related IAM roles and policies
  • creates a CloudWatch Logs log group
  • enables CloudTrail in the default region, including global events and the multi-region feature
  • creates the SNS and SQS configuration for easy integration with the Splunk AWS app

cloudtrailalarms.global.json - nested template for global CloudWatch Logs alarms and security metric creation. It uses a FilterMap to create different security-related filters for the CloudTrail log group, the corresponding metrics, and notifications for suspicious or dangerous events. You can customise filters on a per-environment basis (a boto3 sketch of one such filter follows the list below).

Predefined filters are:
  • rds-change: RDS related changes
  • iam-change: IAM changes
  • srt-instance: Start, Reboot, Terminate instance
  • large-instance: launching large instances
  • massive-operations: massive operations (more than 10 in 5 min)
  • massive-terminations: massive terminations (more than 10 in 5 min)
  • detach-force-ebs: force detachment of the EBS volume from the instance
  • change-critical-ebs: any changes related to the critical EBS volumes
  • change-secgroup: any changes related to the security group
  • create-delete-secgroup: creation and deletion of the security group 
  • secgroup-instance: attaching security group to the instance
  • route-change: routing  changes
  • create-delete-vpc: creation and deletion of a VPC
  • netacl-change: changes at Network ACL
  • cloudtrail-change: changes in the CloudTrail configuration
  • cloudformation-change: changes related to the CloudFormation
  • root-access: any root access events
  • unauthorised: failed and unauthorised operations
  • igw-change: Internet Gateway related changes
  • vpc-flow-logs: Delete or Create VPC flow logs
  • critical-instance: any operation on the critical instances
  • eip-change: Elastic IP changes
  • net-access: Any access outside of predefined known IP ranges
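
What the template does declaratively for each FilterMap entry is equivalent to creating a metric filter on the CloudTrail log group. A boto3 sketch for a single entry, the unauthorised filter (the log group and metric names are placeholders; the template's actual patterns may differ):

import boto3

logs = boto3.client('logs')

logs.put_metric_filter(
    logGroupName='CloudTrail/DefaultLogGroup',   # placeholder name
    filterName='unauthorised',
    # flag failed and unauthorised API calls in CloudTrail events
    filterPattern='{ ($.errorCode = "*UnauthorizedOperation") || ($.errorCode = "AccessDenied*") }',
    metricTransformations=[{
        'metricName': 'UnauthorisedOperations',
        'metricNamespace': 'Security',
        'metricValue': '1',
    }],
)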

4 preconfigured notification topics:
  • InfosecEmailTopic
  • DevOpsEmailTopic
  • InfosecSMSTopic
  • DevOpsSMSTopic


awsconfig.global.json - nested template for the global AWS Config service configuration:
  • creates an S3 bucket for the config dumps
  • creates AWS Config-related IAM roles and policies
  • creates the AWS Config delivery channel and schedules config dumps (hourly)
  • creates and enables the AWS Config recorder
  • creates the SNS and SQS configuration for easy integration with the Splunk AWS app

cloudwatchsubs.global.json - nested template for configuring an AWS CloudWatch Logs subscription filter to extract and analyse the most severe CloudTrail events using a custom Lambda function:
  • creates the Lambda function and all required roles and permissions
  • creates the subscription filter as a compilation of the filters from the FilterMap
      
It currently uses the following filters, aggregated into one due to an AWS CloudWatch Logs subscription limitation (only one filter is supported):
  • critical-instance
  • iam-change
  • srt-instance
  • cloudtrail-change
  • root-access
  • net-access
  • detach-force-ebs
  • unauthorised

iam.global.json - nested template for the IAM global configuration:
  • creates the Infosec Team IAM group and managed policy
  • creates the DevOps Team IAM group and managed policy
  • creates the DBA Team IAM group and managed policy
  • creates a Self Service policy for users to manage their API keys and MFA
  • creates ProtectProdEnviroment to protect the production environment from destructive actions
  • creates EnforceMFAPolicy to enforce MFA for sensitive operations (a policy sketch follows this list)
  • creates EnforceAccessFromOfficePolicy to restrict some operations to office source IPs
  • creates the DomainJoin role and all required policies to perform automated domain joins
  • creates the SaltMasterPolicy and role for the configuration management tool (in this case, Salt)
  • creates the SQLDataBaseInstancePolicy and an instance profile example policy
  • creates a SIEM system example policy
  • creates the VPC flow log role
  • creates and manages the SIEM user and API keys
  • creates and manages the SMTP user (for the AWS SES service) and API keys
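
As promised above, a small illustration of the EnforceMFAPolicy idea: the usual approach is an explicit deny on sensitive actions whenever MFA is not present. A sketch via boto3 (the action list is an example, not the template's exact content):

import json
import boto3

iam = boto3.client('iam')

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenySensitiveOpsWithoutMFA",
        "Effect": "Deny",
        # example actions only; the real template may protect more
        "Action": ["ec2:TerminateInstances", "iam:*"],
        "Resource": "*",
        # deny when the request carries no valid MFA context
        "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}}
    }]
}

iam.create_policy(PolicyName='EnforceMFAPolicy',
                  PolicyDocument=json.dumps(policy))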


cloudwatchsubs_kinesis.global.json - PoC template (not linked as nested into security.global.json) for configuring an AWS CloudWatch Logs subscription filter to send the most severe CloudTrail events to an AWS Kinesis stream, using a subscription filter similar to cloudwatchsubs.global.json.

Supported features:

Environments and regions: the stack supports an unlimited number of environments, with 4 environments predefined (staging, dev, prod and dr), and uses a 1-account, 1-region-per-environment concept to reduce the blast radius (if an account becomes compromised).

AWS services used by the stack: CloudTrail, AWS Config, CloudWatch, CloudWatch Logs and Events, IAM, Lambda, Kinesis.

To deploy:

  1. Create a bucket using the following naming convention: com.ChangeMe.EnvironmentName.cloudform, replacing ChangeMe and EnvironmentName with your values to make it look like this: com.it-security.prod.cloudform
  2. Enable bucket versioning.
  3. In the templates security.global.json and cloudwatchsubs.global.json, replace "ChangeMe" with the name used in the bucket creation.
  4. In the template cloudtrailalarms.global.json, modify the SNS endpoints for email notification (infosec@ChangeMe.com and devops@ChangeMe.com); add endpoints with mobile phone numbers to the appropriate SNS topics if SMS notification is needed.
  5. Modify the iam.global.json template to address your SQL database bucket location (com-ChangeMe-", {"Ref": "Environment"} , "-sqldb/) and modify any permissions if needed, according to your organisation's structure, roles, responsibilities and services.
  6. Modify the FilterMap in the cloudtrailalarms.global.json and cloudwatchsubs.global.json templates to make the filters work for your infrastructure (critical instance IDs, critical volume IDs, your office IP range, your NAT gateways, etc.).
  7. Zip the example Lambda function LogCritical_lambda_security_global.py as LogCritical_lambda_security_global.zip.
  8. Upload this function into the S3 bucket created at step 1, copy the object version (GUI: show versions in object properties) and insert it into the "LogCriticalLambdaCodeVer" mapping of the cloudwatchsubs.global.json template, under the appropriate environment (prod, staging, ...).
  9. Modify the "regions" Environments mapping in the iam.global.json and cloudwatchsubs.global.json templates to specify the correct AWS region you are using for the deployment.
  10. Upload all *.global.json templates into the S3 bucket created at step 1.
  11. Create a new CloudFormation stack using the parent security template security.global.json and your bucket name (example: https://s3.amazonaws.com/com.it-security.prod.cloudform/security.global.json), call it "Security" and specify the environment name you are going to deploy (or do it from code, as sketched below).
  12. Done!
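
A sketch of the boto3 equivalent of step 11 (the Environment parameter name is my assumption about the template):

import boto3

cf = boto3.client('cloudformation')
cf.create_stack(
    StackName='Security',
    TemplateURL='https://s3.amazonaws.com/com.it-security.prod.cloudform/security.global.json',
    Parameters=[{'ParameterKey': 'Environment', 'ParameterValue': 'prod'}],
    Capabilities=['CAPABILITY_IAM'],  # the stack creates IAM resources
)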

Tuesday, October 18, 2016

Self-Defending Cloud PoC or Amazon CloudWatch Events usage

Problem: a malicious attacker gets privileged access to your AWS account and destroys your production infrastructure in a matter of seconds.

In the case of cloud-based infrastructures, you can't rely on classic SIEM solutions: by the time your SIEM detects the attack, your infrastructure will be gone. We need a "near real time" way to detect and mitigate the attack.

Attack scenario: using a compromised AWS API key (no MFA) and the CLI/SDK to perform destructive actions.

Attack detection and mitigation strategy:
All destructive actions start with EC2 instance termination. To prevent this scenario, you should always have the "TerminationProtection" feature enabled on your production instances. Given that, an attacker must disable termination protection before demolishing your environment. For the PoC, I will use the disabling of "TerminationProtection" as the attack detector (sure thing, real attack detection is a far more complicated process).

Starting point: an AWS API key with an admin policy attached; all production instances protected using AWS termination protection.




Possible solutions and their attack mitigation delays:

  • SIEM
  • CloudWatch Logs and alarms
  • CloudWatch Subscriptions
  • CloudWatch Events

[Timing diagrams for each option omitted]


Implementation: 

Design:
Based on the implementation scenarios shown above and their performance (tested during the PoC), the fastest way is to leverage AWS CloudWatch Events and trigger a Lambda function.

Speaking in AWS technical language, we need to:

  1. Choose what type of AWS event we are looking for. Based on our attack detection strategy, we are looking for the EC2 call modify-instance-attribute: this is exactly the API call that enables/disables TerminationProtection. So let's look for the "AWS API Call" type of CloudWatch event.
  2. Create an event rule to "match incoming events and route them to one or more targets for processing"; in our case the target is a Lambda function.
  3. Create all required policies and the Lambda function role in IAM.
  4. Build the Lambda function itself.


Setting up the event rule:

I used the following event pattern inside the CloudWatch Events rule:

{
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventSource": ["ec2.amazonaws.com"],
    "eventName": ["ModifyInstanceAttribute"]
  }
}


Getting a sample event:

To start writing our event-detection-and-mitigation Lambda function, we need to get an example of the AWS event for the API call we are monitoring.
We can achieve this with the following simple Lambda function:

import json

def lambda_handler(event, context):
    print event


or, if you need nicely formatted JSON to test your Lambda function offline:

import json

def lambda_handler(event, context):
    print json.dumps(event, indent=4, sort_keys=False)

You will find the output of your Lambda function (the result of the print statement) in the corresponding CloudWatch Logs log stream (named after your Lambda function).
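
Once you have captured a sample event this way, you can replay it against your handler locally; a tiny sketch (the file and module names are hypothetical):

import json

from my_lambda import lambda_handler  # your handler module (hypothetical name)

with open('sample_event.json') as f:   # the JSON captured from CloudWatch Logs
    event = json.load(f)

lambda_handler(event, None)  # context is not used by these simple handlers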


Challenges with the event format:

During the first tests, I found that Amazon AWS does not follow any JSON contract (defined format), even for one and the same API call. Making the same API call in 3 different ways produced 3 different event formats:

Disabling TerminationProtection from GUI with MFA:

{u'account': u'150905', u'region': u'eu-west-1', u'detail': {u'eventVersion': u'1.05', u'eventID': u'b3c4d3b4-353e-44bf-8973-37abccd085b5', u'eventTime': u'2016-10-14T17:31:37Z', u'requestParameters': {u'instanceId': u'i-d8916d57', u'disableApiTermination': {u'value': False}}, u'eventType': u'AwsApiCall', u'responseElements': {u'_return': True}, u'awsRegion': u'eu-west-1', u'eventName': u'ModifyInstanceAttribute', u'userIdentity': {u'userName': u'ihork', u'principalId': u'AIDAI3UNW', u'accessKeyId': u'ASIAIN2', u'invokedBy': u'signin.amazonaws.com', u'sessionContext': {u'attributes': {u'creationDate': u'2016-10-14T16:48:44Z', u'mfaAuthenticated': u'true'}}, u'type': u'IAMUser', u'arn': u'arn:aws:iam::150905:user/igor', u'accountId': u'150905'}, u'eventSource': u'ec2.amazonaws.com', u'requestID': u'e7b585e-af38-49d0-88a8-979ef5052f', u'userAgent': u'signin.amazonaws.com', u'sourceIPAddress': u'174.231.5.2'}, u'detail-type': u'AWS API Call via CloudTrail', u'source': u'aws.ec2', u'version': u'0', u'time': u'2016-10-14T17:31:37Z', u'id': u'55084ea-e4bc-45e6-a7a6-0c8e7d16b32', u'resources': []}

Disabling TerminationProtection from aws cli (no MFA):

command:
$ aws ec2 modify-instance-attribute --no-disable-api-termination --instance-id i-378579b8 

event:
{u'account': u'150905', u'region': u'eu-west-1', u'detail': {u'eventVersion': u'1.05', u'eventID': u'f8ae9323-91b0-4100-b27b-dce348641a5c', u'eventTime': u'2016-10-14T17:31:46Z', u'requestParameters': {u'instanceId': u'i-d8916d57', u'disableApiTermination': {u'value': True}}, u'eventType': u'AwsApiCall', u'responseElements': {u'_return': True}, u'awsRegion': u'eu-west-1', u'eventName': u'ModifyInstanceAttribute', u'userIdentity': {u'userName': u'ihork', u'principalId': u'AIDAI3UNW', u'accessKeyId': u'ASIAIN2', u'invokedBy': u'signin.amazonaws.com', u'sessionContext': {u'attributes': {u'creationDate': u'2016-10-14T16:48:44Z', u'mfaAuthenticated': u'true'}}, u'type': u'IAMUser', u'arn': u'arn:aws:iam::150905:user/igor', u'accountId': u'150905'}, u'eventSource': u'ec2.amazonaws.com', u'requestID': u'cd889e-039e-4f8f-bfe9-4d293012335', u'userAgent': u'signin.amazonaws.com', u'sourceIPAddress': u'174.231.5.2'}, u'detail-type': u'AWS API Call via CloudTrail', u'source': u'aws.ec2', u'version': u'0', u'time': u'2016-10-14T17:31:46Z', u'id': u'cd32fc6-39ae-4237-b46d-62d237d4d89', u'resources': []}

Disabling TerminationProtection from aws cli. 2nd variant

command:
$ aws ec2 modify-instance-attribute --attribute disableApiTermination --value false --instance-id i-199a6696 

event:
{u'account': u'150905', u'region': u'eu-west-1', u'detail': {u'eventVersion': u'1.05', u'eventID': u'75fe4852-d3d9-4c9c-a702-7f025e0c4c50', u'eventTime': u'2016-10-14T17:29:24Z', u'requestParameters': {u'instanceId': u'i-d8916d57', u'attribute': u'disableApiTermination', u'value': u'false'}, u'eventType': u'AwsApiCall', u'responseElements': {u'_return': True}, u'awsRegion': u'eu-west-1', u'eventName': u'ModifyInstanceAttribute', u'userIdentity': {u'userName': u'ihork', u'principalId': u'AIDAI3U7LBIY', u'accessKeyId': u'AKIAJL7', u'type': u'IAMUser', u'arn': u'arn:aws:iam::150905:user/igor', u'accountId': u'150905'}, u'eventSource': u'ec2.amazonaws.com', u'requestID': u'd3c46-2def-4450-b3c1-4827d9f78', u'userAgent': u'aws-cli/1.10.45 Python/2.7.11 Linux/4.7.3-100.fc23.x86_64 botocore/1.4.60', u'sourceIPAddress': u'174.231.5.2'}, u'detail-type': u'AWS API Call via CloudTrail', u'source': u'aws.ec2', u'version': u'0', u'time': u'2016-10-14T17:29:24Z', u'id': u'9fb88a9e-025b-4859-9b98-6180cd14a9b', u'resources': []}


Take a close look at:

'sessionContext': {u'attributes': {u'creationDate': u'2016-10-14T16:48:44Z', u'mfaAuthenticated': u'true'}} - do not expect this part of the JSON if you are not using MFA.

u'disableApiTermination': {u'value': True}} and 'attribute': u'disableApiTermination', u'value': u'false'} - the same API call, done using different AWS CLI options but serving the same purpose, produces 2 different events.

How can we disable a user in AWS?

You just can't disable a user in AWS. You can delete one, but you need to remove the user from all groups first. That takes time, lines of code and API calls. The solution? Attach an inline user policy with an explicit deny (which overrides all allows) for all the actions you need to block.

Lambda function:

Here is my PoC Lambda function: really "dirty" and serving only one simple use case:

import boto3

def lambda_handler(event, context):
    print event
# analyzing the event
    if event['detail']['requestParameters'].get('disableApiTermination') is not None:
        protection_status = event['detail']['requestParameters']['disableApiTermination']['value']
        UserName = event['detail']['userIdentity']['userName']
        UserID = event['detail']['userIdentity']['principalId']
        if event['detail']['userIdentity'].get('sessionContext') is not None:
            mfa = event['detail']['userIdentity']['sessionContext']['attributes']['mfaAuthenticated']
        else:
            mfa = "false"
        print protection_status, UserName, UserID, mfa
# disabling the user with an inline user policy if no MFA was used
        if mfa != "true" and not protection_status:
            iam = boto3.resource('iam')
            user_policy = iam.UserPolicy(UserName, 'disable_user')
            response = user_policy.put(PolicyDocument='{ "Version": "2012-10-17", "Statement": [{"Sid": "Disableuser01","Effect": "Deny","Action": ["ec2:StopInstances", "ec2:TerminateInstances"],"Resource": ["*"]}]}')
            print response


How near is this "near real time"?
My tests showed about a 40-second delay. IMHO, that's too much for "near real time". I'm still looking into the potential bottlenecks; the delay may be caused by the Lambda function itself or by the event type I used.


Conclusions:
- not "near real time" enough to react fast and mitigate an attack without additional protective measures
- could work if you are able to detect the attack 40 seconds earlier
- could reduce the overall damage
- definitely very, very promising if the reaction delay gets lower (let's say 5-10 seconds)

Update:

Feel free to pull the AWS CloudFormation template for the PoC above from GitHub.

To deploy you need:

1. selfdefence.infosec.vpc.json - the template itself.

2. selfdefence_infosec.py - the Lambda function. You will need to zip it and upload it to an S3 bucket with versioning enabled (see the sketch after this list).

3. Edit the template (selfdefence.infosec.vpc.json) and specify: the S3 bucket name in the format you.bucket.name.env.cloudform (where env is your environment name: prod, test, staging, etc.) and the S3 version of the selfdefence_infosec.zip file.

4. Upload the template to the same S3 bucket.

5. Create a stack using this template and specify the corresponding environment name at creation time.
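
For step 2, the object version needed in step 3 is returned at upload time if the bucket has versioning enabled; a small boto3 sketch (the bucket name just follows the convention above):

import boto3

s3 = boto3.client('s3')
with open('selfdefence_infosec.zip', 'rb') as f:
    resp = s3.put_object(Bucket='you.bucket.name.prod.cloudform',  # example name
                         Key='selfdefence_infosec.zip', Body=f)
print(resp['VersionId'])  # paste this version into the template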

Enjoy! 

Wednesday, September 21, 2016

S3 bucket policies for sensitive security logs storage

Inspired by this AWS blog post: How to Restrict Amazon S3 Bucket Access to a Specific IAM Role

Goal:
Build storage for sensitive security logs using an S3 bucket.

Restrictions:

  • EC2 instances can only upload logs.
  • The infosec team can only download logs and (just for this particular case) delete them, with MFA.
  • All other users must have no access, regardless of whatever is mentioned in their IAM policies.

Solution:
a custom bucket policy


        "PolicyDocument": {
          "Version": "2012-10-17",
          "Statement": [
            {
              "Sid": "OnlyForInfosecEyes",
              "Effect": "Deny",
              "Principal":"*",
              "Action": ["s3:GetObject*", "s3:Delete*", "s3:PutObjectAcl", "s3:PutObjectVersionAcl"],
              "Resource": "s3-top-secret-bucket/*",
              "Condition": {
                "StringNotLike": {
                  "aws:userId":  "InfosecGroupUserIDs"
                }
              }
            },
            {
              "Sid": "OnlyServerAllowToPut",
              "Effect": "Deny",
              "Principal":"*",
              "Action": ["s3:PutObject"],
              "Resource": "s3-top-secret-bucket/*",
              "Condition": {
                "StringNotLike": {
                  "aws:userId":  "SeverIAMRoleID:*"
                }
              }
            },
            {
              "Sid": "EnforceEncryption",
              "Effect": "Deny",
              "Principal":"*",
              "Action": ["s3:PutObject"],
              "Resource": "s3-top-secret-bucket/*",
              "Condition": {
                "Null": {
                  "s3:x-amz-server-side-encryption": "true"
                }
              }
            },
            {
              "Sid": "EnforceMFADelete",
              "Effect": "Deny",
              "Principal":"*",
              "Action": ["s3:Delete*"],
              "Resource": "s3-top-secret-bucket/*",
              "Condition": {
                "Null": {
                  "aws:MultiFactorAuthAge": true
                }
              }
            }
          ]
        }

Where:

InfosecGroupUserIDs - the list of IAM infosec users' IDs (aws iam get-user --user-name USER-NAME)

ServerIAMRoleID:* - the ID of the IAM role used by your EC2 server instances, with ":*" appended to cover all instances assuming this role (aws iam get-role --role-name ROLE-NAME).
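
Collecting those IDs by hand gets tedious; here is a minimal boto3 sketch that gathers them (the group and role names are assumptions for illustration):

import boto3

iam = boto3.client('iam')

# unique IDs of all infosec group members, for the OnlyForInfosecEyes condition
group = iam.get_group(GroupName='Infosec')           # assumed group name
print([u['UserId'] for u in group['Users']])

# role ID with ':*' appended, for the OnlyServerAllowToPut condition
role_id = iam.get_role(RoleName='ServerRole')['Role']['RoleId']  # assumed role name
print(role_id + ':*')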