Ante Miličević
January 7, 2024

26 AWS Security Best Practices: Part 2

In part two of our series, we delve into security best practices for Amazon S3, AWS CloudTrail, and AWS Config. Keep reading to learn even more.

Welcome to part 2 of our in-depth series on AWS security. If you haven’t read part 1 yet, which covers the basics and IAM policies, we recommend you head over there right away. Now, without further ado, let’s get started with Amazon S3.

Amazon S3

The object storage service Amazon S3 is well known for its scalability, data availability, security, and performance. It is recommended to follow certain security practices when working with it.

9. Enable S3 Block Public Access Setting

The Amazon S3 public access block is a control mechanism that spans an entire AWS account or operates on an individual S3 bucket level to ensure that objects within never have public access. Public access is typically granted through access control lists (ACLs), bucket policies, or a combination of both.

Unless the intention is to make S3 buckets publicly accessible, it is advisable to configure the Amazon S3 Block Public Access feature at the account level.

Retrieve the names of all S3 buckets in your AWS account:

<pre class="codeWrap"><code>aws s3api list-buckets --query 'Buckets[*].Name'</code></pre>

For each returned bucket, obtain its S3 Block Public Access configuration:

<pre class="codeWrap"><code>aws s3api get-public-access-block --bucket BUCKET_NAME</code></pre>

The output for the above command should resemble the following:

<pre class="codeWrap"><code>"PublicAccessBlockConfiguration": {
 "BlockPublicAcls": false,
 "IgnorePublicAcls": false,
 "BlockPublicPolicy": false,
 "RestrictPublicBuckets": false

If any of the values is set to false, take corrective action to safeguard your data privacy:

<pre class="codeWrap"><code>aws s3api put-public-access-block
--region REGION
--bucket BUCKET_NAME
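To audit many buckets at once, the check itself can be scripted. The sketch below is plain POSIX shell and makes no AWS calls: it only inspects JSON of the shape shown above, so you can pipe the output of `get-public-access-block` into it (the function name is our own):

```shell
# check_public_block: read a PublicAccessBlockConfiguration JSON document on
# stdin and print NEEDS_FIX if any of the four settings is false, OK otherwise.
check_public_block() {
  if grep -q 'false'; then
    echo "NEEDS_FIX"
  else
    echo "OK"
  fi
}

# Example: run it against a snippet of the all-false output shown above.
printf '"BlockPublicAcls": false\n' | check_public_block   # prints NEEDS_FIX
```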

10. Activate Server-Side Encryption for S3 Buckets

For enhanced security of sensitive data stored in S3 buckets, it is essential to configure the buckets with server-side encryption, ensuring data protection at rest. Amazon S3 employs unique keys to encrypt each object, and for an additional layer of security, it encrypts the key itself with a regularly rotated root key. The server-side encryption in Amazon S3 utilizes the robust 256-bit Advanced Encryption Standard (AES-256) block cipher.

Image of AWS encryption illustration

List all existing S3 buckets in your AWS account:

<pre class="codeWrap"><code>aws s3api list-buckets --query 'Buckets[*].Name'</code></pre>

Use the names of the S3 buckets obtained in the previous step to check their Default Encryption status:

<pre class="codeWrap"><code>aws s3api get-bucket-encryption --bucket BUCKET_NAME</code></pre>

The output should provide details about the configuration of the requested feature. If an error occurs, it indicates that default encryption is not enabled for the selected S3 bucket, and consequently, objects are not automatically encrypted when stored in Amazon S3.

Repeat this procedure for each of your S3 buckets.
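If default encryption is missing, it can be enabled with `put-bucket-encryption`. Below is a minimal sketch, assuming AWS CLI v2 and a placeholder bucket name; the script prints the command instead of executing it, so it is safe to run as-is:

```shell
# Placeholder bucket; substitute a name returned by list-buckets.
bucket="BUCKET_NAME"

# Print the remediation command that enables SSE-S3 (AES-256) default
# encryption; remove the leading `echo` to actually run it.
echo aws s3api put-bucket-encryption \
  --bucket "$bucket" \
  --server-side-encryption-configuration \
  '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'
```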

11. Activate S3 Block Public Access Setting (at the Bucket Level)

The Amazon S3 public access block feature is crafted to offer controls either across the entire AWS account or at the specific S3 bucket level. Its purpose is to ensure that objects within the buckets do not have public access. Public access can be granted to both buckets and objects through the use of access control lists (ACLs), bucket policies, or a combination of both.

If you do not intend for your S3 buckets to be publicly accessible, which is generally advisable, it is recommended to enable the Amazon S3 Block Public Access feature on each bucket as well, in addition to the account-level setting covered earlier.

To identify publicly accessible S3 buckets, you can employ the Cloud Custodian rule mentioned below:

<pre class="codeWrap"><code>- name: buckets-public-access-block
  description: Amazon S3 provides Block public access (bucket settings) and Block public access (account settings) to help you manage public access to Amazon S3 resources. By default, S3 buckets and objects are created with public access disabled. However, an IAM principle with sufficient S3 permissions can enable public access at the bucket and/or object level. While enabled, Block public access (bucket settings) prevents an individual bucket, and its contained objects, from becoming publicly accessible. Similarly, Block public access (account settings) prevents all buckets, and contained objects, from becoming publicly accessible across the entire account.
  resource: s3
    - or:
      - type: check-public-block
        BlockPublicAcls: false
      - type: check-public-block
        BlockPublicPolicy: false
      - type: check-public-block
        IgnorePublicAcls: false
      - type: check-public-block
        RestrictPublicBuckets: false
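For defense in depth, the same four settings can also be enforced for the whole account through the `s3control` API. The account ID below is a placeholder, and as a precaution the sketch prints the command rather than running it:

```shell
# Placeholder AWS account ID.
account_id="111122223333"

# Print the account-level Block Public Access command; remove the
# leading `echo` to execute it for real.
echo aws s3control put-public-access-block \
  --account-id "$account_id" \
  --public-access-block-configuration \
  BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
```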

AWS CloudTrail

Following IAM in the hierarchy of AWS security best practices, CloudTrail emerges as a crucial service for threat detection. AWS CloudTrail serves as a fundamental AWS tool facilitating governance, compliance, and operational risk auditing within your AWS account. Every action executed by a user, role, or AWS service is meticulously recorded as events within CloudTrail.

These events encompass actions performed in the AWS Management Console, AWS Command Line Interface, as well as through AWS SDKs and APIs. The upcoming segment will guide you through the process of configuring CloudTrail to effectively monitor your infrastructure across all regions.

12. Activate and Set Up CloudTrail, Including a Multi-Region Trail

CloudTrail maintains a comprehensive record of AWS API calls within an account, encompassing calls from the AWS Management Console, AWS SDKs, and command line tools. It also covers API calls from higher-level AWS services like AWS CloudFormation.

Image of AWS CloudTrail illustration

The AWS API call history provided by CloudTrail serves multiple purposes, including security analysis, tracking resource changes, and facilitating compliance auditing.

Multi-Region trails offer additional advantages:

  • Detect unexpected activities in rarely used Regions.
  • Ensure default activation of global service event logging for a trail, capturing events from AWS global services.
  • For multi-Region trails, management events cover all read and write operations, ensuring CloudTrail records management operations on all resources in an AWS account.

When created through the AWS Management Console, CloudTrail trails default to being multi-Region trails.

To view all trails in the chosen AWS region, run the following command:

<pre class="codeWrap"><code>aws cloudtrail describe-trails</code></pre>

The output displays each AWS CloudTrail trail along with its configuration details. If the IsMultiRegionTrail configuration parameter is set to false, the selected trail is not currently enabled for all AWS regions:

<pre class="codeWrap"><code>{
   "trailList": [
           "IncludeGlobalServiceEvents": true,
           "Name": "ExampleTrail",
           "TrailARN": "arn:aws:cloudtrail:us-east-1:123456789012:trail/ExampleTrail",
           "LogFileValidationEnabled": false,
           "IsMultiRegionTrail": false,
           "S3BucketName": "ExampleLogging",
           "HomeRegion": "us-east-1"

Ensure that all your trails are reviewed, confirming that at least one is configured as a multi-Region trail.
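A trail that reports IsMultiRegionTrail set to false can be converted in place with `update-trail`. The sketch below reuses the `ExampleTrail` name from the sample output and prints the command for review rather than executing it:

```shell
# Trail name as reported by describe-trails (placeholder).
trail="ExampleTrail"

# Print the command that turns the trail into a multi-Region trail;
# remove the leading `echo` to execute it.
echo aws cloudtrail update-trail \
  --name "$trail" \
  --is-multi-region-trail
```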

13. Implement Encryption at Rest for CloudTrail Logs

Ensure that CloudTrail is set up with server-side encryption (SSE) using an AWS Key Management Service (KMS) customer master key (CMK).

To confirm encryption, check if the KmsKeyId is defined. For enhanced security of sensitive CloudTrail log files, employ server-side encryption with AWS KMS-managed keys (SSE-KMS) for encrypting log files at rest. It's important to note that by default, CloudTrail log files delivered to your buckets are encrypted using Amazon server-side encryption with Amazon S3-managed encryption keys (SSE-S3).

Verify log file encryption with the provided Cloud Custodian rule:

<pre class="codeWrap"><code>- name: cloudtrail-logs-encrypted-at-rest
  description: AWS CloudTrail is a web service that records AWS API calls for an account and makes those logs available to users and resources in accordance with IAM policies. AWS Key Management Service (KMS) is a managed service that helps create and control the encryption keys used to encrypt account data, and uses Hardware Security Modules (HSMs) to protect the security of encryption keys. CloudTrail logs can be configured to leverage server side encryption (SSE) and KMS customer created master keys (CMK) to further protect CloudTrail logs. It is recommended that CloudTrail be configured to use SSE-KMS.
  resource: cloudtrail
    - type: value
      key: KmsKeyId
      value: absent


You can address this by following these steps in the AWS Console:

1. Log in to the AWS Management Console.

2. In the left navigation panel, go to Trails.

3. Under the Name column, select the trail name requiring an update.

4. Click the pencil icon next to the S3 section to edit the trail bucket configuration.

5. Under S3 bucket, click Advanced.

6. Choose Yes for Encrypt log files to use SSE-KMS and a Customer Master Key (CMK) for log file encryption.

7. Choose Yes for Create a new KMS key if you want to generate a new CMK, or select No to use an existing CMK encryption key in the region.

8. Save your changes to activate SSE-KMS encryption.
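The same change can be made from the CLI with `update-trail` and the `--kms-key-id` parameter. The trail name and key ARN below are placeholders, and the sketch prints the command instead of running it:

```shell
trail="ExampleTrail"                                          # placeholder trail name
key_arn="arn:aws:kms:us-east-1:123456789012:key/EXAMPLE-KEY"  # placeholder CMK ARN

# Print the command that switches the trail's log files to SSE-KMS;
# remove the leading `echo` to execute it.
echo aws cloudtrail update-trail \
  --name "$trail" \
  --kms-key-id "$key_arn"
```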

14. Enable CloudTrail Log File Validation

CloudTrail log file validation generates a digitally signed digest file containing a hash for each log written to Amazon S3. These digest files serve to verify whether a log file has been altered, deleted, or remains unchanged after CloudTrail delivers it. It is advisable to enable file validation for all trails, as it adds an extra layer of integrity checks to CloudTrail logs.

To verify and enable file validation in the AWS Console:

1. Log in to the AWS Management Console.

2. In the left navigation panel, navigate to Trails.

3. Under the Name column, select the trail name you wish to inspect.

4. In the S3 section, look for the Enable log file validation status.

  - If the status is set to No, the selected trail lacks log file integrity validation. To resolve this:

    - Click on the pencil icon next to S3 section to edit the trail bucket configuration.

    - Under S3 bucket, click Advanced and locate the Enable log file validation configuration status.

    - Choose Yes to activate log file validation, then click Save.
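The CLI offers the same switch, plus a command to verify past digests. The trail name, ARN, and start time below are placeholders; both commands are printed for review rather than executed:

```shell
# Print the command that enables log file validation on the trail;
# remove the leading `echo` to execute it.
echo aws cloudtrail update-trail \
  --name "ExampleTrail" \
  --enable-log-file-validation

# Once digest files have been delivered, the integrity of past logs
# can be checked with validate-logs (again printed, not executed).
echo aws cloudtrail validate-logs \
  --trail-arn "arn:aws:cloudtrail:us-east-1:123456789012:trail/ExampleTrail" \
  --start-time "2024-01-01T00:00:00Z"
```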

Explore more about AWS CloudTrail security best practices.

AWS Config

AWS Config offers a comprehensive perspective on the resources linked to your AWS account. This includes detailed insights into their configurations, interconnections, and the historical evolution of configurations and relationships over time.

15. Validate AWS Config Activation

The AWS Config service takes charge of configuration management for supported AWS resources in your account and furnishes you with corresponding log files. These records include configuration items (AWS resources), their interrelationships, and any alterations in configurations among resources.

It is strongly advised to activate AWS Config in all AWS Regions. The historical data captured by AWS Config, including configuration item history, facilitates security analysis, resource change tracking, and compliance audits.

To ascertain the status of configuration recorders and delivery channels created by the Config service in the selected region, do the following:

<pre class="codeWrap"><code>aws configservice --region REGION get-status</code></pre>

The output of the preceding command reveals the status of all AWS Config delivery channels and configuration recorders. If AWS Config is inactive, both the configuration recorders and delivery channels lists will be empty:

<pre class="codeWrap"><code>Configuration Recorders:
Delivery Channels:</code></pre>

Alternatively, if the service was previously active but is now deactivated, the status should be displayed as OFF:

<pre class="codeWrap"><code>Configuration Recorders:
name: default
recorder: OFF
Delivery Channels:
name: default
last stream delivery status: NOT_APPLICABLE
last history delivery status: SUCCESS
last snapshot delivery status: SUCCESS</code></pre>

To rectify this, enable AWS Config and configure it to record all resources:

1. Access the AWS Config console.

2. Choose the relevant Region to set up AWS Config.

3. If you are new to AWS Config, refer to the Getting Started section in the AWS Config Developer Guide.

4. Navigate to the Settings page from the menu and:

  - Choose Edit.

  - Under Resource types to record, select Record all resources supported in this region and Include global resources (e.g., AWS IAM resources).

  - For Data retention period, opt for the default or specify a custom retention period.

  - Regarding AWS Config role, either opt for Create AWS Config service-linked role or select Choose a role from your account and choose the desired role.

  - Specify the Amazon S3 bucket to use or create one, and optionally include a prefix.

  - For Amazon SNS topic, choose an existing Amazon SNS topic or create a new one.

5. Save your changes.
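The console steps above map onto three CLI calls: create a configuration recorder, create a delivery channel, and start recording. The role ARN and bucket name below are placeholders, and each command is printed for review rather than executed:

```shell
# 1. Recorder that records all supported resource types, including global ones.
echo aws configservice put-configuration-recorder \
  --configuration-recorder name=default,roleARN="arn:aws:iam::123456789012:role/config-role" \
  --recording-group allSupported=true,includeGlobalResourceTypes=true

# 2. Delivery channel that writes configuration history to an S3 bucket.
echo aws configservice put-delivery-channel \
  --delivery-channel name=default,s3BucketName="my-config-bucket"

# 3. Start the recorder; get-status should then report the recorder as ON.
echo aws configservice start-configuration-recorder \
  --configuration-recorder-name default
```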
