Centralise AWS Events with Parseable

Introduction

Keeping tabs on AWS activities like bucket operations, IAM role assumptions, unusual API spikes, and network traffic events is essential for maintaining security and operational efficiency. However, this typically involves stitching together multiple services such as CloudTrail, CloudWatch, Athena, Lambda functions, or even external SIEM tools, creating fragmented data silos, increased complexity, and higher costs.

This post introduces a streamlined, scalable solution: ingesting every AWS event directly into Parseable through a simplified, fully managed AWS pipeline. By integrating:

  • AWS CloudTrail to capture management, data, Insight, and network activity events,

  • Amazon EventBridge to route those events, and

  • Amazon Kinesis Data Firehose to stream them to Parseable's HTTP ingestion endpoint,

you can consolidate your AWS telemetry data into the Parseable Observability Platform. This setup eliminates unnecessary intermediaries, minimizes operational overhead, and drastically simplifies querying and analysis.

Our objective is straightforward: deliver real-time visibility and deep insights across your AWS environment, empowering you to quickly detect anomalies, troubleshoot incidents, and maintain robust security with minimal effort.

Prerequisites

Before you begin, ensure you have:

  • An active AWS account with services such as S3, EC2, and IAM running and generating events.

  • Parseable set up and running on your own infrastructure (VM, server, or even a laptop), with HTTPS enabled for secure data ingestion (Kinesis Data Firehose delivers only to HTTPS endpoints). Refer to the Parseable installation documentation to get it up and running.

Once these are in place, you’re ready to configure the integration and unlock unified AWS observability in Parseable’s UI.

Setting Up AWS CloudTrail for Full Visibility

Follow these steps to configure AWS CloudTrail to capture every important event across your AWS environment:

  1. Create a New Trail
  • Go to the AWS CloudTrail section in the AWS Console.

  • Click Create trail.

  • Set the Trail name to aws-events.

  2. Choose Log Events

When configuring log events, select all four types for complete coverage:

  1. Management events

    • Enable both Read and Write API activity.

    • These track all management operations across AWS services (like launching EC2 instances, creating IAM roles, updating S3 bucket policies, etc.).

  2. Data events

    • Resource type: Select S3.

    • Log selector template: Choose Log all events.

    • This ensures CloudTrail records every object-level operation (such as PUT, GET, DELETE) in your S3 buckets.

  3. Insight events

    • Insight types: Select both API call rate and API error rate.

    • These provide automatic anomaly detection for spikes in API activity or errors—crucial for early detection of security incidents or misconfigurations.

  4. Network activity events

    • Event source: Add both ec2.amazonaws.com and s3.amazonaws.com.

    • Log selector template: Set to Log all events for each.

    • This logs network-level activity for EC2 and S3, giving you granular insight into connections and data flows.

  3. Additional (Optional) Settings
  • Multi-region trail: Enable this if you want to capture events from all AWS regions.

  • S3 bucket: Choose or create a bucket to store the raw CloudTrail logs for archiving or compliance.

  • Encryption: Optionally, use AWS KMS for encrypted log storage.

  • SNS notifications: Enable if you want notifications when new log files are delivered.

  4. Review and Create

Review your selections to make sure all event types are covered, then click Create trail to start capturing comprehensive telemetry.
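
If you manage AWS by script rather than through the console, the same trail can be set up with the AWS SDK. Below is a minimal boto3 sketch covering the management, data, and Insight event settings from the steps above; the S3 bucket name is a placeholder, and network activity events are left to the console configuration described earlier.

import boto3

cloudtrail = boto3.client("cloudtrail")

# Create the trail; the bucket name is a placeholder for your archive bucket
cloudtrail.create_trail(
    Name="aws-events",
    S3BucketName="my-cloudtrail-archive-bucket",
    IsMultiRegionTrail=True,
)

# Record read and write management events plus object-level S3 data events
cloudtrail.put_event_selectors(
    TrailName="aws-events",
    EventSelectors=[
        {
            "ReadWriteType": "All",
            "IncludeManagementEvents": True,
            "DataResources": [
                {"Type": "AWS::S3::Object", "Values": ["arn:aws:s3"]}
            ],
        }
    ],
)

# Enable CloudTrail Insights for API call rate and API error rate anomalies
cloudtrail.put_insight_selectors(
    TrailName="aws-events",
    InsightSelectors=[
        {"InsightType": "ApiCallRateInsight"},
        {"InsightType": "ApiErrorRateInsight"},
    ],
)

# Start delivering events
cloudtrail.start_logging(Name="aws-events")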

Configuring Amazon Kinesis Data Firehose to Stream Events to Parseable

Follow these steps to set up Amazon Kinesis Data Firehose to stream AWS events directly into Parseable:

  1. Create a Delivery Stream

Navigate to Amazon Kinesis Data Firehose in the AWS Console.

  • Click Create Delivery Stream.

  • Provide a descriptive name, for example, Send_to_Parseable.

  • Set the source as Direct PUT since events are pushed directly from AWS EventBridge.

  2. Configure the HTTP Endpoint Destination

Under Destination settings, select HTTP Endpoint and provide the following details:

  • HTTP Endpoint Name: Parseable

  • HTTP Endpoint URL: https://ingestor.demo.parseable.com/api/v1/ingest

  • HTTP Endpoint Parameters (headers):

    • Authorization: Basic YWRtaW46YWRtaW4= (this is admin:admin base64-encoded; replace it with secure credentials for production.)

    • X-P-Stream: awsevents

    • X-P-Log-Source: kinesis

    • Content-Type: application/json

  3. Configure Backup Settings

To handle potential ingestion failures gracefully:

  • Enable Source record backup in Amazon S3.

  • Choose to back up Failed data only.

  • Select or create an S3 bucket dedicated to storing these failed records.

  • Review settings and create the delivery stream.
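
The delivery stream can also be created programmatically. The boto3 sketch below mirrors the console settings above; the IAM role ARNs and the backup bucket ARN are placeholders, and the demo Authorization value should be swapped for your own credentials.

import boto3

firehose = boto3.client("firehose")

firehose.create_delivery_stream(
    DeliveryStreamName="Send_to_Parseable",
    DeliveryStreamType="DirectPut",
    HttpEndpointDestinationConfiguration={
        "EndpointConfiguration": {
            "Name": "Parseable",
            "Url": "https://ingestor.demo.parseable.com/api/v1/ingest",
        },
        # These map to the HTTP Endpoint Parameters shown in the console
        "RequestConfiguration": {
            "CommonAttributes": [
                {"AttributeName": "Authorization", "AttributeValue": "Basic YWRtaW46YWRtaW4="},
                {"AttributeName": "X-P-Stream", "AttributeValue": "awsevents"},
                {"AttributeName": "X-P-Log-Source", "AttributeValue": "kinesis"},
                {"AttributeName": "Content-Type", "AttributeValue": "application/json"},
            ]
        },
        # Back up only records that fail delivery; bucket and role ARNs are placeholders
        "S3BackupMode": "FailedDataOnly",
        "S3Configuration": {
            "RoleARN": "arn:aws:iam::123456789012:role/firehose-backup-role",
            "BucketARN": "arn:aws:s3:::my-firehose-failed-records",
        },
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-delivery-role",
    },
)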

Configuring Amazon EventBridge to Route AWS Events to Kinesis Data Firehose

Follow these steps to set up Amazon EventBridge to capture events from AWS services via CloudTrail and forward them directly to your Kinesis Data Firehose delivery stream:

  1. Create an EventBridge Rule

Open the AWS Console and navigate to Amazon EventBridge. Under Rules, click on Create rule.

Rule details:

  • Name: SubscribeAllEvents

  • Event bus: Select default.

  • Rule type: Select Rule with an event pattern.

  2. Define the Event Pattern

Under Event source, select Other, and choose the Custom pattern (JSON editor) option. Paste the following event pattern:

{
  "source": [
    "aws.cloudtrail",
    "aws.s3",
    "aws.ec2",
    "aws.ssm",
    "aws.sts",
    "aws.cloudformation",
    "aws.iam",
    "aws.kms"
  ]
}

This pattern ensures EventBridge captures events originating from these key AWS services.

  3. Set the Target to Firehose

Under Select target(s):

  • Target types: Choose AWS service.

  • Select a target: Select Firehose stream.

  • Stream: Choose the previously created Firehose stream Send_to_Parseable.

  4. Finalize and Test
  • Review your settings and create the rule.

  • Validate by generating events in AWS (such as creating S3 buckets, modifying IAM roles, etc.) and ensuring these events appear in your Parseable stream.

This setup efficiently routes AWS events directly from EventBridge through Kinesis Data Firehose to Parseable, enabling streamlined monitoring and analytics.
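
If you prefer code over the console here as well, a boto3 sketch of the same rule and target looks roughly like this; the region, account ID, and the IAM role that allows EventBridge to write to the Firehose stream are placeholders.

import json
import boto3

events = boto3.client("events")

# Rule on the default bus matching events from the services listed above
events.put_rule(
    Name="SubscribeAllEvents",
    EventBusName="default",
    EventPattern=json.dumps({
        "source": [
            "aws.cloudtrail", "aws.s3", "aws.ec2", "aws.ssm",
            "aws.sts", "aws.cloudformation", "aws.iam", "aws.kms",
        ]
    }),
)

# Send matched events to the Firehose delivery stream created earlier
events.put_targets(
    Rule="SubscribeAllEvents",
    EventBusName="default",
    Targets=[
        {
            "Id": "send-to-parseable",
            "Arn": "arn:aws:firehose:us-east-1:123456789012:deliverystream/Send_to_Parseable",
            "RoleArn": "arn:aws:iam::123456789012:role/eventbridge-to-firehose-role",
        }
    ],
)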

Verifying Data Ingestion in Parseable

Now that your AWS configuration is complete and events are streaming from CloudTrail through EventBridge and Firehose, it’s time to confirm that Parseable is receiving and indexing your data.

  1. Check the Dataset on the Datasets Page
  • Navigate to the Datasets page in the Parseable UI.

  • Look for the dataset named awsevents.

  • You should see activity indicators or record counts updating as new AWS events are ingested.

  • If the dataset appears and is growing, your Firehose-to-Parseable pipeline is working!

  2. Explore Incoming Events
  • Head over to the Explore page in Parseable.

  • Select the awsevents dataset from the dropdown.

  • You should see live records with fields such as eventSource, eventName, userAgent, and requestParameters, with the log format (kinesis) detected automatically through schema detection.

  • Use quick filters or run SQL queries to slice and dice the data: search for specific event types, recent API calls, or failed operations.

  3. Filtering S3 PUT Operations in Parseable

With Parseable, you can quickly zero in on S3 PutObject operations using the Explore page filters or with SQL queries.

Using Field Filters:

  • In the Explore view for the awsevents dataset, add these filters:

    • detail_eventSource = s3.amazonaws.com

    • detail_eventName = PutObject

Saved Filters:

  • You can save this filter combination for one-click access in the future; for example, save it as “put calls in S3” and it will appear under your Saved Filters panel.
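
You can also confirm ingestion from a script. Parseable exposes an HTTP query API, so a quick check with Python's requests library looks roughly like the sketch below; the querier URL and the demo admin:admin credentials are assumptions based on the demo endpoint used earlier, so substitute the values for your own deployment.

import requests

# Placeholder querier URL and demo credentials; replace with your deployment's values
PARSEABLE_QUERY_URL = "https://demo.parseable.com/api/v1/query"

resp = requests.post(
    PARSEABLE_QUERY_URL,
    auth=("admin", "admin"),
    json={
        "query": "SELECT COUNT(*) AS event_count FROM awsevents",
        # Relative ranges are convenient if your Parseable version accepts them;
        # otherwise use RFC 3339 timestamps here
        "startTime": "10m",
        "endTime": "now",
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())

If the pipeline is healthy, the returned count should grow as new AWS events arrive.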

Analyzing AWS Events in Parseable SQL Editor

Once your AWS events are ingested and visible in the awsevents dataset, unlock advanced analysis using Parseable’s SQL Editor. This allows you to group, count, and segment events using simple SQL queries, ideal for audits, troubleshooting, and trend monitoring.

  1. Access the SQL Editor
  • Go to the SQL Editor tab in Parseable’s sidebar.

  • Select the awsevents dataset as your query source.

Example Queries

  1. S3 Put Operations

Find out which S3 buckets are receiving the most PutObject operations:

SELECT
  "detail_requestParameters_bucketName" as bucket_name,
  COUNT(*) as put_count
FROM
  awsevents
WHERE
  "detail_eventSource" = 's3.amazonaws.com'
  AND "detail_eventName" = 'PutObject'
GROUP BY
  bucket_name
ORDER BY
  put_count DESC;

  2. Distribution of S3 API Calls

See which S3 API actions are called most frequently:

SELECT
  "detail_eventName" as s3_operation,
  COUNT(*) as operation_count
FROM
  awsevents
WHERE
  "detail_eventSource" = 's3.amazonaws.com'
GROUP BY
  s3_operation
ORDER BY
  operation_count DESC;

  3. Events Count by Source

Analyze the total number of events coming from each AWS service:

SELECT
  source,
  COUNT(*) as event_count
FROM
  awsevents
GROUP BY
  source;

  4. S3 Traffic by User Agent

Identify which user agents are generating S3 traffic:

SELECT
  "detail_userAgent" as user_agent,
  COUNT(*) as request_count
FROM
  awsevents
WHERE
  "detail_eventSource" = 's3.amazonaws.com'
GROUP BY
  user_agent;

  5. IAM Role Usage Count

See which IAM roles are most active in your environment:

SELECT
  SUBSTRING_INDEX("detail_userIdentity_sessionContext_sessionIssuer_userName", '/', -1) as role_name,
  COUNT(*) as usage_count
FROM
  awsevents
WHERE
  "detail_userIdentity_sessionContext_sessionIssuer_userName" IS NOT NULL
GROUP BY
  role_name

  6. API Failed Operations

Identify failed API calls, the affected resource, and the error type:

SELECT
  "detail_eventSource" as service,
  "detail_eventName" as operation,
  "detail_errorCode" as error_code,
  "detail_resources_ARN" as resource,
  COUNT(*) as error_count
FROM awsevents
WHERE "detail_errorCode" IS NOT NULL
AND "detail_resources_ARN" IS NOT NULL
GROUP BY service, operation, error_code, resource

  7. SSM Activity

Break down all operations happening via AWS Systems Manager (SSM):

SELECT
  "detail_eventName" as ssm_operation,
  COUNT(*) as operation_count
FROM
  awsevents
WHERE
  "detail_eventSource" = 'ssm.amazonaws.com'
GROUP BY
  ssm_operation
ORDER BY
  operation_count DESC;

Power Up Your Analysis with Saved SQLs

Parseable makes it easy to save your most-used SQL queries for instant reuse and collaboration. Whether you’re troubleshooting incidents or reporting to stakeholders, Saved SQLs mean less typing and more insight, right when you need it.

Here are some practical examples you can save and run with a single click:

  1. List of Errors

Get a chronological list of all error events, including key context for each:

SELECT 
  time, 
  "detail_eventName", 
  "detail_eventSource", 
  "detail_errorCode", 
  "detail_errorMessage", 
  "detail_resources_ARN"
FROM awsevents
WHERE "detail_errorCode" IS NOT NULL
ORDER BY time DESC

  2. Error Count for Each ARN

Group and count all errors per resource (ARN), event, and error type:

SELECT 
  "detail_eventName", 
  "detail_errorCode", 
  "detail_errorMessage", 
  "detail_resources_ARN", 
  COUNT(*)
FROM awsevents
WHERE "detail_errorCode" IS NOT NULL
GROUP BY 
  "detail_eventName", 
  "detail_errorCode", 
  "detail_errorMessage", 
  "detail_resources_ARN"
  1. API Calls per Service (Minute-by-Minute)

See a high-resolution timeline of API activity across AWS services:

SELECT 
  date_trunc('minute', p_timestamp) as timestamp, 
  "detail_eventSource", 
  COUNT(*) as call_count
FROM awsevents
GROUP BY 1, 2
ORDER BY 1 DESC, 3 DESC

With Saved SQLs, you don’t have to remember or rewrite your go-to queries. Just click Apply and instantly surface answers to your most important AWS telemetry questions.

💡 Pro tip: Share Saved SQLs with your team so everyone can leverage proven queries for faster troubleshooting and operational reviews.

Conclusion

By streaming all CloudTrail records through EventBridge and Kinesis Data Firehose straight into Parseable, you’ve removed the usual maze of Lambdas, queues, and ad-hoc buckets. Every AWS event now lives in a single column-first store on S3 that is cost-effective to maintain and fast to query.

Better yet, everything happens in one UI:

  • Datasets Page confirms ingestion health in real time.

  • Explore Page provides point-and-click filters, zooming from millions of events to a single API call in seconds.

  • SQL Editor offers full ANSI SQL for joins, window functions, and aggregations; no exporting to Athena or Redshift required.

  • Saved Filters & Saved SQLs let you pin your favourite pivots (errors by ARN, S3 PUT hot spots, IAM role usage, etc.) so the whole team can rerun them with a single click.

No more juggling six browser tabs just to answer “who touched that bucket?” Parseable’s unified workspace keeps logs, traces, metrics, and security events side-by-side, ready for dashboards, alerts, or spur-of-the-moment investigations.

What’s Next?

Ready to go further with Parseable? Here’s how you can take the next step:

  • Book a Demo: Have questions, unique requirements, or want a guided walkthrough? Schedule a call with our team for a personalized demo and deep dive.

  • Purchase on AWS Marketplace: Deploy Parseable directly from the AWS Marketplace for seamless billing and rapid onboarding in your cloud account.

  • Join Our Community: Share your experience, get best practices, and stay up to date by joining our Slack or emailing us at hello@parseable.com.

Unlock true control and speed for your AWS observability. We can’t wait to see what you build next!

Written by Debabrata Panigrahi