Protecting PII Data in the Cloud: A Deep Dive into Encryption


Introduction
In our previous post on PII data security, we laid the groundwork for several techniques. Now, we'll dive deep into the most critical aspect of data protection: Encryption. Think of it as the digital lock and key that keeps your sensitive information safe from prying eyes.
In the cloud, data is constantly moving and resting, making it vulnerable if not properly secured. That's where two essential pillars of a robust security strategy come into play: Encryption at Rest and Encryption in Transit. This article, Part 2 of our series, will focus exclusively on these two crucial forms of encryption. We'll explore how they work and why they are indispensable for protecting PII, using clear, practical examples from the AWS ecosystem. Let's unlock the secrets of secure data.
Encryption at Rest: Securing Data When It's Still
Imagine your data as a valuable treasure. Encryption at rest is like putting that treasure in a super-secure vault. It means your data is encrypted when it's stored in any persistent location ‒ whether that's on a hard drive, in a database, or in cloud storage services. The primary goal here is to protect your data from unauthorized access if someone manages to get their hands on the storage medium or bypasses other security controls.
How Encryption at Rest Works
When data is encrypted at rest, it's transformed into an unreadable format (ciphertext) before it's written to storage. This transformation uses an encryption algorithm and a key. When the data needs to be accessed, it's decrypted using the same key, making it readable again. This process is often transparent to the applications and users accessing the data, meaning they don't need to encrypt or decrypt it themselves explicitly; the underlying service handles it. The beauty of encryption at rest is that even if a malicious actor gains physical access to your storage (e.g., steals a hard drive, or somehow accesses the underlying cloud infrastructure where your data resides), they won't be able to make sense of your data without the encryption key. It's a critical layer of defense against data breaches and unauthorized disclosure.
Encryption at Rest in AWS: Practical Examples
AWS, like other major cloud providers, offers robust and often transparent encryption at rest capabilities across its services. This makes it relatively straightforward to implement this fundamental security measure.
S3 Encryption
Amazon S3 (Simple Storage Service) is a highly scalable object storage service, often used for data lakes, backups, and static website hosting. Protecting data in S3 is paramount. AWS offers several options for S3 encryption at rest:
Server-Side Encryption with S3-Managed Keys (SSE-S3): This option is the simplest. AWS handles the encryption and decryption of your objects and manages the encryption keys for you. When you upload an object, S3 encrypts it before saving it to disk, and decrypts it when you download it. Amazon S3 automatically enables server-side encryption with Amazon S3-managed keys (SSE-S3) for new object uploads.
How it works:
Every object is encrypted with a unique data key using the strong AES-256 block cipher.
That data key is encrypted with a master key regularly rotated and managed entirely by AWS.
Both encryption and decryption are fully transparent—you just enable it.
Enablement:
It is the default encryption configuration for S3 buckets, so all new objects get encrypted automatically.
Or, you can request it explicitly per object upload via the header x-amz-server-side-encryption: AES256.
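To make this concrete, here is a minimal boto3 sketch (the bucket and object names are placeholders): it sets SSE-S3 as the bucket default and also shows the equivalent per-object request.

import boto3

s3 = boto3.client("s3")

# Set SSE-S3 as the bucket default so every new object is encrypted automatically.
s3.put_bucket_encryption(
    Bucket="example-pii-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)

# Or request SSE-S3 explicitly on a single upload; this is the SDK equivalent of the
# x-amz-server-side-encryption: AES256 header.
s3.put_object(
    Bucket="example-pii-bucket",
    Key="customers/record.json",
    Body=b'{"name": "redacted"}',
    ServerSideEncryption="AES256",
)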
Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS): This option gives you more control over your encryption keys. AWS Key Management Service (KMS) is a managed service that makes it easy to create and control the encryption keys used to encrypt your data. With SSE-KMS, S3 uses a KMS key to encrypt your objects. You can define and manage these keys, set permissions on who can use them, and audit their usage. This is a popular choice for PII as it provides a clear audit trail of key usage and greater control over key management.
How it works:
Amazon S3 requests a new data key from AWS KMS, asking for both a plaintext version and a copy encrypted with the specified KMS key.
AWS KMS generates the data key, encrypts it with the chosen KMS key, and returns both versions—the plaintext key and the encrypted key—to S3.
Amazon S3 uses the plaintext key to encrypt your data, then discards it from memory as quickly as possible.
Amazon S3 stores the encrypted data key alongside the encrypted object as metadata.
Enablement:
You can enable this option via server-side encryption in the S3 bucket properties.
Or, add the header x-amz-server-side-encryption: aws:kms to individual upload requests.
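As a rough illustration, the boto3 call below uploads an object under SSE-KMS; the bucket name and KMS key alias are placeholders you would replace with your own.

import boto3

s3 = boto3.client("s3")

# Upload an object encrypted with a customer-managed KMS key (the SDK equivalent of the
# x-amz-server-side-encryption: aws:kms header plus the key-id header).
s3.put_object(
    Bucket="example-pii-bucket",
    Key="customers/record.json",
    Body=b'{"name": "redacted"}',
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="alias/my-pii-key",  # placeholder alias; a key ARN or key ID also works
)

The same combination can be set as the bucket default via put_bucket_encryption, so that every new object is protected with your KMS key without per-request headers.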
Server-Side Encryption with Customer-Provided Keys (SSE-C): For those who want to manage their own encryption keys entirely, SSE-C allows you to provide your own encryption key as part of your request. S3 uses this key to encrypt your data as it writes it to disks and decrypts it when you access it. S3 does not store your key, ensuring you have full control. However, this also means you are responsible for managing and protecting your keys.
How it works:
You (the customer) provide your own 256-bit encryption key with each S3 request.
S3 uses this key to perform AES-256 encryption on the object before writing it to disk.
The key itself is not stored in AWS. Instead, S3 keeps only a salted HMAC for validation.
Enablement:
You must use HTTPS.
You supply these headers on each request (and a bucket policy can be used to require them):
x-amz-server-side-encryption-customer-algorithm - specifies the encryption algorithm (AES256).
x-amz-server-side-encryption-customer-key - specifies the 256-bit, base64-encoded encryption key.
x-amz-server-side-encryption-customer-key-md5 - specifies the base64-encoded 128-bit MD5 digest of the encryption key.
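For illustration only, here is a boto3 sketch of SSE-C, assuming you already have a secure way to generate and store the 256-bit key (os.urandom stands in for that here); boto3 base64-encodes the key and computes the MD5 digest header for you.

import os
import boto3

s3 = boto3.client("s3")  # SSE-C requests must use HTTPS, which boto3 does by default

# In a real system this key would come from your own key-management process, not os.urandom.
customer_key = os.urandom(32)  # 256-bit key that you alone store and protect

s3.put_object(
    Bucket="example-pii-bucket",
    Key="customers/record.json",
    Body=b'{"name": "redacted"}',
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,  # boto3 adds the base64 key and MD5 digest headers
)

# The same key must be supplied again to read the object back; S3 cannot decrypt without it.
obj = s3.get_object(
    Bucket="example-pii-bucket",
    Key="customers/record.json",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)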
Dual-layer server-side encryption with AWS KMS keys (DSSE-KMS): With dual-layer server-side encryption using AWS KMS keys (DSSE-KMS), Amazon S3 encrypts each object twice during upload. This approach makes it easier to meet compliance requirements that mandate multilayer encryption, while still giving you full control over the encryption keys through AWS KMS.
How it works:
AWS KMS keys must be in the same Region as the bucket.
S3 encrypts each object twice, applying two independent layers of 256-bit Advanced Encryption Standard in Galois/Counter Mode (AES-GCM), each layer using a distinct data encryption key.
The data keys are generated and protected through AWS KMS using the KMS key you specify, ensuring independent cryptographic protection for each layer.
Enablement:
This option can be enabled at the object level or as the bucket default via the S3 bucket properties.
Header: specify the x-amz-server-side-encryption header with a value of aws:kms:dsse.
Optionally, use these headers as well:
x-amz-server-side-encryption-aws-kms-key-id: SSEKMSKeyId
x-amz-server-side-encryption-context: SSEKMSEncryptionContext
Bucket policies can enforce that uploads require DSSE-KMS.
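A minimal boto3 sketch of a DSSE-KMS upload is shown below (placeholder bucket and key alias; it assumes a recent SDK version that supports the aws:kms:dsse value).

import boto3

s3 = boto3.client("s3")

# Upload an object with two independent layers of KMS-based encryption (DSSE-KMS).
s3.put_object(
    Bucket="example-pii-bucket",
    Key="customers/record.json",
    Body=b'{"name": "redacted"}',
    ServerSideEncryption="aws:kms:dsse",
    SSEKMSKeyId="alias/my-pii-key",  # placeholder; must be in the same Region as the bucket
)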
RDS/Redshift Encryption
Relational databases (like Amazon RDS for MySQL, PostgreSQL, SQL Server, Oracle, MariaDB) and data warehouses (like Amazon Redshift) also store vast amounts of sensitive data. AWS provides encryption at rest for these services, often integrated with AWS KMS.
Amazon RDS: When you enable encryption for an RDS DB instance, AWS RDS uses the industry-standard AES-256 encryption algorithm to encrypt your data on the server that hosts your DB instance.
How it works:
Encryption is managed transparently, meaning you can access your data as usual, and it's automatically encrypted before being written to storage.
RDS encrypts the underlying storage for a DB instance, its automated backups, read replicas, and snapshots. However, note that encryption must be configured at instance/cluster creation time (you can’t enable encryption on an existing unencrypted RDS instance without snapshot/restore or other workarounds).
The encryption process uses AWS Key Management Service (KMS) keys. You can use an AWS managed key or bring your own customer-managed key for greater control.
Temporary files and data in transit are not encrypted by this feature; for that, you need to enforce SSL/TLS connections.
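As a rough sketch (the identifiers, instance class, and key alias are placeholders), this is how encryption at rest would be requested at creation time with boto3:

import boto3

rds = boto3.client("rds")

# Encryption at rest must be chosen when the instance is created; it cannot simply be
# switched on later for an existing unencrypted instance.
rds.create_db_instance(
    DBInstanceIdentifier="pii-postgres",
    DBInstanceClass="db.t3.medium",
    Engine="postgres",
    MasterUsername="dbadmin",
    ManageMasterUserPassword=True,   # let RDS manage the password in Secrets Manager
    AllocatedStorage=100,
    StorageEncrypted=True,           # encrypts storage, backups, snapshots, and replicas
    KmsKeyId="alias/my-rds-cmk",     # omit to use the AWS managed key (aws/rds)
)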
Amazon Redshift: Redshift database encryption protects data at rest. When you enable encryption for a cluster, Redshift encrypts the data blocks and system metadata for the cluster.
How it works:
Redshift uses a four-tier key hierarchy for encryption: a master key, a cluster encryption key (CEK), a database encryption key (DEK), and data encryption keys.
It integrates with AWS KMS or a hardware security module (HSM) to manage the master key, providing a secure and auditable key management solution.
All data stored on the cluster, including backups (snapshots) in Amazon S3, is encrypted using the AES-256 algorithm.
Enabling encryption is a cluster-level setting that applies to all databases and user-created tables within that cluster.
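The same idea in boto3 for Redshift might look like the sketch below (identifiers and credentials are placeholders):

import boto3

redshift = boto3.client("redshift")

# Encryption is a cluster-level setting that covers all databases, tables, and snapshots.
redshift.create_cluster(
    ClusterIdentifier="pii-warehouse",
    NodeType="ra3.xlplus",
    NumberOfNodes=2,
    MasterUsername="dbadmin",
    MasterUserPassword="REPLACE_WITH_SECRET",  # placeholder; prefer managed credentials
    DBName="analytics",
    Encrypted=True,
    KmsKeyId="alias/my-redshift-cmk",          # omit to use the default managed key
)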
EBS Encryption
Amazon Elastic Block Store (EBS) provides persistent block storage volumes for use with Amazon EC2 instances. These volumes are essentially virtual hard drives attached to your virtual servers. If your EC2 instances process or store PII, encrypting their attached EBS volumes is a must.
How it works:
When you create an encrypted EBS volume, all data at rest within the volume is encrypted.
Data moving between the EC2 instance and the encrypted EBS volume is also automatically encrypted in transit.
All snapshots created from the encrypted volume, and any subsequent volumes created from those snapshots, are also automatically encrypted.
The encryption process uses AWS KMS and the AES-256 algorithm, making it transparent to the EC2 instance and applications. You do not need to modify your applications to handle EBS encryption.
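A small boto3 sketch (placeholder Region, Availability Zone, and key alias) that enables account-level default encryption and then creates an encrypted volume:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Opt the account (in this Region) into EBS encryption by default, so every new volume
# is encrypted even if the caller forgets to ask for it.
ec2.enable_ebs_encryption_by_default()

# Create an explicitly encrypted volume with a customer-managed key.
ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,                       # GiB
    VolumeType="gp3",
    Encrypted=True,
    KmsKeyId="alias/my-ebs-cmk",    # placeholder; omit to use the default aws/ebs key
)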
In essence, encryption at rest is your baseline defense. It's the first line of protection for your data when it's not actively being used or transmitted. While powerful, it's just one piece of the puzzle. Next, we'll explore how to protect data when it's in motion.
Encryption in Transit: Protecting Data on the Move
If encryption at rest is your data in a secure vault, then encryption in transit (or encryption in flight) is like having an armoured car transport that data. It refers to the protection of data as it moves from one point to another, across networks. This is crucial because data is often most vulnerable when it's being transmitted, as it can be intercepted or eavesdropped upon.
How Encryption in Transit Works
Encryption in transit typically involves cryptographic protocols that establish a secure, encrypted tunnel between two communicating parties. Before any data is sent, the client and server (or two services) perform a 'handshake' to agree on encryption algorithms and exchange cryptographic keys. Once this secure channel is established, all data transmitted through it is encrypted before sending and decrypted upon receipt. If an attacker intercepts the data, they only see scrambled, unreadable information. Common protocols used for encryption in transit include:
TLS (Transport Layer Security) / SSL (Secure Sockets Layer): These are the most widely used protocols for securing communication over computer networks. You see them in action every day when you browse websites with https:// in the URL. TLS/SSL encrypts the data exchanged between your browser and the website server.
VPN (Virtual Private Network): VPNs create a secure, encrypted tunnel over a public network (like the internet), allowing users to send and receive data as if their computing devices were directly connected to the private network.
IPsec (Internet Protocol Security): A suite of protocols that provides cryptographic security for IP networks. It can be used to create VPNs or secure direct communication between hosts.
Encryption in Transit in AWS: Practical Examples
AWS provides numerous mechanisms to ensure data is encrypted while in transit, covering various communication patterns within and outside the AWS ecosystem.
Encryption in Transit: AWS S3
The foundation of encryption in transit for Amazon S3 is HTTPS, which uses the Transport Layer Security (TLS) protocol (formerly SSL). Here's the process:
Client Initiates Connection: When you upload or download a file, your client (like the AWS Command Line Interface, an AWS SDK, or your web browser) initiates a connection to an S3 endpoint (e.g., https://my-bucket.s3.us-east-1.amazonaws.com).
TLS Handshake: The client and the S3 server perform a "TLS handshake." During this process, they:
Verify the identity of the S3 server using its SSL certificate.
Agree on a set of encryption algorithms (a "cipher suite") to use.
Securely exchange cryptographic keys that will be used for the session.
Encrypted Data Transfer: Once the handshake is complete, a secure, encrypted tunnel is established between your client and S3. All data and commands (like PUT, GET, and DELETE) sent through this tunnel are encrypted before they leave the client and are only decrypted after they arrive at the S3 server. The same process happens in reverse for downloads.
This entire process is transparent to you. As long as you are connecting to an https:// endpoint, the TLS encryption and decryption happen automatically.
How to Enforce Encryption in Transit:
To enforce a high-security posture, it is best practice to create a bucket policy that explicitly denies any requests that are not sent over secure communication.
Here is a standard S3 bucket policy to enforce this.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::your-bucket",
"arn:aws:s3:::your-bucket/*"
],
"Condition": {
"Bool": {
"aws:SecureTransport": "false"
}
}
}
]
}
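If you manage buckets programmatically, the policy above can be applied with boto3 as in this sketch (the bucket name and policy file path are placeholders):

import json
import boto3

# 'tls_only_policy.json' is assumed to contain the bucket policy shown above.
with open("tls_only_policy.json") as f:
    tls_only_policy = json.load(f)

boto3.client("s3").put_bucket_policy(
    Bucket="your-bucket",
    Policy=json.dumps(tls_only_policy),
)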
TLS/SSL for Web Traffic (ALB, CloudFront)
When users access your applications hosted on AWS, securing the communication between their browsers and your servers is paramount. AWS services make this straightforward:
Application Load Balancer (ALB): ALBs distribute incoming application traffic across multiple targets, such as EC2 instances.
How it works:
ALBs use an HTTPS listener to terminate the TLS connection from the client. This offloads the encryption and decryption workload from your backend application servers.
It uses a "security policy," which is a combination of TLS protocols and ciphers, to negotiate the secure connection with clients.
ALBs seamlessly integrate with AWS Certificate Manager (ACM) to provision, manage, and deploy public SSL/TLS certificates for free.
You can also re-encrypt traffic from the ALB to the backend targets for end-to-end encryption.
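To illustrate, here is a minimal boto3 sketch that adds an HTTPS listener to an existing ALB; the load balancer, target group, and certificate ARNs are placeholders you would replace with your own.

import boto3

elbv2 = boto3.client("elbv2")

# Terminate TLS at the ALB using an ACM certificate and a modern security policy.
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/my-alb/abc123",
    Protocol="HTTPS",
    Port=443,
    SslPolicy="ELBSecurityPolicy-TLS13-1-2-2021-06",
    Certificates=[{"CertificateArn": "arn:aws:acm:us-east-1:111122223333:certificate/example"}],
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/my-app/def456",
    }],
)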
Amazon CloudFront: CloudFront is a content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency. The practical steps for attaching a custom domain with a proper SSL/TLS certificate can be a major stumbling block. This deep dive addresses the common challenges and breaks down the process into two distinct parts: securing the connection from the Viewer to CloudFront and from CloudFront to your origin server.
Viewer-to-CloudFront Encryption
This is the most critical part for your users, ensuring that when they type https://www.your-app.com into their browser, they get a secure, trusted connection.
How it works:
To serve traffic over HTTPS from a custom domain, CloudFront uses an SSL/TLS certificate to prove its identity to the user's browser.
This certificate is managed through AWS Certificate Manager (ACM) and allows the browser and CloudFront to negotiate an encrypted tunnel.
The Golden Rule: Because CloudFront is a global service, it has a hard requirement that the certificate must be provisioned in the us-east-1 (N. Virginia) region to be deployed to all edge locations.
CloudFront uses a Security Policy (e.g., TLSv1.2_2021) to define the minimum TLS protocol version and the ciphers that it will negotiate with the viewer's browser.
Implementation:
Step 1: Request Certificate in ACM (us-east-1): In the ACM console, ensure you are in the N. Virginia region. Request a public certificate for your domain (e.g., *.your-app.com) and complete the DNS validation by adding the provided CNAME record to your DNS provider.
Step 2: Configure CloudFront Distribution: Edit your distribution and add your custom domain (e.g., www.your-app.com) to the Alternate Domain Names (CNAMEs) field. In the Custom SSL Certificate field, select the certificate you just created.
Step 3: Point DNS to CloudFront: In your DNS provider (e.g., Route 53), create a record pointing your custom domain to the CloudFront distribution's domain name (e.g., d1234abcd.cloudfront.net). Use an Alias A record if using Route 53, or a CNAME record for other providers.
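Step 1 can also be scripted; here is a rough boto3 sketch (the domain is a placeholder, and note the client is pinned to us-east-1 as CloudFront requires):

import boto3

# CloudFront only accepts ACM certificates issued in us-east-1, wherever your origins live.
acm = boto3.client("acm", region_name="us-east-1")

response = acm.request_certificate(
    DomainName="*.your-app.com",    # placeholder domain from the steps above
    ValidationMethod="DNS",
)
certificate_arn = response["CertificateArn"]

# describe_certificate returns the CNAME record you must create at your DNS provider
# to complete validation before attaching the certificate to the distribution.
validation_details = acm.describe_certificate(CertificateArn=certificate_arn)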
CloudFront-to-Origin Encryption (End-to-End Security)
For Custom Origins (ALB, EC2, etc.)
How it works:
When connecting to a custom origin, CloudFront acts like a browser and expects to validate a trusted SSL/TLS certificate installed on your origin server.
This ensures that data is encrypted not only from the user to the AWS edge but also from the edge to your application's front door.
Implementation:
In your CloudFront distribution's origin settings, set the Origin Protocol Policy to "HTTPS Only."
Ensure a valid certificate from a trusted Certificate Authority (like one provisioned by ACM on your ALB) is installed and configured on your origin server.
For S3 Bucket Origins
How it works:
CloudFront-to-S3 traffic can be encrypted with HTTPS. To ensure end-to-end encryption, configure CloudFront to use HTTPS for the origin (Origin Protocol Policy = HTTPS Only) and use OAC so that CloudFront signs its requests to S3. Note that although CloudFront-S3 traffic travels over AWS network paths, it still relies on HTTPS for cryptographic protection, so do not rely on "private network" semantics alone.
To ensure the S3 bucket is not publicly accessible, CloudFront uses Origin Access Control (OAC): CloudFront signs its origin requests as the CloudFront service principal, and a bucket policy grants read access to that principal alone (scoped to your distribution), so only CloudFront can retrieve the bucket's contents.
Implementation:
In your CloudFront distribution's origin settings, choose your S3 bucket.
For "Origin Access," select "Origin access control settings (recommended)" and follow the prompts to create the control setting. CloudFront will provide a bucket policy that you must copy and apply to your S3 bucket's permissions.
Database Connections
When your applications connect to databases like Amazon RDS or Redshift, you should always enforce SSL/TLS encryption for these connections.
Amazon RDS:
How it works:
Amazon RDS creates an SSL/TLS certificate and installs it on the DB instance when the instance is provisioned.
You can download the AWS-provided root certificate bundle and use it in your client application to connect securely.
For most database engines, you can enforce SSL/TLS connections by setting a specific parameter in the DB instance's parameter group (e.g., rds.force_ssl=1 for PostgreSQL, or require_secure_transport=ON for MySQL and MariaDB). This will reject any non-encrypted connection attempts.
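On the client side, a PostgreSQL connection with psycopg2 that both encrypts traffic and verifies the server certificate might look like this rough sketch (the endpoint, credentials, and bundle path are placeholders; the certificate bundle is the AWS-provided download mentioned above):

import psycopg2

# Assumes the RDS certificate bundle (e.g. global-bundle.pem) has been downloaded locally.
conn = psycopg2.connect(
    host="pii-postgres.abc123xyz.us-east-1.rds.amazonaws.com",  # placeholder endpoint
    dbname="appdb",
    user="app_user",
    password="REPLACE_WITH_SECRET",
    sslmode="verify-full",        # encrypt the connection and verify the server certificate
    sslrootcert="global-bundle.pem",
)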
Amazon Redshift:
How it works:
To encrypt data in transit, Amazon Redshift uses SSL encryption for connections between your client and your Redshift cluster.
By default, cluster databases accept both SSL and non-SSL connections.
You can configure your Redshift cluster to require an SSL connection by setting the require_ssl parameter to true in the cluster's parameter group.
Client tools like JDBC or ODBC drivers must be configured with specific SSL parameters to establish a secure connection.
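For example, with the AWS-provided redshift_connector driver for Python, a verified SSL connection could be configured roughly as follows (the endpoint and credentials are placeholders):

import redshift_connector

# The driver negotiates SSL by default; ssl and sslmode are spelled out here for clarity.
conn = redshift_connector.connect(
    host="pii-warehouse.abc123xyz.us-east-1.redshift.amazonaws.com",  # placeholder endpoint
    database="analytics",
    user="dbadmin",
    password="REPLACE_WITH_SECRET",
    ssl=True,
    sslmode="verify-ca",   # verify the cluster certificate against the bundled Amazon CA
)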
VPN and Direct Connect
For secure connectivity between your on-premises networks and your AWS Virtual Private Clouds (VPCs), AWS offers dedicated solutions that encrypt data in transit:
AWS Site-to-Site VPN:
How it works:
This service creates a secure connection between your data center or branch office and your AWS cloud resources.
It establishes two encrypted tunnels over the public internet using the industry-standard IPsec protocol. All data traffic flowing through these tunnels is automatically encrypted, ensuring confidentiality and integrity.
AWS Direct Connect:
How it works:
Direct Connect provides a dedicated, private network connection between your premises and AWS. While this connection is private, it is not inherently encrypted.
To encrypt data in transit over this private link, you must layer a cryptographic protocol on top. The common method is to establish an AWS Site-to-Site VPN over your Direct Connect connection, creating an IPsec-encrypted tunnel that runs through your private, high-bandwidth link.
Inter-Service Communication
Within the AWS ecosystem, many services communicate with each other. AWS often handles encryption in transit for these communications transparently, but it's good to be aware of how it works:
VPC Endpoints: When your EC2 instances or other services in a VPC need to access AWS services (like S3, DynamoDB, or SQS) without traversing the public internet, you can use VPC Endpoints. These endpoints ensure that traffic stays within the AWS network, and communication is often encrypted using TLS.
AWS PrivateLink: This technology allows you to securely publish your services to other VPCs or on-premises networks without exposing them to the public internet. PrivateLink uses network load balancers and VPC endpoints to establish private, secure connections, with traffic encrypted in transit.
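As a small illustration of the VPC endpoint pattern described above, a gateway endpoint for S3 can be created with boto3 (the VPC and route table IDs are placeholders):

import boto3

ec2 = boto3.client("ec2")

# A gateway endpoint keeps S3 traffic from this VPC on the AWS network
# rather than the public internet.
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",             # placeholder VPC ID
    ServiceName="com.amazonaws.us-east-1.s3",
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0123456789abcdef0"],   # placeholder route table ID
)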
Encryption in transit is vital for protecting data as it moves across potentially untrusted networks. Combined with encryption at rest, it forms a comprehensive security posture, safeguarding your data whether it's stationary or in motion.
Securing Your AI — Encryption in Amazon Bedrock
In the age of generative AI, the data flowing to and from large language models (LLMs) is a new frontier for security. This includes sensitive prompts, proprietary datasets for fine-tuning, and the model outputs themselves. Amazon Bedrock, as a managed service, is built with a security-first mindset and provides granular control over the encryption of your AI workloads.
Let's break down how Bedrock protects your data, both in transit and at rest, with a focus on how you can implement and enforce these controls.
Encryption in Transit: Secure by Default
This is straightforward and non-negotiable. All communication with the Amazon Bedrock service API, whether from your applications, the AWS SDK, or the CLI, occurs exclusively over an HTTPS connection using Transport Layer Security (TLS).
How it works: Data is encrypted before leaving your client and is only decrypted upon arrival at the Bedrock service endpoint. There are no unencrypted endpoints available.
Your Action: None required. This protection is automatic and cannot be disabled, ensuring all your prompts and model interactions are secure as they travel over the network.
Encryption at Rest: From Managed Protection to Granular Control
When Bedrock stores your data, it is always encrypted. You have two options for managing the encryption keys, allowing you to choose between simplicity and fine-grained control.
AWS-Owned Keys (Default)
By default, all data stored by Bedrock is encrypted at rest using keys that are owned, managed, and controlled entirely by AWS.
How it works: This is a fully managed solution. AWS handles key creation, rotation, and access policies for you.
Resources Covered: This applies to all data Bedrock might store, including model customization jobs, imported models, knowledge bases, agents, and more.
Customer-Managed Keys (CMK) with AWS KMS
For greater control over your security posture and to meet strict compliance or governance requirements, you can instruct Bedrock to use a customer-managed key (CMK) that you create and control in the AWS Key Management Service (KMS).
Why use a CMK?
Control: You control the key's lifecycle, including its creation, rotation schedule, and when to disable or delete it.
Permissions: You define exactly which users, roles, and services (like Bedrock) can use the key.
Auditability: All usage of your CMK is logged in AWS CloudTrail, providing a detailed audit trail of when and by whom your key was accessed.
Bedrock features that support CMKs:
Model Customization (Fine-Tuning) Jobs
Imported Models
Knowledge Bases
Agents for Bedrock
Flows for Bedrock
Implementation Guide: Using a Customer-Managed Key with Bedrock
When using CMKs, follow least privilege in key policies (restrict the allowed principals, and use the kms:ViaService condition to limit key use to the Bedrock service endpoint where appropriate), enable CloudTrail logging for KMS usage, and enable automatic key rotation for your CMKs or rotate them according to your policy.
To use your own key, you must configure two sets of permissions: the KMS key policy (to permit Bedrock to use the key) and the IAM policy (to give your users/roles permission to run the Bedrock job and use the key).
Step 1: Create a KMS Key and Attach a Key Policy
First, create a symmetric encryption CMK in AWS KMS. During creation, or by editing its policy later, you must add a statement that allows the Bedrock service to use this key.
Here is a sample KMS key policy statement. You would add this to the existing statements in your key's policy document. In the Principal field, list under the AWS subfield the IAM roles or accounts that should be allowed to encrypt and decrypt with the key.
{
"Sid": "PermissionsEncryptDecryptModel",
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::${account-id}:role/${role}"
]
},
"Action": [
"kms:Decrypt",
"kms:GenerateDataKey",
"kms:DescribeKey",
"kms:CreateGrant"
],
"Resource": "*",
"Condition": {
"StringLike": {
"kms:ViaService": [
"bedrock.${region}.amazonaws.com"
]
}
}
}
Step 2: Attach an IAM Policy to the User or Role
The user or role that will start the Bedrock job (e.g., the data scientist running a fine-tuning job) needs permission to run the job and to use the KMS key.
Here is a sample IAM Policy for a user who needs to run a model customization job with a CMK:
Adjust the resource and principal ARNs, along with the permitted actions, to match your own environment and least-privilege practices.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Bedrock permissions for model customization",
"Effect": "Allow",
"Action": [
"bedrock:CreateModelCustomizationJob"
],
"Resource": "*"
},
{
"Sid": "KMS permissions for model customization",
"Effect": "Allow",
"Action": [
"kms:Encrypt",
"kms:Decrypt",
"kms:ReEncrypt*",
"kms:GenerateDataKey*",
"kms:DescribeKey",
"kms:CreateGrant",
"kms:ListGrants",
"kms:RevokeGrant"
],
"Resource": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
}
]
}
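With the key policy and IAM policy in place, the CMK is referenced when the job is started. A rough boto3 sketch is shown below; the role ARN, base model, S3 URIs, and hyperparameters are placeholders, while the KMS key ARN matches the one used in the IAM policy above.

import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

bedrock.create_model_customization_job(
    jobName="pii-finetune-job",
    customModelName="pii-custom-model",
    roleArn="arn:aws:iam::111122223333:role/BedrockCustomizationRole",  # role carrying the IAM policy above
    baseModelIdentifier="amazon.titan-text-express-v1",                 # placeholder base model
    trainingDataConfig={"s3Uri": "s3://example-training-bucket/train.jsonl"},
    outputDataConfig={"s3Uri": "s3://example-training-bucket/output/"},
    hyperParameters={"epochCount": "2"},                                # placeholder tuning values
    # The resulting custom model is encrypted at rest with your customer-managed key:
    customModelKmsKeyId="arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
)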
Conclusion: A Layered Defense
In this deep dive, we've explored the two foundational pillars of encryption in the cloud: Encryption at Rest and Encryption in Transit. We've seen how they work to protect your data when it's stationary in storage and when it's actively moving across networks. From S3 buckets to database connections, and from web traffic to inter-service communication, AWS provides a comprehensive suite of tools and services to implement these crucial security measures. Remember, a robust cloud security posture isn't about choosing one type of encryption over another; it's about implementing a layered defense. Combining encryption at rest with encryption in transit ensures that your data is protected throughout its lifecycle, significantly reducing the risk of unauthorized access and data breaches. As data engineers, understanding and effectively implementing these concepts is non-negotiable for building secure and compliant data architectures.
What's Next?
While encryption at rest and in transit provides a strong foundation, the world of data encryption is far more nuanced. In the next instalment of this series, we'll delve into more specialized and intriguing encryption techniques that offer unique capabilities for PII protection:
Deterministic Encryption: Where the same input always yields the same encrypted output, which is crucial for maintaining referential consistency and enabling joins on encrypted data.
Probabilistic Encryption: This generates different ciphertexts for the same plaintext, enhancing security by obscuring patterns.
Homomorphic Encryption: This cutting-edge technique allows for the computation of encrypted data without ever decrypting it.
Stay tuned as we continue our journey into securing PII in the cloud!