Serverless Security Best Practices for AWS Lambda and Azure Functions
Serverless does not eliminate infrastructure security — it shifts the responsibility boundary. The cloud provider manages the underlying compute, OS patching, and network isolation. You still own the function code, the IAM execution role, the event source configuration, the dependency supply chain, and the data the function processes. This shift creates a distinct threat model that most teams are not prepared for: attackers target the function logic and IAM configuration rather than the underlying host, because the host is not accessible. Understanding this boundary is the prerequisite for effective serverless security.
The Serverless Threat Model: What Changes, What Does Not
What the cloud provider handles:
- OS patching and vulnerability management for the underlying compute
- Network isolation between tenants
- Physical infrastructure security
- Runtime availability and scaling
What you own:
- Function code and its vulnerabilities (injection, broken auth, logic flaws)
- IAM execution role permissions
- Event source configuration and validation
- Dependency security (npm packages, PyPI packages, Lambda Layers)
- Secrets and environment variable management
- Function output and data exfiltration paths
- Observability and detection
The serverless attack surface in practice:
- Overprivileged execution roles — the most common misconfiguration. A Lambda function with AdministratorAccess or broad S3, DynamoDB, or SSM permissions gives an attacker who achieves code execution a highly privileged identity to pivot from.
- Event injection — malicious input arrives via an event source (HTTP via API Gateway, S3 event, SQS message, SNS notification) and exploits insufficient input validation in the function.
- Vulnerable dependencies — serverless deployment packages bundle their dependencies. A vulnerable npm or PyPI package in a Lambda deployment package is exploitable without patching the OS.
- Secrets in environment variables — Lambda environment variables are visible in the function configuration in the AWS console and API. If an attacker compromises an IAM identity with lambda:GetFunctionConfiguration, they can read all environment variables, including secrets.
- Denial of wallet — serverless billing is per-invocation and per-GB-second. An attacker who can invoke a function at scale or trigger long-running executions can generate significant unexpected cost.
Function-Level Least Privilege IAM: The Most Important Control
By some industry estimates, around 70% of serverless functions run with overprivileged roles. This is the most consequential misconfiguration in serverless environments because the execution role defines the blast radius of any code execution.
Principle: one function, one role, minimum permissions.
Do not share IAM roles across functions. Each function should have a dedicated execution role with only the permissions it needs to do its job. A Lambda function that reads from one DynamoDB table should have dynamodb:GetItem, dynamodb:Query, dynamodb:Scan on that specific table ARN — not dynamodb:* on *.
AWS Lambda example — tight execution role:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:Query"
      ],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/Orders"
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    }
  ]
}
```
What to remove from execution roles:
- lambda:InvokeFunction unless the function explicitly invokes other functions
- Broad s3:* or s3:GetObject on * when only specific buckets are needed
- iam:* or sts:AssumeRole permissions on execution roles (functions rarely need to assume other roles)
- ssm:GetParameters on * when only specific parameter paths are needed
- secretsmanager:GetSecretValue on * when only specific secrets are needed
IAM Access Analyzer for policy validation: AWS IAM Access Analyzer can generate least-privilege policies based on CloudTrail access activity. Run a function in staging, collect 30 days of CloudTrail, use Access Analyzer policy generation to produce a minimum-permission policy based on observed API calls. This is faster and more accurate than manually auditing policy statements.
Azure Functions: Managed Identity over service principal keys:
For Azure Functions, use system-assigned or user-assigned Managed Identity rather than service principal client secrets. Managed Identity tokens are issued at runtime and do not require storing credentials. Assign the Managed Identity only the RBAC roles needed: Storage Blob Data Reader on a specific container, not Contributor on the subscription.
Event Injection: The Most Exploited Serverless Vulnerability
Event injection occurs when a serverless function uses untrusted data from an event payload to construct a command, query, or external call without proper validation or sanitization. The event source is the equivalent of user input — and in serverless, the event can come from HTTP requests, S3 notifications, SQS messages, DynamoDB Streams, SNS topics, or IoT events.
SQL injection via API Gateway event:
```python
# Vulnerable pattern
def handler(event, context):
    user_id = event['queryStringParameters']['userId']
    query = f"SELECT * FROM orders WHERE user_id = '{user_id}'"
    # userId = "'; DROP TABLE orders; --"
```
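The fix is a parameterized query. The sketch below uses an in-memory SQLite table as a stand-in for the real database; the table name and schema are illustrative assumptions, not the article's actual stack:

```python
import sqlite3

# Stand-in database; table name and schema are illustrative assumptions.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (user_id TEXT, item TEXT)")
conn.execute("INSERT INTO orders VALUES ('alice', 'widget')")

def get_orders(user_id):
    # The ? placeholder binds user_id as data: an injection payload
    # cannot terminate the statement or append new SQL.
    cur = conn.execute("SELECT item FROM orders WHERE user_id = ?", (user_id,))
    return [row[0] for row in cur.fetchall()]

print(get_orders("alice"))                     # normal lookup
print(get_orders("'; DROP TABLE orders; --"))  # no rows; the table survives
```

The same principle applies to any driver or ORM: the query text and the user-supplied values must travel separately.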
Command injection via S3 event:
```python
# Vulnerable pattern — processing filename from S3 event
import os

def handler(event, context):
    key = event['Records'][0]['s3']['object']['key']
    os.system(f'convert {key} output.jpg')  # Shell injection via filename
```
OS command injection is particularly severe in serverless because the execution role may have credentials that an attacker can extract from the function environment:
```shell
# What an attacker does after achieving command execution:
env | grep AWS   # Extracts AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN
curl http://169.254.170.2/...   # AWS Lambda credentials endpoint
```
Defense — validate at the boundary:
- Treat all event data as untrusted regardless of source
- Use parameterized queries for database access — never string concatenation
- Validate and sanitize filenames before using them in OS commands; prefer subprocess with argument lists over os.system()
- Apply JSON Schema validation on API Gateway event payloads before passing them to function logic
- Use API Gateway request validation to reject malformed requests before they reach the function
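The filename and subprocess advice above can be sketched as follows. The allowlist pattern and the convert invocation are illustrative assumptions, not a complete defense:

```python
import re
import subprocess

# Allowlist for S3 object keys. An illustrative assumption: tighten it
# to match the keys your application actually produces.
SAFE_KEY = re.compile(r"^[A-Za-z0-9._/-]+$")

def is_safe_key(key: str) -> bool:
    # Reject shell metacharacters and path traversal outright.
    return bool(SAFE_KEY.fullmatch(key)) and ".." not in key

def process_image(key: str) -> None:
    if not is_safe_key(key):
        raise ValueError(f"rejected suspicious object key: {key!r}")
    # Argument list, not a shell string: the key is a single argv element,
    # so no shell ever parses it.
    subprocess.run(["convert", key, "output.jpg"], check=True)
```

With this in place, a key like `cat.jpg; rm -rf /` is rejected before any command runs, and even a key that passes validation cannot break out of its argv slot.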
SQS injection — poisoned messages: If a function processes SQS messages that originated from external systems, validate message content before processing. A poisoned message that persists in the queue will be processed repeatedly until the function either handles it correctly or it reaches the dead-letter queue.
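A minimal validation sketch for an SQS-delivered message. The field names and types are assumptions for illustration; a JSON Schema library would serve the same purpose:

```python
import json

def parse_order(body: str) -> dict:
    # Parse and validate before any business logic touches the message.
    msg = json.loads(body)
    if not isinstance(msg, dict):
        raise ValueError("message body must be a JSON object")
    order_id = msg.get("order_id")
    qty = msg.get("quantity")
    if not isinstance(order_id, str) or not order_id:
        raise ValueError("order_id must be a non-empty string")
    # bool is a subclass of int in Python, so exclude it explicitly.
    if not isinstance(qty, int) or isinstance(qty, bool) or qty < 1:
        raise ValueError("quantity must be a positive integer")
    return {"order_id": order_id, "quantity": qty}
```

Messages that fail validation should be rejected and allowed to reach the dead-letter queue rather than retried indefinitely.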
Dependency Security in Serverless Deployment Packages
Serverless deployment packages bundle their dependencies — unlike containerized workloads where the base image and application are separate layers, a Lambda zip or container image bundles everything together. This means you own the full dependency tree for every function.
Dependency scanning in CI/CD:
```shell
# npm audit for Node.js Lambda functions
npm audit --audit-level=high

# Fail the pipeline on high/critical CVEs
npm audit --audit-level=high --json | jq '.metadata.vulnerabilities.high + .metadata.vulnerabilities.critical > 0'

# pip-audit for Python Lambda functions
pip install pip-audit
pip-audit -r requirements.txt --format=json

# Snyk integration
snyk test --severity-threshold=high
```
Lambda Layers and shared dependencies: Lambda Layers allow multiple functions to share dependency packages. This reduces deployment package size but creates a shared vulnerability surface: a vulnerable package in a widely-used Layer affects every function that references it. Scan Layer contents with the same rigor as function packages. AWS Lambda publishes Layers for the AWS SDK — use the managed layer rather than bundling the SDK yourself, as AWS patches it.
Software composition analysis (SCA) in the pipeline: Integrate SCA tooling (Snyk, Dependabot, OWASP Dependency-Check) into the CI/CD pipeline with a break-the-build policy on high and critical severity CVEs. Do not rely on post-deployment scanning — fix vulnerabilities before deployment.
Dependency pinning:
Pin dependency versions explicitly rather than using ranges (^, ~, >=). Unpinned dependencies are updated automatically in new builds, which can introduce breaking changes or newly vulnerable versions without an explicit decision. Use lock files (package-lock.json, poetry.lock, requirements.txt with pinned versions) and commit them to the repository.
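A pinned requirements.txt sketch; the package names and versions are illustrative:

```text
# requirements.txt: exact pins, no ranges
boto3==1.34.100
requests==2.32.3
```

Each build then resolves to the same dependency tree, and upgrading a version becomes an explicit, reviewable change.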
Secrets Handling in Serverless: Environment Variables Are Not Secrets
Lambda environment variables are not a secrets store. They are visible in the Lambda console, in GetFunctionConfiguration API calls, and potentially in logs if the runtime accidentally prints the environment. Anyone with the right IAM permissions can read them.
Correct pattern — retrieve secrets at runtime from Secrets Manager or SSM:
```python
import boto3
import json
from functools import lru_cache

@lru_cache(maxsize=1)
def get_db_credentials():
    client = boto3.client('secretsmanager')
    response = client.get_secret_value(SecretId='prod/myapp/db')
    return json.loads(response['SecretString'])

def handler(event, context):
    creds = get_db_credentials()  # Cached after first invocation
    # Connect to database using creds
```
Using lru_cache (or a module-level cache) means the secret is only fetched once per Lambda execution environment rather than on every invocation — important for performance and Secrets Manager API cost.
AWS Lambda Powertools for parameter retrieval:
```python
from aws_lambda_powertools.utilities import parameters

def handler(event, context):
    # Automatically caches with configurable TTL
    db_password = parameters.get_secret('prod/myapp/db', max_age=300)
```
What is acceptable in environment variables:
- Non-sensitive configuration (region, table names, feature flags)
- Resource ARNs and identifiers
- Log level configuration
What never belongs in environment variables:
- Passwords, API keys, private keys
- OAuth client secrets
- Encryption keys
Observability for Security: CloudTrail, CloudWatch, and Runtime Monitoring
Serverless functions are ephemeral — they spin up, execute, and terminate in milliseconds to minutes. Traditional agent-based security monitoring does not apply. Observability must be built into the function and collected via cloud-native services.
CloudTrail for API-level visibility: Every AWS API call made by a Lambda function is logged in CloudTrail under the function's execution role identity. Monitor for:
- Unexpected API calls from function execution roles (a database Lambda calling IAM or EC2 APIs is anomalous)
- Credential exfiltration patterns (Lambda role calling STS GetCallerIdentity, then IAM API calls)
- Data exfiltration via S3 GetObject calls to buckets the function should not access
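A toy version of the allowlist check behind these detections. The role names and expected services are assumptions; in practice this logic lives in a SIEM rule or an EventBridge-triggered detector consuming CloudTrail events:

```python
# Map each execution role to the AWS services it is expected to call.
# Entries here are illustrative.
EXPECTED_SERVICES = {
    "orders-fn-role": {"dynamodb", "logs"},
}

def is_anomalous(role: str, event_source: str) -> bool:
    # CloudTrail's eventSource field looks like "iam.amazonaws.com";
    # the service name is the first label.
    service = event_source.split(".")[0]
    return service not in EXPECTED_SERVICES.get(role, set())

print(is_anomalous("orders-fn-role", "iam.amazonaws.com"))       # flagged
print(is_anomalous("orders-fn-role", "dynamodb.amazonaws.com"))  # expected
```

Unknown roles default to an empty allowlist, so any call from an unmapped role is flagged for triage rather than silently passed.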
CloudWatch Logs Insights — detecting anomalies:
```
# Detect Lambda functions making IAM API calls (anomalous for most functions)
fields @timestamp, @message
| filter @message like /iam\.amazonaws\.com/
| stats count() as calls by functionName, requestParameters.userName
| sort calls desc
```
Amazon GuardDuty for serverless: GuardDuty supports Lambda threat detection (enable Lambda Protection). Detects behavioral anomalies: functions calling unusual external IPs, unusual API call patterns, credential exfiltration signatures. Enable at the organization level in AWS Organizations — do not leave it disabled to save cost.
AWS Lambda Insights: Enhanced monitoring for CPU time, memory utilization, disk usage, and network activity. Enables anomaly detection on resource usage — a function using significantly more network bandwidth than baseline may indicate data exfiltration.
Runtime Application Self-Protection (RASP) for serverless: Tools like Datadog Application Security Management and Contrast Security Serverless provide function-level runtime monitoring — detecting injection attacks, sensitive data access, and anomalous behavior from within the function execution. These add latency (typically 1-5ms) but provide visibility that platform-level logging cannot.
Serverless Threat Modeling: Mapping the Attack Surface
Use the following framework to threat model a serverless application systematically.
Step 1 — Map all event sources: List every event source that triggers each function: API Gateway endpoints, S3 bucket notifications, SQS queues, SNS topics, EventBridge rules, Cognito triggers, DynamoDB Streams. Each event source is a potential untrusted data entry point.
Step 2 — Evaluate event source authentication:
- API Gateway: Is the endpoint publicly accessible? Is authentication required (Cognito User Pool, Lambda authorizer, API key)? Is authorization checked in the function or the gateway?
- S3 triggers: Who can write to the triggering bucket? Can an external party upload a malicious file that triggers the function?
- SQS: Is the queue public? Can external parties write messages? Is message content validated before processing?
Step 3 — Analyze execution role permissions: For each function, enumerate what the execution role can do: which services, which resources, which actions. Map this against what the function actually needs. The gap is the blast radius.
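The gap in Step 3 reduces to a set difference between granted and observed actions. The action lists below are illustrative:

```python
# Actions granted by the execution role policy (illustrative).
granted = {
    "dynamodb:GetItem", "dynamodb:Query", "dynamodb:Scan",
    "s3:GetObject", "s3:PutObject",
}
# Actions actually observed in CloudTrail for this role (illustrative).
observed = {"dynamodb:GetItem", "dynamodb:Query"}

# Everything granted but never used is removable blast radius.
unused = sorted(granted - observed)
print(unused)  # → ['dynamodb:Scan', 's3:GetObject', 's3:PutObject']
```

IAM Access Analyzer automates the "observed" side of this comparison from CloudTrail activity, but the mental model is exactly this subtraction.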
Step 4 — Trace data flows: What data enters the function? What external services does it call? What data does it write? Where could sensitive data be logged accidentally?
Step 5 — Enumerate the dependency tree: Run SCA against the deployment package. Identify any high/critical CVEs and assess exploitability in the function's specific usage context.
OWASP Serverless Top 10 as a checklist:
- Injection flaws (event injection)
- Broken authentication
- Insecure serverless deployment configuration
- Over-privileged function permissions and roles
- Inadequate function monitoring and logging
- Insecure third-party dependencies
- Insecure application secrets storage
- Denial of service and financial resource exhaustion (denial of wallet)
- Serverless function execution flow manipulation
- Improper exception handling and verbose error messages
The bottom line
Serverless security reduces to three primary controls: least-privilege IAM execution roles, rigorous input validation at every event source boundary, and runtime observability via CloudTrail and GuardDuty. The overprivileged execution role is the most common and consequential mistake — fix it first by auditing every function role against what the function actually calls. Enable GuardDuty Lambda Protection organization-wide. Run SCA against deployment packages in CI/CD. Retrieve secrets from Secrets Manager at runtime rather than environment variables. The cloud provider eliminates infrastructure security overhead — use that headspace to focus on the application and IAM layer where serverless attacks actually occur.
Frequently asked questions
Is serverless more or less secure than traditional compute?
Neither — it is differently secure. Serverless eliminates an entire class of risk: OS vulnerability management, kernel exploits, and network perimeter configuration for the underlying host. You do not patch Lambda's underlying OS. However, it concentrates risk in areas where many teams are less experienced: IAM execution role configuration, event source input validation, and dependency management. Organizations that would otherwise neglect OS patching often benefit from serverless. Organizations with mature infrastructure security programs but immature application security may introduce new risk.
How do I enforce least privilege on Lambda execution roles?
Start with AWS IAM Access Analyzer policy generation — run the function in a staging environment for 2-4 weeks, then use Access Analyzer to generate a minimum-permission policy based on observed CloudTrail activity. Complement with manual review of the generated policy against the function's actual requirements. One function per execution role — never share roles across functions with different access requirements. Set up AWS Config rules or Service Control Policies to alert or deny wildcard (`*`) actions and resources in Lambda execution role policies.
What is event injection in serverless and how do I prevent it?
Event injection occurs when a function uses untrusted data from an event payload — HTTP request parameters, S3 object keys, SQS message bodies — in a SQL query, OS command, or external API call without validation. Prevention: treat all event data as untrusted, use parameterized queries for database access, validate payloads against a JSON Schema before processing, use subprocess with argument lists instead of os.system() for external commands, and configure API Gateway request validation to reject malformed inputs before they reach the function.
How do I handle secrets in AWS Lambda without hardcoding them?
Retrieve secrets at runtime from AWS Secrets Manager or SSM Parameter Store using the function's execution role identity (no stored credentials needed). Cache the retrieved secret in a module-level variable or with lru_cache to avoid fetching on every invocation. AWS Lambda Powertools' parameters utility provides TTL-based caching. Never put passwords, API keys, or private keys in environment variables — they are visible in the Lambda console and GetFunctionConfiguration API calls.
Can WAFs protect serverless applications?
Partially, and only for HTTP-triggered functions. AWS WAF can be attached to API Gateway, CloudFront, or Application Load Balancer to filter HTTP traffic before it reaches Lambda. It blocks common web attacks (SQL injection, XSS, OWASP Top 10 patterns) and rate limits requests. However, WAF only covers HTTP event sources — functions triggered by S3, SQS, SNS, DynamoDB Streams, or EventBridge are not protected by WAF. Function-level input validation is required regardless of WAF coverage.
What OWASP guidance exists for serverless?
The OWASP Serverless Top 10 is the primary reference document. It covers injection flaws, broken authentication, insecure deployment configuration, over-privileged roles, inadequate monitoring, vulnerable dependencies, insecure secret storage, denial of wallet attacks, function execution flow manipulation, and improper error handling. The OWASP ServerlessGoat project is a deliberately vulnerable serverless application for hands-on learning — useful for understanding how these vulnerabilities look in practice before finding them in your own environment.
Sources & references
- OWASP Serverless Top 10
- AWS Lambda Security Best Practices — AWS Security Blog
- CNCF Cloud Native Security Whitepaper 2025
- Datadog State of Serverless 2025
- PureSec Serverless Security Top 10 (Foundational Research)