Ippon Blog

Security Recommendations for Serverless Applications: A Deep Dive

Written by Iryna Chmelyk | December 8, 2025

Introduction

The “Implementing security best practices for serverless applications” session at re:Invent was a very useful talk for anyone running traditional serverless or agentic AI workloads on AWS. It also validated that the security patterns Ippon teams recommend and implement for our clients are fully aligned with the latest AWS guidance and current best practices.

Current Security Best Practices for Serverless Applications

Environment isolation with AWS accounts
Using a separate AWS account for each product environment (dev, test, and prod) creates hard isolation boundaries that limit blast radius and reduce the risk of accidental cross-environment access. It also lets teams apply consistent guardrails at the OU and account level, for example with service control policies (SCPs).
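
To make that concrete, here is a hedged sketch of one common OU-level guardrail: a hypothetical SCP (the region list and OU ID are placeholders) that denies activity outside approved regions, created and attached with boto3.

```python
import json
import boto3

# Hypothetical guardrail: deny requests outside approved regions for every
# account under a workloads OU. Region list and OU ID are placeholders.
REGION_GUARDRAIL = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnapprovedRegions",
        "Effect": "Deny",
        "NotAction": ["iam:*", "organizations:*", "sts:*"],  # global services
        "Resource": "*",
        "Condition": {
            "StringNotEquals": {"aws:RequestedRegion": ["us-east-1", "eu-west-1"]}
        },
    }],
}

org = boto3.client("organizations")
policy = org.create_policy(
    Name="deny-unapproved-regions",
    Description="Restrict workload accounts to approved regions",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(REGION_GUARDRAIL),
)
# Attach the guardrail to the OU holding the dev/test/prod accounts.
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-xxxx-xxxxxxxx",  # placeholder OU ID
)
```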

Encryption at rest and in transit
Encrypting all data at rest with KMS-backed keys and enforcing TLS in transit ensures that even if data is accessed outside intended paths, it remains protected and auditable. This is critical both for regulatory requirements and for defending against credential leaks, misconfigurations, and interception on the wire. Some AWS services encrypt data by default, while others require you to turn encryption on explicitly, so double-check each service’s settings to confirm encryption is enabled everywhere.
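
As a minimal sketch of enforcing both halves on an S3 bucket (the bucket name and KMS key ARN are placeholders): default SSE-KMS covers data at rest, and a bucket policy that rejects non-TLS requests covers data in transit.

```python
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "example-app-data"  # placeholder bucket
KMS_KEY_ARN = "arn:aws:kms:us-east-1:123456789012:key/placeholder"  # placeholder CMK

# At rest: default every new object to SSE-KMS with a customer-managed key.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": KMS_KEY_ARN,
            },
            "BucketKeyEnabled": True,
        }]
    },
)

# In transit: reject any request that is not made over TLS.
s3.put_bucket_policy(
    Bucket=BUCKET,
    Policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }],
    }),
)
```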

Layered IAM with no wildcards
The session reinforced a defense-in-depth approach to IAM: SCPs for organization-wide guardrails; resource policies on services like Lambda, API Gateway, S3, VPC endpoints, and KMS keys; and tightly scoped IAM principal and session policies along with permissions boundaries. Avoiding "*" in actions, resources, and principals is not just theory; it directly reduces the impact of compromised credentials or bugs by ensuring identities can only do exactly what they need.
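
For illustration, a least-privilege identity policy for a single function might look like the hypothetical statement below (table name, region, and account ID are placeholders); every action and resource is named explicitly, with no wildcards.

```python
# Hypothetical least-privilege policy for one function: specific actions
# against a single table ARN, no "*" anywhere. All identifiers are placeholders.
ORDERS_TABLE_ARN = "arn:aws:dynamodb:us-east-1:123456789012:table/Orders"

least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "OrdersTableReadWrite",
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
        "Resource": ORDERS_TABLE_ARN,
    }],
}
```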

Lambda execution roles and resource-based policies
Giving each Lambda function a dedicated, minimal execution role and using resource-based policies to control who can invoke it turns identity into a precise perimeter. This prevents lateral movement between functions and services and makes permissions easier to review over time as serverless systems evolve.
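
The resource-based half can be as small as one call. A hedged sketch with boto3, where the function name, API ID, and route are placeholders:

```python
import boto3

lambda_client = boto3.client("lambda")

# Resource-based policy statement: only one specific API Gateway route in this
# account may invoke the function. Function name, API ID, and ARN are placeholders.
lambda_client.add_permission(
    FunctionName="orders-api-handler",
    StatementId="AllowOrdersApiInvoke",
    Action="lambda:InvokeFunction",
    Principal="apigateway.amazonaws.com",
    SourceArn="arn:aws:execute-api:us-east-1:123456789012:abc123/*/POST/orders",
)
```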

Detection and runtime visibility (Inspector and GuardDuty)
Amazon Inspector provides automated scanning of code and dependencies so known vulnerabilities are caught before deployment, which is critical in fast-moving serverless pipelines. Amazon GuardDuty then monitors for suspicious activity and runtime anomalies across accounts, filling the observability gap that traditional host-based agents can’t cover in serverless environments.
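
Turning both services on is a small amount of setup. A hedged sketch with boto3 follows; in practice this usually runs from a delegated administrator account and lives in infrastructure as code.

```python
import boto3

# Enable GuardDuty threat detection in the current account and region.
guardduty = boto3.client("guardduty")
guardduty.create_detector(Enable=True, FindingPublishingFrequency="FIFTEEN_MINUTES")

# Enable Inspector scanning for Lambda functions and their code/dependencies.
inspector = boto3.client("inspector2")
inspector.enable(resourceTypes=["LAMBDA", "LAMBDA_CODE"])
```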

Input validation and edge protection
Validating untrusted payloads at API Gateway and again inside Lambda (for example, with the Powertools validation library) stops malformed or malicious data before it can reach core logic. Adding AWS WAF in front of public-facing endpoints, with AWS-managed and custom rules, provides an extra shield against common web exploits, bots, and abuse patterns that would otherwise hit your APIs directly.
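
Here is a minimal sketch of the in-function half using the Powertools for AWS Lambda (Python) validation utility; the JSON Schema and handler shown are hypothetical, and API Gateway request validation can enforce the same schema at the edge.

```python
import json

from aws_lambda_powertools.utilities.validation import SchemaValidationError, validate

# Hypothetical JSON Schema for the request body.
ORDER_SCHEMA = {
    "type": "object",
    "required": ["orderId", "quantity"],
    "properties": {
        "orderId": {"type": "string", "maxLength": 64},
        "quantity": {"type": "integer", "minimum": 1, "maximum": 100},
    },
    "additionalProperties": False,
}


def handler(event, context):
    try:
        # Validate the untrusted payload again inside the function (defense in depth).
        body = json.loads(event.get("body") or "{}")
        validate(event=body, schema=ORDER_SCHEMA)
    except (json.JSONDecodeError, SchemaValidationError):
        return {"statusCode": 400, "body": "Invalid request"}
    # Core logic only ever sees validated input from here on.
    return {"statusCode": 200, "body": "OK"}
```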

Authorization Best Practices

Cognito vs. Lambda authorizers + Verified Permissions
Using Cognito or an external OIDC provider for user authentication, and then choosing between Cognito authorizers and Lambda authorizers based on how much custom logic is needed, cleanly separates identity from application code. Layering Amazon Verified Permissions on top, with Cedar policies, enables fine-grained, context-aware authorization that can be tested, audited, and evolved without rewriting Lambda functions.
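
Inside a Lambda authorizer (or any service code), the Verified Permissions check itself can be a single call. A minimal sketch with boto3, where the policy store ID and Cedar entity types are placeholders:

```python
import boto3

avp = boto3.client("verifiedpermissions")

# Policy store ID and Cedar entity types are placeholders; the Cedar policies
# in the store decide the outcome, so this code stays stable as rules evolve.
response = avp.is_authorized(
    policyStoreId="ps-placeholder",
    principal={"entityType": "App::User", "entityId": "alice"},
    action={"actionType": "App::Action", "actionId": "ViewOrder"},
    resource={"entityType": "App::Order", "entityId": "order-123"},
)
allowed = response["decision"] == "ALLOW"
```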

OAuth2 Grant Types for serverless and agentic AI
The session focused on the two OAuth2 grant types that matter most in these architectures: authorization code for user-facing serverless applications, and client credentials for machine-to-machine (M2M) and agent-to-API scenarios. M2M communication becomes critical in agentic AI workloads, where agents often act as the “system,” using client credentials to call downstream APIs, while frontends rely on the authorization code flow to maintain strong user identity and consent. Here’s how the flow looks:
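
In code, the machine-to-machine half boils down to a token request using the client credentials grant. A hedged sketch against a Cognito-style OAuth2 token endpoint, where the domain, client ID and secret, scope, and API URL are all placeholders:

```python
import requests

# Placeholder Cognito domain; any OAuth2 authorization server exposes an
# equivalent token endpoint for the client credentials grant.
TOKEN_URL = "https://example-domain.auth.us-east-1.amazoncognito.com/oauth2/token"

token_response = requests.post(
    TOKEN_URL,
    data={"grant_type": "client_credentials", "scope": "orders-api/read"},
    auth=("example-client-id", "example-client-secret"),  # placeholder credentials
    timeout=10,
)
token_response.raise_for_status()
access_token = token_response.json()["access_token"]

# The agent or backend service presents the token to the downstream API.
api_response = requests.get(
    "https://api.example.com/orders",
    headers={"Authorization": f"Bearer {access_token}"},
    timeout=10,
)
```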

Auth with AgentCore Identity

Inbound and outbound auth 

This session also walked through inbound and outbound auth for Amazon Bedrock AgentCore Identity, which is expected to become increasingly relevant as serverless applications start using AI agents as part of their stack.

AgentCore Identity is a centralized identity and credential management service for AI agents and automated workloads, designed so that agents become first-class identities with their own permissions and securely managed credentials. It was launched to solve the specific problem of how agents authenticate to gateways and downstream tools in a way that is auditable, least-privilege, and aligned with familiar patterns like OAuth2 and IAM.

Inbound auth in AgentCore Identity focuses on who is allowed to invoke an agent or gateway: it validates requests using IAM, OAuth2/OIDC providers, and JWT verification so that only trusted users, services, or other agents can reach your agent endpoints.
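
The JWT-verification piece of inbound auth follows the familiar OIDC pattern. As a generic illustration only (not the AgentCore Identity implementation itself), with a placeholder JWKS URL and audience, validating a bearer token against a provider’s signing keys looks roughly like this:

```python
import jwt  # PyJWT
from jwt import PyJWKClient

# Placeholder JWKS URL; in practice it comes from the OIDC provider's
# discovery document. The audience value is also a placeholder.
jwks_client = PyJWKClient("https://issuer.example.com/.well-known/jwks.json")


def verify_inbound_token(token: str) -> dict:
    signing_key = jwks_client.get_signing_key_from_jwt(token)
    # Rejects expired tokens, bad signatures, and wrong audiences before the
    # request ever reaches the agent endpoint.
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience="orders-agent",
        options={"require": ["exp", "iss", "aud"]},
    )
```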

Outbound auth then controls what the agent is allowed to call once it is running: it centrally manages OAuth clients, API keys, and IAM roles, and issues scoped credentials for downstream APIs and SaaS integrations.
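
To make “scoped credentials” concrete, again as a generic illustration rather than the AgentCore Identity API itself: the IAM flavor of this is a short-lived STS session whose inline session policy narrows the assumed role even further. The role ARN and bucket path below are placeholders.

```python
import json
import boto3

sts = boto3.client("sts")

# Session policy: effective permissions are the intersection of the role's
# policy and this document, so the agent gets read access to one prefix only.
session_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::agent-tool-data/reports/*",  # placeholder
    }],
}

credentials = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/agent-tool-role",  # placeholder
    RoleSessionName="agent-session",
    Policy=json.dumps(session_policy),
    DurationSeconds=900,  # short-lived
)["Credentials"]
```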

In the context of serverless security, this dual model makes a lot of sense: inbound auth mirrors API Gateway and Lambda authorizers for protecting entry points, while outbound auth mirrors tightly scoped execution roles and resource-based policies, giving agents the minimum access they need with a clear audit trail. You can see a visualization of this below:

Final thoughts

This session did more than list patterns; it connected them into a coherent security model for serverless and agentic AI workloads. It was also a strong confirmation that the architectures and recommendations we’re bringing to clients are fully aligned with where AWS security best practices are today and where they’re heading next.

If your organization needs help figuring out how to make serverless applications on AWS more secure, contact us and we will be happy to help.