

Nov 22, 2024
Headless websites, such as those built using Next.js, offer several performance and SEO benefits over traditional websites. However, launching a website with a modern headless CMS like XM Cloud requires a rendering hosting service.
These hosting services manage the front-end delivery and static assets separately from the CMS backend, whereas a traditional website bundles the front-end and backend on the same server. This approach provides the performance and scalability benefits of a headless website.
Many XM Cloud users opt for Vercel or Netlify when selecting a hosting option. However, we’ve found that for enterprises that want to control costs and infrastructure management, setting up AWS resources and leveraging serverless application deployment can cost less than services like Vercel or Netlify.
This article will explore hosting options and alternatives to Netlify and Vercel for hosting headless websites, specifically self-hosting Next.js applications on AWS.
When it comes to deploying a Next.js website, the most common hosting options include:
Vercel
Netlify
AWS Amplify
Self-hosting
As the creator of Next.js, Vercel offers hosting that supports all Next.js features. Choosing options outside of Vercel could mean losing out on some of those features. However, in most cases, businesses aren’t using the full assortment of available features.
Vercel, Netlify, and AWS Amplify all provide easy setup. However, there are several reasons why you may choose to self-host your Next.js application. Some of these reasons may be:
It is expensive to host with Vercel or other providers
You are an AWS shop with all of your infrastructure in AWS, but you aren’t happy with the Amplify service. The reasons could range from the time it takes Amplify to support newer Next.js features to the “miss from Cloudfront” issue that has been open for a while awaiting a fix.
You want to leverage the AWS services that Netlify, Vercel, and AWS Amplify use under the hood.
You want to optimize costs.
Some of the criteria in our approach included the need to:
Leverage CloudFront for caching
Leverage S3 for assets
Leverage Lambda functions to run the Next.js application
Leverage all AWS services for this architecture
Have zero-downtime deployment with this architecture
We have built our infrastructure with the following components:
AWS CloudFront
AWS API Gateway
AWS Lambda Function in VPC
AWS S3
AWS ElastiCache Redis in VPC
AWS Codebuild
AWS CloudWatch
Amazon EventBridge
AWS IAM
We also created Terraform scripts to create this infrastructure for us (included in the appendix at the end of the article). The overall architecture is as follows:
The request hits CloudFront, which either forwards it to API Gateway for the Next.js application or routes requests matching the nextjsassets-* pattern, as well as public assets, to an S3 bucket.
The API Gateway request is integrated with the AWS Lambda function running the Next.js application using proxy integration.
The AWS Lambda function runs on the Node.js 18.x runtime.
We added an environment variable, BUILD_STANDALONE, and set it to false locally but to true in upper-environment builds.
We built the Next.js application in standalone mode using the output setting in next.config.js.
output: process.env.BUILD_STANDALONE === "true" ? "standalone" : undefined
We used the CloudFront domain as the publicUrl (PUBLIC_URL in the .env file), e.g., https://abc1234.cloudfront.net
We set the assetPrefix in next.config.js to be a combination of the publicUrl, the assets base folder prefix, and the CodeBuild build number:
assetPrefix: process.env.BUILD_STANDALONE === "true" ? publicUrl+"/"+process.env.ASSETS_BASE_FOLDER_PREFIX+"-"+process.env.CODEBUILD_BUILD_NUMBER : publicUrl
We needed this to be build-number-based so that we can have zero-downtime deployment: we are not overwriting existing files but writing to a new folder in S3 for every build. This provides a clear separation of builds and supports the cleanup of older builds through maintenance jobs (a sketch of such a job appears after the bucket layout below). We also need this assets base folder prefix to configure the corresponding CloudFront rule that serves requests matching this path from the S3 bucket. This gives an assetPrefix such as https://abc1234.cloudfront.net/nextjsassets-1 and file paths such as https://abc1234.cloudfront.net/nextjsassets-10/_next/static/chunks/pages/_app-1234.js
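Putting the output and assetPrefix settings together, the relevant portion of next.config.js looks roughly like this sketch (it reuses the environment variables described above and omits the rest of the config):

// next.config.js (sketch of the build-related settings described above)
const publicUrl = process.env.PUBLIC_URL; // e.g. https://abc1234.cloudfront.net

const nextConfig = {
  // Emit the self-contained server bundle only for upper-environment builds.
  output: process.env.BUILD_STANDALONE === "true" ? "standalone" : undefined,

  // Serve static chunks from the per-build S3 folder behind CloudFront,
  // e.g. https://abc1234.cloudfront.net/nextjsassets-10
  assetPrefix:
    process.env.BUILD_STANDALONE === "true"
      ? publicUrl + "/" + process.env.ASSETS_BASE_FOLDER_PREFIX + "-" + process.env.CODEBUILD_BUILD_NUMBER
      : publicUrl,
};

module.exports = nextConfig;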
We created a folder in public named nextjsassets-public and put all the public assets (primarily images or image icons) in there. This folder is uploaded to the same S3 bucket that contains the .next/static assets, which provides better caching for these assets. So the S3 bucket structure will look like this:
S3 bucket
- nextjsassets-public
- nextjsassets-1
  - _next
    - static
- nextjsassets-2
  - _next
    - static
- nextjsassets-3
  - _next
    - static
If you foresee using this public folder for changing assets, such as CSS/JS that change from one build to the next, then there are better ways to support zero-downtime deployment. In that case, you would copy these into the .next/standalone/public folder so that they are served by the Node.js runtime in the Lambda function.
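Because every build writes its static files to a new nextjsassets-<build number> folder, old folders accumulate in the bucket over time. The maintenance job mentioned above for cleaning up older builds can be a small script along these lines (a hypothetical sketch assuming the AWS SDK for JavaScript v3, the ASSETS_S3_BUCKET variable used in the buildspec below, and a retention count of three builds):

// Hypothetical cleanup job: keep the most recent nextjsassets-* build folders and delete the rest.
// Assumes AWS SDK for JavaScript v3 and credentials available in the environment.
const { S3Client, ListObjectsV2Command, DeleteObjectsCommand } = require('@aws-sdk/client-s3');

const s3 = new S3Client({});
const BUCKET = process.env.ASSETS_S3_BUCKET; // same bucket the buildspec copies assets to
const KEEP = 3; // number of recent builds to retain

async function cleanup() {
  // List the top-level "folders" (common prefixes) that match nextjsassets-<number>/
  const { CommonPrefixes = [] } = await s3.send(
    new ListObjectsV2Command({ Bucket: BUCKET, Delimiter: '/', Prefix: 'nextjsassets-' })
  );
  const builds = CommonPrefixes.map((p) => p.Prefix)
    .filter((prefix) => /^nextjsassets-\d+\/$/.test(prefix)) // skips nextjsassets-public/
    .sort((a, b) => parseInt(a.split('-')[1], 10) - parseInt(b.split('-')[1], 10));

  // Everything except the newest KEEP folders gets deleted.
  for (const prefix of builds.slice(0, Math.max(builds.length - KEEP, 0))) {
    // Pagination for folders with more than 1000 objects is omitted for brevity.
    const { Contents = [] } = await s3.send(new ListObjectsV2Command({ Bucket: BUCKET, Prefix: prefix }));
    if (Contents.length) {
      await s3.send(
        new DeleteObjectsCommand({
          Bucket: BUCKET,
          Delete: { Objects: Contents.map(({ Key }) => ({ Key })) },
        })
      );
    }
  }
}

cleanup().catch(console.error);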
A sample run.sh file is used to start the Next.js application using the Node.js runtime. The file contents are as follows:
#!/bin/bash
node standalone/server.js
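This script is what the Lambda function's handler points to (handler = "run.sh" in the Terraform appendix). The AWS Lambda Web Adapter layer, configured there via AWS_LAMBDA_EXEC_WRAPPER and PORT, converts API Gateway events into HTTP requests against the Node.js server this script starts. Conceptually, the generated standalone server does the equivalent of the following simplified sketch (illustrative only; the real server.js is generated by the standalone build):

// Simplified sketch (not the generated file): start Next.js and listen on PORT,
// which the Lambda Web Adapter forwards requests to (PORT=8080 in the Terraform appendix).
const { createServer } = require('http');
const next = require('next');

const port = parseInt(process.env.PORT || '3000', 10);
const app = next({ dev: false });
const handle = app.getRequestHandler();

app.prepare().then(() => {
  createServer((req, res) => handle(req, res)).listen(port);
});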
Image optimization in Next.js is another area that needs to be addressed. However, in the XM Cloud Next.js scenario, most images come from Sitecore Edge and are optimized outside Next.js. If needed, this can be addressed using image loaders, as described in the Next.js documentation.
module.exports = {
  images: {
    loader: 'custom',
    loaderFile: './my/image/loader.js',
  },
}

// my/image/loader.js contents
'use client'

export default function myImageLoader({ src, width, quality }) {
  return `https://example.com/${src}?w=${width}&q=${quality || 75}`
}
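With a global loaderFile configured, components keep using next/image as usual and the loader is invoked for each candidate width. A hypothetical component (the src path and dimensions are illustrative only):

import Image from 'next/image';

// Hypothetical usage: next/image calls the custom loader above for every requested width,
// so resizing happens at the image CDN (e.g., Sitecore Edge) instead of inside the Lambda runtime.
export default function Hero() {
  return <Image src="/media/hero.jpg" alt="Hero banner" width={1200} height={600} />;
}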
With the Redis cache changes in place, ISR will work as expected. See the ISR section below for details.
We explored AWS CodeBuild, AWS CodePipeline, and AWS CodeDeploy to see which tool would be best for the job. We started with AWS CodeBuild to build the artifacts and push them to separate artifact S3 buckets.
AWS CodeDeploy claims to support zero-downtime deployments to Lambda functions and EC2 instances, but it did not provide a way to deploy the artifacts built by AWS CodeBuild to Lambda. All of the examples showed how to shift the alias from one version of the Lambda function to another but did not address the deployment itself.
As such, AWS CodeBuild and AWS CodePipeline were the more appropriate tools for the job. AWS CodePipeline incorporates AWS CodeBuild into its process and would be the more complete option. However, all that remained was to push the assets to S3 and the artifact zip to the Lambda function, and both can be handled through AWS CLI commands, so for now we extended the AWS CodeBuild buildspec to incorporate these steps.
AWS CodeBuild exposes all of the build environment variables as environment variables on the build server. However, we wanted a way to mark environment variables with a certain pattern so that we could pull them all with a grep command on Linux and build the .env.local file for the Next.js build. We chose to prefix all such variables with OSH_AMP_. For example, the following environment variables are configured on the AWS CodeBuild project (some of the variables are omitted for brevity):
OSH_AMP_PUBLIC_URL | https://abc1234.cloudfront.net
OSH_AMP_SITECORE_API_KEY | abcdef
OSH_AMP_GRAPH_QL_ENDPOINT | https://edge.sitecorecloud.io/api/graphql/v1
OSH_AMP_ISR_REVALIDATE | 3600
OSH_AMP_ASSETS_BASE_FOLDER_PREFIX | nextjsassets
ENV_VARIABLE_1 | value1
ENV_VARIABLE_2 | value2
This would produce the following .env.local file for the Next.js build
PUBLIC_URL=https://abc1234.cloudfront.net
SITECORE_API_KEY=abcdef
GRAPH_QL_ENDPOINT=https://edge.sitecorecloud.io/api/graphql/v1
ISR_REVALIDATE=3600
ASSETS_BASE_FOLDER_PREFIX=nextjsassets
We also push the following into the .env.local file
CODEBUILD_BUILD_NUMBER=10
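As one example of how a value like ISR_REVALIDATE can be consumed (a sketch of an assumed usage, not necessarily how every page is wired), a page's getStaticProps can read it for its revalidation interval:

// Sketch: use the ISR_REVALIDATE value from .env.local as the ISR revalidation interval (assumed usage).
export async function getStaticProps() {
  return {
    props: {},
    // Fall back to one hour if the variable is missing.
    revalidate: parseInt(process.env.ISR_REVALIDATE || '3600', 10),
  };
}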
Sample BuildSpec file
version: 0.2
phases:
  install:
    commands:
      - export FE_SOURCE=${CODEBUILD_SRC_DIR}/src/sxastarter
      - cd $FE_SOURCE
      - printenv|grep OSH_AMP_.*|sed 's/OSH_AMP_//gI'>> .env.local
      - printenv|grep CODEBUILD_BUILD_NUMBER >> .env.local
      - npm install
  build:
    commands:
      - echo "Building nextjs application"
      - npm run build
      - echo "finished building nextjs app"
  post_build:
    commands:
      # move necessary files into the .next folder and remove folders that are not needed
      # The public folder here is bundled into the Next.js application, but it can also be
      # pushed to the separate nextjsassets-public folder on the S3 bucket.
      - mv ${FE_SOURCE}/public ${FE_SOURCE}/.next/standalone
      - mv ${FE_SOURCE}/run.sh ${FE_SOURCE}/.next
      - mv ${FE_SOURCE}/.env.local ${FE_SOURCE}/.next/standalone
      - rm -rf ${FE_SOURCE}/.next/cache
      - rm -rf ${FE_SOURCE}/.next/server
      # move assets for the s3 bucket
      - mkdir assets
      - export ASSETS_BASE_FOLDER=${ASSETS_BASE_FOLDER_PREFIX}-${CODEBUILD_BUILD_NUMBER}
      - echo ${ASSETS_BASE_FOLDER}
      - mkdir assets/${ASSETS_BASE_FOLDER}
      - mkdir assets/${ASSETS_BASE_FOLDER}/_next
      - mv ${FE_SOURCE}/.next/static assets/${ASSETS_BASE_FOLDER}/_next/
      # set artifact sources
      - export ARTIFACT1_SRC=${FE_SOURCE}/.next
      - export ARTIFACT2_SRC=${FE_SOURCE}/assets
      # copy assets to s3
      - aws s3 cp ${FE_SOURCE}/assets s3://${ASSETS_S3_BUCKET}/ --recursive
      # deploy zip to the lambda function
      - cd $ARTIFACT1_SRC
      - zip -r artifact1.zip ./*
      - aws lambda get-alias --function-name=${LAMBDA_FUNCTION_NAME} --name=${LAMBDA_ALIAS_NAME} --query "to_number(FunctionVersion)"
      - publishedVersion=$(aws lambda update-function-code --function-name=${LAMBDA_FUNCTION_NAME} --zip-file fileb://artifact1.zip --publish --query "Version" --output text)
      - echo $publishedVersion
      # allow some time for the deployment of the zip file and update-function-code to finish
      - sleep 10
      # warm up the lambda and update the alias to go live
      - aws lambda invoke --function-name=${LAMBDA_FUNCTION_NAME} --qualifier=$publishedVersion response.json
      - aws lambda update-alias --function-name=${LAMBDA_FUNCTION_NAME} --name=${LAMBDA_ALIAS_NAME} --function-version $publishedVersion
      # purge cloudfront
      - aws cloudfront create-invalidation --distribution-id ${CLOUDFRONT_DISTRIBUTION_ID} --paths "/" "/*"
artifacts:
  files:
    - '**.*'
  secondary-artifacts:
    artifact1:
      base-directory: ${ARTIFACT1_SRC}
      files:
        - '**/*'
    artifact2:
      base-directory: ${ARTIFACT2_SRC}
      files:
        - '**/*'
The deployment gets you a basic, working Next.js site hosted and serving HTML. However, the ISR feature was not working. This is where we introduced Redis into the architecture and used it for caching to support the ISR feature of Next.js, as described on the Next.js site. We used @neshca/cache-handler to support the Redis cache.
We made the following changes to the Next.js application in next.config.js:
const nextConfig = {
  // ...
  // Use the `experimental` option instead of the `cacheHandler` property when using
  // Next.js versions from 13.5.1 to 14.0.4
  // cacheHandler: require.resolve('./cache-handler.mjs'),
  experimental: {
    incrementalCacheHandlerPath: require.resolve('./cache-handler.js'),
  },
  // ...
}
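If you are on Next.js 14.1 or later, the same handler can be registered through the stable top-level option instead of the experimental one (a sketch based on the Next.js self-hosting documentation; setting cacheMaxMemorySize to 0 disables the default in-memory cache so Redis stays the single source of truth):

const nextConfig = {
  // ...
  cacheHandler: require.resolve('./cache-handler.js'),
  cacheMaxMemorySize: 0, // disable the default in-memory caching
  // ...
}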
const { CacheHandler } = require("@neshca/cache-handler");
//import { CacheHandler } from '@neshca/cache-handler/dist';
const { isImplicitTag } = require('@neshca/cache-handler/helpers');
const { createClient, commandOptions } = require('redis');

CacheHandler.onCreation(async () => {
  // Always create a Redis client inside the `onCreation` callback.
  const client = createClient({
    url: process.env.REDIS_SERVICE_URL,
    pingInterval: 1000,
    socket: {
      connectTimeout: 30000,
    },
  });

  // Ignore Redis errors: https://github.com/redis/node-redis?tab=readme-ov-file#events.
  client.on('error', (mesg) => {
    console.log('error message:', process.env.REDIS_SERVICE_URL, mesg);
  });

  await client.connect();
  console.log("codebuild:" + process.env.CODEBUILD_BUILD_NUMBER);

  // Define a timeout for Redis operations.
  const timeoutMs = 30000;

  // Define a key prefix for the cache.
  // It is useful to avoid key collisions with other data in Redis,
  // or to delete all cache keys at once by using a pattern.
  const keyPrefix = 'pointB:' + process.env.CODEBUILD_BUILD_NUMBER + ":";

  // Define a key for shared tags.
  // You'll see how to use it later in the `revalidateTag` method.
  const sharedTagsKey = '_sharedTags_';

  // Create an assert function to ensure that the client is ready before using it.
  // When you throw an error in any Handler method,
  // the CacheHandler will use the next available Handler listed in the `handlers` array.
  function assertClientIsReady() {
    if (!client.isReady) {
      console.log("client is not ready");
      throw new Error('Redis client is not ready yet or connection is lost.');
    }
    console.log("Client is ready");
  }

  const revalidatedTagsKey = `${keyPrefix}__revalidated_tags__`;

  // Create a custom Redis Handler
  const customRedisHandler = {
    // Give the handler a name.
    // It is useful for logging in debug mode.
    name: 'redis-strings-custom',

    // We do not use try/catch blocks in the Handler methods.
    // CacheHandler will handle errors and use the next available Handler.
    async get(key) {
      // Ensure that the client is ready before using it.
      // If the client is not ready, the CacheHandler will use the next available Handler.
      assertClientIsReady();

      // Create a new AbortSignal with a timeout for the Redis operation.
      // By default, redis client operations will wait indefinitely.
      const options = commandOptions({ signal: AbortSignal.timeout(timeoutMs) });
      console.log("getkey:" + keyPrefix + key);

      // Get the value from Redis.
      // We use the key prefix to avoid key collisions with other data in Redis.
      const result = await client.get(options, keyPrefix + key);

      // If the key does not exist, return null.
      if (!result) {
        console.log("getkey result is null, return null:" + keyPrefix + key);
        return null;
      }

      // Redis stores strings, so we need to parse the JSON.
      const cacheValue = JSON.parse(result);

      // If the parsed value is empty, return null.
      if (!cacheValue) {
        console.log("getkey cachevalue is null, return null:" + keyPrefix + key);
        return null;
      }

      // Return the cache value.
      return cacheValue;
    },
    async set(key, cacheHandlerValue) {
      // Ensure that the client is ready before using it.
      assertClientIsReady();

      // Create a new AbortSignal with a timeout for the Redis operation.
      const options = commandOptions({ signal: AbortSignal.timeout(timeoutMs) });
      const setOperation = client.set(options, keyPrefix + key, JSON.stringify(cacheHandlerValue));
      console.log("set operation successful:" + keyPrefix + key);

      // Also record the value's tags in a shared hash so revalidateTag can find the keys later
      // (this follows the library's redis-strings handler example).
      const setTagsOperation = cacheHandlerValue.tags.length
        ? client.hSet(options, keyPrefix + sharedTagsKey, key, JSON.stringify(cacheHandlerValue.tags))
        : undefined;

      // Wait for all operations to complete.
      await Promise.all([setOperation, setTagsOperation]);
    },

    async revalidateTag(tag) {
      console.log("revalidate tag:" + tag);

      // Ensure that the client is ready before using it.
      assertClientIsReady();
      const options = commandOptions({ signal: AbortSignal.timeout(timeoutMs) });

      // Implicit tags are only marked as revalidated in a separate hash.
      if (isImplicitTag(tag)) {
        await client.hSet(options, revalidatedTagsKey, tag, String(Date.now()));
      }

      // Find every cached key whose stored tags include the revalidated tag and delete it.
      const remoteTags = await client.hGetAll(options, keyPrefix + sharedTagsKey);
      const keysToDelete = [];
      const tagsToDelete = [];
      for (const [key, tagsString] of Object.entries(remoteTags)) {
        // If the value's tags include the specified tag, delete this entry.
        if (JSON.parse(tagsString).includes(tag)) {
          keysToDelete.push(keyPrefix + key);
          tagsToDelete.push(key);
        }
      }
      if (keysToDelete.length > 0) {
        await Promise.all([
          client.unlink(options, keysToDelete),
          client.hDel(options, keyPrefix + sharedTagsKey, tagsToDelete),
        ]);
      }
    },
  };
  return {
    // The order of the handlers is important.
    // The CacheHandler will run get methods in the order of the handlers array.
    // Other methods will be run in parallel.
    handlers: [customRedisHandler],
  };
});

module.exports = CacheHandler;
With the rise of headless websites built using Next.js, enterprises should consider a number of hosting options, such as Vercel, Netlify, AWS Amplify, or self-hosting.
While Vercel and Netlify may seem like the obvious choices, self-hosting can be an option for better cost control and infrastructure management. We’ve found that self-hosting on AWS can reduce loading times by up to 90% and markedly enhance the user experience.
For enterprises that don’t want to manage this self-hosting themselves but would prefer the cost savings it provides, Oshyn is an enterprise digital agency adept at helping businesses create digital experiences that connect with their customers.
With our DevOps expertise and Oshyn’s head hosting service, we can help set up and host headless websites built on Next.js while optimizing cloud spending.
If you currently use AWS for your cloud infrastructure, read our blog post about Optimizing Your AWS Bill.
The following Terraform scripts can be used to replicate our approach. They are divided into these files:
main.tf
provider "aws" {
region = var.aws_region
}
variables.tf - for holding variables
variable "customerId" {
description = "Customer Id"
type = string
default = "testcustomer"
}
variable "aws_region" {
description = "AWS region"
type = string
default = "us-west-1"
}
variable "oshyn_amplify_tag" {
description = "sample tag for oshyn amplify"
type = string
default = "Oshyn-Amplify"
}
#lambda variables
variable "lambda_function_suffix" {
description = "lambda function suffix"
type = string
default = "oshyn-amplify-lambda-function"
}
variable "subnet_id" {
description = "subnet id"
type = string
default = "subnet-0eae8akjihgfeabcd"
}
variable "lambda_function_alias_name"{
description = "lambda function alias name"
type = string
default = "LiveAlias"
}
variable "lambda_function_log_group_suffix" {
description = "lambda function log group suffix"
type = string
default = "oshyn-amplify-lambda-function-log-group"
}
variable "lambda_role_suffix" {
description = "lambda role suffix"
type = string
default = "oshyn-amplify-lambda-role"
}
variable "lambda_logging_suffix" {
description = "lambda logging suffix"
type = string
default = "oshyn-amplify-lambda-logging"
}
variable "lambda_update_policy_suffix" {
description = "lambda function update policy"
type = string
default = "oshyn-amplify-lambda-update-policy"
}
variable "lamdba_adapter_layer_arn"{
description = "lambda adapter layer arn value"
type = string
default = "arn:aws:lambda:us-west-1:753240598075:layer:LambdaAdapterLayerX86:13"
}
#api-gateway variables
variable "apigateway_suffix" {
description = "customer oshyn amplify api gateway suffix"
type = string
default = "oshyn-amplify-api-gateway"
}
variable "api-gatway-prod-stage-name" {
description = "sample prod-stagename for api gateway"
type = string
default = "Prod"
}
variable "api-gatway-cloudwatch-global" {
description = "apigateway cloudwatch global role name"
type = string
default = "api-gateway-cloudwatch-global"
}
#cloudfront variables
variable "cloudfront-price-class"{
description = "cloudfront price class"
type = string
default = "PriceClass_100"
}
#redis cache variables
variable "redis_cache_suffix" {
description = "redis cache suffix"
type = string
default = "oshyn-amplify-redis-cache"
}
variable "event_rule_suffix" {
description = "event rule suffix"
type = string
default = "oshyn-amplify-event-trigger-rule"
}
variable "security_group_id" {
description = "security group id"
type = string
default = "sg-06d6cfa9aaf25abcd"
}
data.tf - for holding data resources
data "aws_cloudfront_cache_policy" "aws-managed-caching-optimized" {
name = "Managed-CachingOptimized"
}
data "aws_cloudfront_origin_request_policy" "aws-managed-all-viewer-except-host-header" {
name = "Managed-AllViewerExceptHostHeader"
}
data "aws_iam_policy" "AWSLambdaVPCAccessExecutionRole"{
name = "AWSLambdaVPCAccessExecutionRole"
}
data "aws_elasticache_subnet_group" "subnet-redis" {
name = "subnet-redis"
}
output.tf - for holding output variables
# there are other outputs, but they are truncated for brevity
output "aws-managed-caching-optimized-id" {
  value = data.aws_cloudfront_cache_policy.aws-managed-caching-optimized.id
}

output "aws_lambda_function_invoke_arn" {
  value = aws_lambda_function.customer-lambda-function.invoke_arn
}

output "aws_lambda_function_version" {
  value = aws_lambda_function.customer-lambda-function.version
}

output "aws_lambda_function_alias_arn" {
  value = aws_lambda_alias.customer-lambda-function-alias.invoke_arn
}
s3.tf - S3 bucket
resource "aws_s3_bucket" "customer-oshyn-amplify-s3" {
bucket_prefix = format("%s-%s",var.customerId,"oshyn-amplify-s3-")
tags = {
Oshyn-Amplify = var.oshyn_amplify_tag
}
}
resource "aws_s3_bucket_public_access_block" "customer-oshyn-amplify-s3-public-access" {
bucket = aws_s3_bucket.customer-oshyn-amplify-s3.id
block_public_acls = true
block_public_policy = true
ignore_public_acls = true
restrict_public_buckets = true
}
resource "aws_s3_bucket_ownership_controls" "customer-oshyn-amplify-s3-ownership" {
bucket = aws_s3_bucket.customer-oshyn-amplify-s3.id
rule {
object_ownership = "BucketOwnerEnforced"
}
}
redis.tf - Redis instance
resource "aws_elasticache_replication_group" "customer-redis-cluster" {
engine = "redis"
node_type = "cache.t3.small"
parameter_group_name = "default.redis7"
engine_version = "7.1"
port = 6379
replication_group_id = format("%s-%s-2",var.customerId,var.redis_cache_suffix)
description = "redis cache for ${var.customerId}"
subnet_group_name = data.aws_elasticache_subnet_group.subnet-redis.name
security_group_ids = [var.security_group_id]
multi_az_enabled = false
transit_encryption_enabled = false
automatic_failover_enabled = true
network_type = "ipv4"
at_rest_encryption_enabled = false
num_node_groups = 1
replicas_per_node_group = 2
tags = {
Oshyn-Amplify = var.oshyn_amplify_tag
}
apply_immediately = true
log_delivery_configuration {
destination = aws_cloudwatch_log_group.slow_log.name
destination_type = "cloudwatch-logs"
log_format = "json"
log_type = "slow-log"
}
log_delivery_configuration {
destination = aws_cloudwatch_log_group.engine_log.name
destination_type = "cloudwatch-logs"
log_format = "json"
log_type = "engine-log"
}
}
resource "aws_cloudwatch_log_group" "slow_log"{
name = format("elasticache/%s-%s-2/slow-log",var.customerId,var.redis_cache_suffix)
}
resource "aws_cloudwatch_log_group" "engine_log"{
name = format("elasticache/%s-%s-2/engine-log",var.customerId,var.redis_cache_suffix)
}
lambdafunction.tf - Lambda function
resource "aws_lambda_function" "customer-lambda-function" {
function_name = format("%s-%s",var.customerId,var.lambda_function_suffix)
runtime = "nodejs18.x"
role = aws_iam_role.customer-lambda-role.arn
handler = "run.sh"
filename = "Deployment4.zip"
memory_size = 1024
publish = "true"
layers = [ var.lamdba_adapter_layer_arn ]
tags = {
Oshyn-Amplify = var.oshyn_amplify_tag
}
vpc_config {
ipv6_allowed_for_dual_stack = false
security_group_ids = [var.security_group_id]
subnet_ids = [var.subnet_id]
}
environment {
variables = {
AWS_LAMBDA_EXEC_WRAPPER = "/opt/bootstrap",
PORT = 8080
}
}
depends_on = [
aws_iam_role_policy_attachment.customer-lambda-logs,
]
}
resource "aws_lambda_alias" "customer-lambda-function-alias" {
name = var.lambda_function_alias_name
function_name = aws_lambda_function.customer-lambda-function.function_name
function_version = aws_lambda_function.customer-lambda-function.version
}
resource "aws_cloudwatch_log_group" "customer-lambda-function-log-group" {
name = "/aws/lambda/${aws_lambda_function.customer-lambda-function.function_name}"
tags = {
Oshyn-Amplify = var.oshyn_amplify_tag
}
retention_in_days = 14
}
resource "aws_iam_policy" "customer-lambda-logging" {
name = format("%s-%s",var.customerId,var.lambda_logging_suffix)
path = "/"
description = "IAM policy for logging from a lambda"
tags = {
Oshyn-Amplify = var.oshyn_amplify_tag
}
policy =
apigateway.tf - for api gateway resource
resource "aws_api_gateway_rest_api" "customer-oshyn-api-gateway" {
name = format("%s-%s",var.customerId,var.apigateway_suffix)
description = "This is customer - ${var.customerId} -api gateway"
tags = {
Oshyn-Amplify = var.oshyn_amplify_tag
}
}
resource "aws_cloudwatch_log_group" "customer-oshyn-api-gateway-log-group" {
name = "API-Gateway-Execution-Logs_${aws_api_gateway_rest_api.customer-oshyn-api-gateway.id}/${var.api-gatway-prod-stage-name}"
tags = {
Oshyn-Amplify = var.oshyn_amplify_tag
}
retention_in_days = 14
}
resource "aws_api_gateway_method" "AnyMethodForRootPath" {
rest_api_id = aws_api_gateway_rest_api.customer-oshyn-api-gateway.id
resource_id = aws_api_gateway_rest_api.customer-oshyn-api-gateway.root_resource_id
http_method = "ANY"
authorization = "NONE"
}
resource "aws_api_gateway_resource" "customer-oshyn-api-gateway-resource" {
rest_api_id = aws_api_gateway_rest_api.customer-oshyn-api-gateway.id
parent_id = aws_api_gateway_rest_api.customer-oshyn-api-gateway.root_resource_id
path_part = "{proxy+}"
}
resource "aws_api_gateway_method" "AnyMethodForProxyPath" {
rest_api_id = aws_api_gateway_rest_api.customer-oshyn-api-gateway.id
resource_id = aws_api_gateway_resource.customer-oshyn-api-gateway-resource.id
http_method = "ANY"
authorization = "NONE"
}
resource "aws_api_gateway_integration" "integrationForRootPath" {
rest_api_id = aws_api_gateway_rest_api.customer-oshyn-api-gateway.id
resource_id = aws_api_gateway_rest_api.customer-oshyn-api-gateway.root_resource_id
http_method = aws_api_gateway_method.AnyMethodForRootPath.http_method
content_handling = "CONVERT_TO_TEXT"
integration_http_method = "POST"
type = "AWS_PROXY"
uri = aws_lambda_alias.customer-lambda-function-alias.invoke_arn
timeout_milliseconds = 29000
}
resource "aws_api_gateway_integration" "integrationForProxyPath" {
rest_api_id = aws_api_gateway_rest_api.customer-oshyn-api-gateway.id
resource_id = aws_api_gateway_resource.customer-oshyn-api-gateway-resource.id
http_method = aws_api_gateway_method.AnyMethodForRootPath.http_method
content_handling = "CONVERT_TO_TEXT"
integration_http_method = "POST"
type = "AWS_PROXY"
uri = aws_lambda_alias.customer-lambda-function-alias.invoke_arn
timeout_milliseconds = 29000
}
resource "aws_api_gateway_deployment" "customer-oshyn-amplify-gateway-stage-deployment" {
rest_api_id = aws_api_gateway_rest_api.customer-oshyn-api-gateway.id
triggers = {
redeployment = sha1(jsonencode([aws_api_gateway_rest_api.customer-oshyn-api-gateway.body,
aws_api_gateway_method.AnyMethodForRootPath.id,
aws_api_gateway_integration.integrationForRootPath.id,
aws_api_gateway_integration.integrationForRootPath.uri
]))
}
lifecycle {
create_before_destroy = true
}
}
resource "aws_api_gateway_stage" "Prod" {
deployment_id = aws_api_gateway_deployment.customer-oshyn-amplify-gateway-stage-deployment.id
rest_api_id = aws_api_gateway_rest_api.customer-oshyn-api-gateway.id
stage_name = var.api-gatway-prod-stage-name
depends_on = [aws_cloudwatch_log_group.customer-oshyn-api-gateway-log-group]
}
resource "aws_api_gateway_stage" "Stage" {
deployment_id = aws_api_gateway_deployment.customer-oshyn-amplify-gateway-stage-deployment.id
rest_api_id = aws_api_gateway_rest_api.customer-oshyn-api-gateway.id
stage_name = "Stage"
}
resource "aws_lambda_permission" "lambda_alias_permission_for_proxy" {
statement_id = "AllowExecutionFromAPIGatewayForProxyForAlias"
action = "lambda:InvokeFunction"
function_name = aws_lambda_alias.customer-lambda-function-alias.function_name
qualifier = aws_lambda_alias.customer-lambda-function-alias.name
principal = "apigateway.amazonaws.com"
# The /* part allows invocation from any stage, method and resource path
# within API Gateway.
source_arn = "${aws_api_gateway_rest_api.customer-oshyn-api-gateway.execution_arn}/*/*/*"
}
resource "aws_lambda_permission" "lambda_alias_permission_for_root" {
statement_id = "AllowExecutionFromAPIGatewayForRootForAlias"
action = "lambda:InvokeFunction"
function_name = aws_lambda_alias.customer-lambda-function-alias.function_name
qualifier = aws_lambda_alias.customer-lambda-function-alias.name
principal = "apigateway.amazonaws.com"
# The /* part allows invocation from any stage, method and resource path
# within API Gateway.
source_arn = "${aws_api_gateway_rest_api.customer-oshyn-api-gateway.execution_arn}/*/*/"
}
resource "aws_api_gateway_method_settings" "Prod_Stage_settings" {
rest_api_id = "${aws_api_gateway_rest_api.customer-oshyn-api-gateway.id}"
stage_name = "${var.api-gatway-prod-stage-name}"
method_path = "*/*"
settings {
logging_level = "INFO"
data_trace_enabled = true
metrics_enabled = true
}
}
apigateway-logging.tf - for api gateway logging
resource "aws_api_gateway_account" "api-gateway-cloudwatch-account" {
cloudwatch_role_arn = aws_iam_role.api-gateway-cloudwatch-global.arn
}
data "aws_iam_policy_document" "api-gateway-assume-role" {
statement {
effect = "Allow"
principals {
type = "Service"
identifiers = ["apigateway.amazonaws.com"]
}
actions = ["sts:AssumeRole"]
}
}
resource "aws_iam_role" "api-gateway-cloudwatch-global" {
name = var.api-gatway-cloudwatch-global
assume_role_policy = data.aws_iam_policy_document.api-gateway-assume-role.json
}
data "aws_iam_policy_document" "cloudwatch-policy-document" {
statement {
effect = "Allow"
actions = [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:DescribeLogGroups",
"logs:DescribeLogStreams",
"logs:PutLogEvents",
"logs:GetLogEvents",
"logs:FilterLogEvents",
]
resources = ["*"]
}
}
resource "aws_iam_role_policy" "cloudwatch-role-policy" {
name = "default"
role = aws_iam_role.api-gateway-cloudwatch-global.id
policy = data.aws_iam_policy_document.cloudwatch-policy-document.json
}
cloudfront.tf - Cloudfront resource
resource "aws_cloudfront_origin_access_control" "s3originacl" {
name = "s3originacl"
description = "S3 origin acl"
origin_access_control_origin_type = "s3"
signing_behavior = "always"
signing_protocol = "sigv4"
}
resource "aws_cloudfront_distribution" "customer-cloudfront-distribution" {
comment = "Cloudfront distribution for ${var.customerId}'s Oshyn Amplify"
origin {
domain_name = aws_s3_bucket.customer-oshyn-amplify-s3.bucket_regional_domain_name
origin_access_control_id = aws_cloudfront_origin_access_control.s3originacl.id
origin_id = aws_s3_bucket.customer-oshyn-amplify-s3.id
}
origin {
domain_name = "${aws_api_gateway_rest_api.customer-oshyn-api-gateway.id}.execute-api.us-west-1.amazonaws.com"
origin_id = aws_api_gateway_rest_api.customer-oshyn-api-gateway.id
origin_path = "/${var.api-gatway-prod-stage-name}"
custom_origin_config {
http_port = "80"
https_port = "443"
origin_protocol_policy = "https-only"
origin_ssl_protocols = ["TLSv1.2"]
}
}
enabled = true
is_ipv6_enabled = true
#aliases = ["mysite.example.com", "yoursite.example.com"]
default_cache_behavior {
allowed_methods = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
cached_methods = ["GET", "HEAD"]
target_origin_id = aws_api_gateway_rest_api.customer-oshyn-api-gateway.id
compress = true
cache_policy_id = data.aws_cloudfront_cache_policy.aws-managed-caching-optimized.id
origin_request_policy_id = data.aws_cloudfront_origin_request_policy.aws-managed-all-viewer-except-host-header.id
viewer_protocol_policy = "redirect-to-https"
#min_ttl = 0
#default_ttl = 300
#max_ttl = 900
}
  # Cache behavior with precedence 0
  ordered_cache_behavior {
    path_pattern           = "/_next/static/*"
    allowed_methods        = ["GET", "HEAD", "OPTIONS"]
    cached_methods         = ["GET", "HEAD", "OPTIONS"]
    target_origin_id       = aws_s3_bucket.customer-oshyn-amplify-s3.id
    cache_policy_id        = data.aws_cloudfront_cache_policy.aws-managed-caching-optimized.id
    #min_ttl = 0
    #default_ttl = 3600
    #max_ttl = 86400
    compress               = true
    viewer_protocol_policy = "https-only"
  }

  # Cache behavior with precedence 1
  ordered_cache_behavior {
    path_pattern           = "/static/*"
    allowed_methods        = ["GET", "HEAD", "OPTIONS"]
    cached_methods         = ["GET", "HEAD"]
    target_origin_id       = aws_s3_bucket.customer-oshyn-amplify-s3.id
    cache_policy_id        = data.aws_cloudfront_cache_policy.aws-managed-caching-optimized.id
    #min_ttl = 0
    #default_ttl = 3600
    #max_ttl = 86400
    compress               = true
    viewer_protocol_policy = "https-only"
  }

  ordered_cache_behavior {
    path_pattern           = "/nextjsassets*"
    allowed_methods        = ["GET", "HEAD", "OPTIONS"]
    cached_methods         = ["GET", "HEAD", "OPTIONS"]
    target_origin_id       = aws_s3_bucket.customer-oshyn-amplify-s3.id
    cache_policy_id        = data.aws_cloudfront_cache_policy.aws-managed-caching-optimized.id
    #min_ttl = 0
    #default_ttl = 3600
    #max_ttl = 86400
    compress               = true
    viewer_protocol_policy = "https-only"
  }

  price_class = var.cloudfront-price-class

  restrictions {
    geo_restriction {
      restriction_type = "whitelist"
      locations        = ["US"]
    }
  }

  tags = {
    Oshyn-Amplify = var.oshyn_amplify_tag
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
data "aws_iam_policy_document" "cloudfront-s3-bucket-policy-document" {
statement {
sid = "AllowCloudFrontServicePrincipal"
actions = ["s3:GetObject"]
resources = ["${aws_s3_bucket.customer-oshyn-amplify-s3.arn}/*"]
effect = "Allow"
principals {
type = "Service"
identifiers = ["cloudfront.amazonaws.com"]
}
condition {
test = "StringEquals"
variable = "AWS:SourceARN"
values = [aws_cloudfront_distribution.customer-cloudfront-distribution.arn]
}
}
}
resource "aws_s3_bucket_policy" "s3-cloudfront-bucket-policy" {
bucket = aws_s3_bucket.customer-oshyn-amplify-s3.id
policy = data.aws_iam_policy_document.cloudfront-s3-bucket-policy-document.json
}