Questions tagged with AWS CloudFormation
```
3:54:27 PM | CREATE_FAILED | AWS::S3::Bucket | loggingBucketE84AEEE7
acda-server-access-logging-all-buckets-beta-feamazon already exists
new Bucket (/Users/xiaouye/workspace/AlexaCustomerDataAggregatorCDK/src/AlexaCustomerDataAggregatorCDK/node_modules/monocdk/lib/aws-s3/lib/bucket.js:738:26)
\_ RequestCallbackStack.createLoggingBucket (/Users/xiaouye/workspace/AlexaCustomerDataAggregatorCDK/src/AlexaCustomerDataAggregatorCDK/dist/lib/stack/callback.js:106:24)
\_ new RequestCallbackStack (/Users/xiaouye/workspace/AlexaCustomerDataAggregatorCDK/src/AlexaCustomerDataAggregatorCDK/dist/lib/stack/callback.js:38:35)
\_ Object.<anonymous> (/Users/xiaouye/workspace/AlexaCustomerDataAggregatorCDK/src/AlexaCustomerDataAggregatorCDK/dist/lib/app.js:99:31)
\_ Module._compile (internal/modules/cjs/loader.js:1085:14)
\_ Object.Module._extensions..js (internal/modules/cjs/loader.js:1114:10)
\_ Module.load (internal/modules/cjs/loader.js:950:32)
\_ Function.Module._load (internal/modules/cjs/loader.js:790:12)
\_ Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:75:12)
\_ internal/main/run_main_module.js:17:47
❌ AlexaCustomerDataAggregatorCDK-Callback-beta-PDX failed: Error: The stack named AlexaCustomerDataAggregatorCDK-Callback-beta-PDX failed to deploy: UPDATE_ROLLBACK_COMPLETE: acda-server-access-logging-all-buckets-beta-feamazon already exists
at prepareAndExecuteChangeSet (/Users/xiaouye/workspace/AlexaCustomerDataAggregatorCDK/build/AlexaCustomerDataAggregatorCDK/AlexaCustomerDataAggregatorCDK-1.0/AL2_x86_64/DEV.STD.PTHREAD/build/private/cdk-cli/node_modules/aws-cdk/lib/api/deploy-stack.ts:385:13)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
at CdkToolkit.deploy (/Users/xiaouye/workspace/AlexaCustomerDataAggregatorCDK/build/AlexaCustomerDataAggregatorCDK/AlexaCustomerDataAggregatorCDK-1.0/AL2_x86_64/DEV.STD.PTHREAD/build/private/cdk-cli/node_modules/aws-cdk/lib/cdk-toolkit.ts:209:24)
at initCommandLine (/Users/xiaouye/workspace/AlexaCustomerDataAggregatorCDK/build/AlexaCustomerDataAggregatorCDK/AlexaCustomerDataAggregatorCDK-1.0/AL2_x86_64/DEV.STD.PTHREAD/build/private/cdk-cli/node_modules/aws-cdk/lib/cli.ts:341:12)
```
We have a deployment issue in our beta environment: we have a problem creating this S3 bucket. The sequence of events was:
1. Deployment from a dev host. This deployment succeeded; however, the change skipped the update and modified some earlier resources, as shown below:
```
│ + │ ${loggingBucket.Arn}/* │ Allow │ s3:PutObject │ Service:logging.s3.amazonaws.com │ │
│ │ │ │ s3:PutObjectAcl │
```
2. I then found that the function change would not work, so I flushed the manual deployment by deploying through the pipeline.
3. However, this build failed; see the pipeline output above.
Is there any way we can recover this deployment without deleting any of the current resources? Why do we keep trying to create resources that already exist?
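For what it's worth, the stack ended in `UPDATE_ROLLBACK_COMPLETE` because the synthesized template tries to create a bucket whose name (`acda-server-access-logging-all-buckets-beta-feamazon`) already exists in the account. One common way to recover without deleting anything is to reference the existing bucket instead of declaring a new one. A rough sketch in monocdk (the import path and surrounding names are assumptions, not taken from your code):
```
import * as s3 from 'monocdk/aws-s3';

// Hypothetical sketch: instead of
//   new s3.Bucket(this, 'loggingBucket', { bucketName: '...' })
// look up the bucket that already exists, so CloudFormation
// does not try to create it a second time.
const loggingBucket = s3.Bucket.fromBucketName(
  this,
  'loggingBucket',
  'acda-server-access-logging-all-buckets-beta-feamazon',
);
```
The imported bucket can then be passed wherever the construct previously used the newly created one.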
Can Cloud Intelligence Dashboards be implemented with Terraform? Are there any templates that customers can use to deploy the dashboards using Terraform instead of CloudFormation?
We are working through the workshop at https://github.com/aws-samples/aws-genomics-nextflow-workshop and have used the following CloudFormation templates for resource creation:
• https://console.aws.amazon.com/cloudformation/home?#/stacks/new?stackName=Nextflow&templateURL=https://s3.amazonaws.com/pwyming-demo-templates/nextflow-workshop/cloud9.cfn.yaml
• https://console.aws.amazon.com/cloudformation/home?#/stacks/new?stackName=Nextflow&templateURL=https://s3.amazonaws.com/pwyming-demo-templates/nextflow-workshop/nextflow/nextflow-aio.template.yaml
After creating all resources and following all the steps, we ran the bash command `nextflow run hello` in Cloud9 from the GitHub source code (docs/modules/module-1__running-nextflow.md). The AWS Batch job appears as "Runnable" in the AWS Batch dashboard but never moves to the "Running" state.
Expected Result: Process the .fastq files and get a result.
I have a CloudFormation template that creates an Auto Scaling group (AWS::AutoScaling::AutoScalingGroup) using a launch configuration (AWS::AutoScaling::LaunchConfiguration). I'm trying to convert it to use a launch template (AWS::EC2::LaunchTemplate). I've updated the CloudFormation template, creating a new LaunchTemplate resource. I created a CloudFormation change set and I see that the launch configuration will be deleted, a launch template will be created, and the Auto Scaling group will be updated. I used the same parameters as the existing CloudFormation template. The error occurs when I attempt to execute the change set.
```
You must use a valid fully-formed launch template. The parameter groupName cannot be used with the parameter subnet
```
I'm supplying the security group ID via a parameter:
```
SecurityGroups:
- !Ref InstanceSecurityGroup
```
If I switch to using the group name, I get a different error regarding default VPCs.
Looking at a partial execution, I see that the security group ID is listed in the launch template under "Security Groups" and not under "Security Group IDs", which is where I would expect it.
How can I update my CloudFormation template to use a LaunchTemplate?
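The symptom suggests the reference is landing in the launch template's `SecurityGroups` field (group names, which implies EC2-Classic or the default VPC) rather than `SecurityGroupIds`. When the instance launches into a subnet, the IDs belong under `LaunchTemplateData.SecurityGroupIds`. A minimal sketch (the resource and parameter names are assumptions):
```
LaunchTemplate:
  Type: AWS::EC2::LaunchTemplate
  Properties:
    LaunchTemplateData:
      ImageId: !Ref ImageId
      InstanceType: !Ref InstanceType
      # Use SecurityGroupIds (IDs), not SecurityGroups (names).
      # SecurityGroups is for the default VPC and conflicts with
      # specifying a subnet, which matches the error seen here.
      SecurityGroupIds:
        - !Ref InstanceSecurityGroup
```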
I am looking to have MySQL installed if I set the Environment parameter to prod, and MariaDB installed if it is dev, from the user data script of this CloudFormation template, but it is not happening. Please guide me on how to do this.
AWSTemplateFormatVersion: "2010-09-09"
Parameters:
Environment:
Description: "The environment to deploy to (dev or prod)"
Type: String
Default: "dev"
Resources:
EC2Instance:
Type: "AWS::EC2::Instance"
Properties:
InstanceType: "t2.micro"
ImageId: "ami-0f8ca728008ff5af4"
KeyName: "devops"
SecurityGroupIds:
- "sg-02464c840862fddaf"
SubnetId: "subnet-0b2bbe1a860c1ec8f"
UserData: !Base64 |
#!/bin/bash
if [ "${Environment}" == "prod" ]; then
# Install MySQL on production instances
sudo apt-get update
sudo apt install mysql-server -y
sudo systemctl restart mysql
sudo systemctl enable mysql
elif [ "${Environment}" == "dev" ]; then
# Install MariaDB on development instances
sudo apt-get update
sudo apt install mariadb-server mariadb-client -y
sudo systemctl enable mariadb
fi
Tags:
- Key: "Name"
Value: "MyNewInstance"
When I run a CloudFormation deploy using a template with API Gateway resources, the first run creates the resources and deploys to the stages. On subsequent runs it updates the resources but doesn't deploy to the stages. How can I solve this issue?
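This is usually because an `AWS::ApiGateway::Deployment` resource is only acted on when it changes; updating other API resources leaves the existing deployment untouched. A common workaround is to force a new deployment resource whenever the API changes, e.g. by bumping its logical ID on each revision. A sketch (resource names and the date suffix are assumptions):
```
# Rename the logical ID (e.g. ApiDeployment20240101 -> ApiDeployment20240215)
# whenever the API changes; CloudFormation then creates a fresh deployment
# and points the stage at it.
ApiDeployment20240101:
  Type: AWS::ApiGateway::Deployment
  Properties:
    RestApiId: !Ref MyRestApi
    StageName: prod
```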
Hi AWS, I am trying to impose a condition on the S3 `BucketEncryption` property to choose between a customer managed key (SSE-KMS) and an AWS managed key (SSE-S3). The code for the template is:
```
# version: 1.0
AWSTemplateFormatVersion: "2010-09-09"
Description: Create standardized S3 bucket using CloudFormation Template
Parameters:
BucketName:
Type: String
Description: "Name of the S3 bucket"
KMSKeyArn:
Type: String
Description: "KMS Key Arn to encrypt S3 bucket"
Default: ""
SSEAlgorithm:
Type: String
Description: "Encryption algorithm for KMS"
AllowedValues:
- aws:kms
- AES256
Conditions:
KMSKeysProvided: !Not [!Equals [!Ref KMSKeyArn, ""]]
Resources:
S3Bucket:
Type: 'AWS::S3::Bucket'
DeletionPolicy: Retain
UpdateReplacePolicy: Retain
Properties:
BucketName: !Ref BucketName
PublicAccessBlockConfiguration:
BlockPublicAcls: true
BlockPublicPolicy: true
IgnorePublicAcls: true
RestrictPublicBuckets: true
BucketEncryption:
ServerSideEncryptionConfiguration:
- !If
- KMSKeysProvided
- ServerSideEncryptionByDefault:
SSEAlgorithm: !Ref SSEAlgorithm
KMSMasterKeyID: !Ref KMSKeyArn
BucketKeyEnabled: true
- !Ref "AWS::NoValue"
```
When I select the SSEAlgorithm as `AES256`, I receive the error **Property ServerSideEncryptionConfiguration cannot be empty**. I know `KMSMasterKeyID` must not be present when the SSEAlgorithm is `AES256`, but I am confused about how to get rid of this error.
Please help.
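When `KMSKeysProvided` is false, the `!If` resolves the entire list item to `AWS::NoValue`, leaving `ServerSideEncryptionConfiguration` as an empty list, which is exactly what the error complains about. A sketch that instead conditions only the KMS-specific properties, so the list always has one item:
```
BucketEncryption:
  ServerSideEncryptionConfiguration:
    - ServerSideEncryptionByDefault:
        SSEAlgorithm: !Ref SSEAlgorithm
        # Only emit KMSMasterKeyID when a key ARN was supplied
        KMSMasterKeyID: !If [KMSKeysProvided, !Ref KMSKeyArn, !Ref "AWS::NoValue"]
      BucketKeyEnabled: !If [KMSKeysProvided, true, false]
```
With `AES256`, `KMSMasterKeyID` is dropped via `AWS::NoValue` and the item remains valid.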
Here is an example of creating a tag for a stack.
const AWS = require('aws-sdk');
const cloudFormation = new AWS.CloudFormation();

const tags = [
  { Key: 'Environment', Value: 'Development' },
];

// Create the stack with tags (stackName and templateUrl are defined elsewhere)
(async () => {
  try {
    const response = await cloudFormation
      .createStack({
        StackName: stackName,
        TemplateURL: templateUrl,
        Tags: tags,
      })
      .promise();
    console.log('Stack creation started:', response.StackId);
  } catch (err) {
    console.error('createStack failed:', err);
  }
})();
When you create tags at the stack level, how do you retrieve them from the stack? What is the API? I did not find anything here: https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-cloudformation/classes/cloudformation.html
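For what it's worth, `DescribeStacks` returns a `Tags` array on each stack in its response, so stack-level tags can be read from there. A small self-contained sketch of extracting tags from a `DescribeStacks`-shaped response (the response object below is made up to mirror the SDK's shape):

```javascript
// Extract the Tags array for a named stack from a DescribeStacks response.
function getStackTags(describeStacksResponse, stackName) {
  const stack = (describeStacksResponse.Stacks || [])
    .find((s) => s.StackName === stackName);
  return stack ? stack.Tags || [] : [];
}

// Example response, shaped like what DescribeStacks returns (values made up):
const response = {
  Stacks: [
    {
      StackName: 'my-stack',
      StackStatus: 'CREATE_COMPLETE',
      Tags: [{ Key: 'Environment', Value: 'Development' }],
    },
  ],
};

console.log(getStackTags(response, 'my-stack'));
// [ { Key: 'Environment', Value: 'Development' } ]
```

In the v2 SDK this would come from `cloudFormation.describeStacks({ StackName: stackName }).promise()`; in v3, from sending a `DescribeStacksCommand`.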
Hi,
I am using the template below to create a new AWS Transfer Family server with a VPC endpoint.
However, we cannot reach the VPC endpoint from other networks; it appears to be a routing issue.
Can anyone please suggest a fix?
**Error Screen shot**

**Successful screen shot - Currently in production**

```
Description: >-
  This template creates an AWS Transfer Family server with users, and deploys a
  VPC and security group, with a pair of public and private subnets spread
  across a single Availability Zone. It deploys an internet gateway, with a
  default route on the public subnets. It deploys a pair of NAT gateways (one
  AZ), and default routes for them in the private subnets.
AWSTemplateFormatVersion: '2010-09-09'
Parameters:
EnvironmentName:
Description: An environment name that is prefixed to resource names
Type: String
VpcCIDR:
Description: Please enter the IP range (CIDR notation) for this VPC
Type: String
Default: 10.192.0.0/16
PublicSubnetCIDR:
Description: Please enter the IP range (CIDR notation) for the public subnet in the first Availability Zone
Type: String
Default: 10.192.10.0/24
PrivateSubnetCIDR:
Description: Please enter the IP range (CIDR notation) for the private subnet in the first Availability Zone
Type: String
Default: 10.192.20.0/24
CreateServer:
AllowedValues:
- 'true'
- 'false'
Type: String
Description: >-
Whether this stack creates a server internally or not. If a server is
created internally, the customer identity provider is automatically
associated with it.
Default: 'true'
Endpointtype:
AllowedValues:
- 'Internal'
- 'Internet facing'
Type: String
Default: 'Internet facing'
Conditions:
CreateServer:
'Fn::Equals':
- Ref: CreateServer
- 'true'
Resources:
VPC:
Type: AWS::EC2::VPC
Properties:
CidrBlock: !Ref VpcCIDR
EnableDnsSupport: true
EnableDnsHostnames: true
Tags:
- Key: Name
Value: !Sub ${EnvironmentName} Resources
CloudWatchLoggingRole:
Description: IAM role used by Transfer to log API requests to CloudWatch
Type: 'AWS::IAM::Role'
Condition: CreateServer
Properties:
AssumeRolePolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Principal:
Service:
- transfer.amazonaws.com
Action:
- 'sts:AssumeRole'
GoldcoastTvodUser:
Type: 'AWS::Transfer::User'
Properties:
HomeDirectory: "/goldcoast-tvod"
HomeDirectoryType: "PATH"
Policy:
'Fn::Sub': |
{
"Version": "2012-10-17",
"Statement": {
"Sid": "AllowFullAccessToBucket",
"Action": "s3:*",
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::goldcoast-tvod",
"arn:aws:s3:::goldcoast-tvod/*"
]
}
}
Role:
'Fn::Sub': 'arn:aws:iam::${AWS::AccountId}:role/TransferManagementRole'
ServerId:
'Fn::GetAtt': TransferServer.ServerId
SshPublicKeys:
- >-
ssh-rsa
AAAAB3
UserName: GoldcoastTvodUser
etcsvoduser:
Type: 'AWS::Transfer::User'
Properties:
HomeDirectory: "/etc-svod"
HomeDirectoryType: "PATH"
Policy:
'Fn::Sub': |
{
"Version": "2012-10-17",
"Statement": {
"Sid": "AllowFullAccessToBucket",
"Action": "s3:*",
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::my-bucket",
"arn:aws:s3:::my-bucket/*"
]
}
}
Role:
'Fn::Sub': 'arn:aws:iam::${AWS::AccountId}:role/TransferManagementRole'
ServerId:
'Fn::GetAtt': TransferServer.ServerId
SshPublicKeys:
- >-
ssh-rsa AAAAB3
UserName: etc-svod-user
etctvoduser:
Type: 'AWS::Transfer::User'
Properties:
HomeDirectory: "/tvn-tvod"
HomeDirectoryType: "PATH"
Policy:
'Fn::Sub': |
{
"Version": "2012-10-17",
"Statement": {
"Sid": "AllowFullAccessToBucket",
"Action": "s3:*",
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::my-bucket",
"arn:aws:s3:::my-bucket/*"
]
}
}
Role:
'Fn::Sub': 'arn:aws:iam::${AWS::AccountId}:role/TransferManagementRole'
ServerId:
'Fn::GetAtt': TransferServer.ServerId
SshPublicKeys:
- >-
ssh-rsa AAAAB3
UserName: etc-tvod-user
lhtcsvoduser:
Type: 'AWS::Transfer::User'
Properties:
HomeDirectory: "/lhtc-svod"
HomeDirectoryType: "PATH"
Policy:
'Fn::Sub': |
{
"Version": "2012-10-17",
"Statement": {
"Sid": "AllowFullAccessToBucket",
"Action": "s3:*",
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::my-bucket",
"arn:aws:s3:::my-bucket/*"
]
}
}
Role:
'Fn::Sub': 'arn:aws:iam::${AWS::AccountId}:role/TransferManagementRole'
ServerId:
'Fn::GetAtt': TransferServer.ServerId
SshPublicKeys:
- >-
ssh-rsa AAAAB3
UserName: lhtc-svod-user
lhtctvoduser:
Type: 'AWS::Transfer::User'
Properties:
HomeDirectory: "/tvn-tvod"
HomeDirectoryType: "PATH"
Policy:
'Fn::Sub': |
{
"Version": "2012-10-17",
"Statement": {
"Sid": "AllowFullAccessToBucket",
"Action": "s3:*",
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::my-bucket",
"arn:aws:s3:::my-bucket/*"
]
}
}
Role:
'Fn::Sub': 'arn:aws:iam::${AWS::AccountId}:role/TransferManagementRole'
ServerId:
'Fn::GetAtt': TransferServer.ServerId
SshPublicKeys:
- >-
ssh-rsa AAAAB3
UserName: lhtc-tvod-user
mastercopyfoleuser:
Type: 'AWS::Transfer::User'
Properties:
HomeDirectory: "/mastercopyfiles"
HomeDirectoryType: "PATH"
Policy:
'Fn::Sub': |
{
"Version": "2012-10-17",
"Statement": {
"Sid": "AllowFullAccessToBucket",
"Action": "s3:*",
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::my-bucket",
"arn:aws:s3:::my-bucket/*"
]
}
}
Role:
'Fn::Sub': 'arn:aws:iam::${AWS::AccountId}:role/TransferManagementRole'
ServerId:
'Fn::GetAtt': TransferServer.ServerId
SshPublicKeys:
- >-
ssh-rsa AAAAB3
UserName: mastercopyfole-user
InternetGateway:
Type: AWS::EC2::InternetGateway
Properties:
Tags:
- Key: Name
Value: !Ref EnvironmentName
InternetGatewayAttachment:
Type: AWS::EC2::VPCGatewayAttachment
Properties:
InternetGatewayId: !Ref InternetGateway
VpcId: !Ref VPC
PublicSubnet:
Type: AWS::EC2::Subnet
Properties:
VpcId: !Ref VPC
AvailabilityZone: !Select [ 0, !GetAZs '' ]
CidrBlock: !Ref PublicSubnetCIDR
MapPublicIpOnLaunch: true
Tags:
- Key: Name
Value: !Sub ${EnvironmentName} Public Subnet
PrivateSubnet:
Type: AWS::EC2::Subnet
Properties:
VpcId: !Ref VPC
AvailabilityZone: !Select [ 0, !GetAZs '' ]
CidrBlock: !Ref PrivateSubnetCIDR
MapPublicIpOnLaunch: false
Tags:
- Key: Name
Value: !Sub ${EnvironmentName} Private Subnet
NatGatewayEIP:
Type: AWS::EC2::EIP
DependsOn: InternetGatewayAttachment
Properties:
Domain: vpc
Tags:
- Key: Name
Value: !Sub ${EnvironmentName} Elastic IP
PublicRouteTable:
Type: AWS::EC2::RouteTable
Properties:
VpcId: !Ref VPC
Tags:
- Key: Name
Value: !Sub ${EnvironmentName} Public Routes
PublicSubnetRouteTableAssociation:
Type: AWS::EC2::SubnetRouteTableAssociation
Properties:
RouteTableId: !Ref PublicRouteTable
SubnetId: !Ref PublicSubnet
PrivateRouteTable:
Type: AWS::EC2::RouteTable
Properties:
VpcId: !Ref VPC
Tags:
- Key: Name
Value: !Sub ${EnvironmentName} Private Routes
PrivateSubnetRouteTableAssociation:
Type: AWS::EC2::SubnetRouteTableAssociation
Properties:
RouteTableId: !Ref PrivateRouteTable
SubnetId: !Ref PrivateSubnet
SecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
GroupName: "Production Security Group"
GroupDescription: "Security Group with inbound and outbound rule"
VpcId: !Ref VPC
SecurityGroupIngress:
- IpProtocol: tcp
FromPort: 80
ToPort: 80
CidrIp: 0.0.0.0/0
- IpProtocol: tcp
FromPort: 22
ToPort: 22
CidrIp: 0.0.0.0/0
- IpProtocol: udp
FromPort: 69
ToPort: 69
CidrIp: 96.47.148.171/32
- IpProtocol: tcp
FromPort: 22
ToPort: 22
CidrIp: 3.16.146.0/29
SecurityGroupEgress:
- IpProtocol: tcp
FromPort: 80
ToPort: 80
CidrIp: 0.0.0.0/0
Tags:
- Key: Name
Value: !Sub ${EnvironmentName}
TfVPCInterfaceEndpoint:
Type: 'AWS::EC2::VPCEndpoint'
Properties:
VpcEndpointType: Interface
ServiceName: !Sub 'com.amazonaws.${AWS::Region}.logs'
VpcId: !Ref VPC
SubnetIds:
- !Ref PublicSubnet
SecurityGroupIds:
- !Ref SecurityGroup
TransferServer:
Type: 'AWS::Transfer::Server'
Condition: CreateServer
Properties:
EndpointType: 'VPC'
SecurityPolicyName: TransferSecurityPolicy-FIPS-2020-06
LoggingRole:
'Fn::GetAtt': CloudWatchLoggingRole.Arn
Tags:
- Key: Name
Value: !Sub ${EnvironmentName} Transferserver
EndpointDetails:
VpcId: !Ref VPC
SubnetIds:
- !Ref PublicSubnet
AddressAllocationIds:
- !GetAtt NatGatewayEIP.AllocationId
Tags:
- Key: Name
Value: !Sub ${EnvironmentName} Transferserver
```
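One thing that stands out in the template itself: `PublicRouteTable` and `PrivateRouteTable` are created, but no `AWS::EC2::Route` resources are defined, so neither subnet has a default route, and the public subnet has no path to the internet gateway. If the endpoint is meant to be reachable from other networks, a default route along these lines may be what is missing (a sketch; the resource name is an assumption):
```
DefaultPublicRoute:
  Type: AWS::EC2::Route
  DependsOn: InternetGatewayAttachment
  Properties:
    RouteTableId: !Ref PublicRouteTable
    DestinationCidrBlock: 0.0.0.0/0
    GatewayId: !Ref InternetGateway
```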
We deploy ECS Fargate using the AWS CLI for the task definition only, and the console for the rest (the cluster, service, containers, and deployment).
One day, I saw that the task definitions had been created as stacks in CloudFormation (failure records were also included).
Searching and reading the official documentation suggests that no stack should be created in CloudFormation. What is the cause, and how can I prevent these stacks from being created?
I created it by referring to the following document.
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-blue-green.html
When Elastic Beanstalk auto-generates resources, the NLB is created with network mappings for subnets using "Assigned by AWS" IPv4 addresses.
How would it be possible to associate an Elastic IP with a Beanstalk environment that uses a Network Load Balancer, for **inbound** traffic? *(This is not to be confused with the [static "source" IP address](https://repost.aws/knowledge-center/elastic-beanstalk-static-IP-address) in Beanstalk.)*
I reviewed the [related CloudFormation resources](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-elasticloadbalancingv2-loadbalancer-subnetmapping.html) to see if or how I can make use of them, but I am not sure whether they are applicable to Elastic Beanstalk environments.
I am making a CloudFormation main.yaml template plus another YAML file that supplies the parameter values, so that I can reuse the template by changing only the values in the other file, but I am facing an issue. Please guide me.
Parameters:
InstanceType:
Description: EC2 instance type
Type: String
Default: t2.micro
AllowedValues: [t2.micro, t2.small, t2.medium, m4.large]
SecurityGroupId:
Description: Security group ID for the EC2 instance
Type: AWS::EC2::SecurityGroup::Id
VpcId:
Description: VPC ID
Type: AWS::EC2::VPC::Id
KeyName:
Description: Name of the key pair to use for SSH access
Type: AWS::EC2::KeyPair::KeyName
SubnetId:
Description: Subnet ID for the EC2 instance
Type: AWS::EC2::Subnet::Id
Resources:
EC2Instance:
Type: AWS::EC2::Instance
Properties:
InstanceType: !Ref InstanceType
SecurityGroupIds: [!Ref SecurityGroupId]
KeyName: !Ref KeyName
SubnetId: !Ref SubnetId
ImageId: ami-0f8ca728008ff5af4
UserData: !Base64
Fn::Sub: |
#!/bin/bash
sudo apt-get update
sudo apt install apache2 -y
sudo systemctl start apache2
sudo systemctl enable apache2
Like the variables we have in Terraform, how do I format and define these values in params.yaml, or take user input?
InstanceType: t2.micro
KeyName: devops
SecurityGroupIds: sg-02464c840862fddaf
SubnetId: subnet-0b2bbe1a860c1ec8f
VpcId: vpc-01491099ac5c6857a
I am facing issues formatting and defining the params.yaml file. Please guide.
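CloudFormation itself does not read an arbitrary values YAML, but the AWS CLI accepts a parameters file. One common shape is a JSON file whose `ParameterKey` entries match the template's parameter names exactly (values below are copied from the question):
```
[
  { "ParameterKey": "InstanceType",    "ParameterValue": "t2.micro" },
  { "ParameterKey": "KeyName",         "ParameterValue": "devops" },
  { "ParameterKey": "SecurityGroupId", "ParameterValue": "sg-02464c840862fddaf" },
  { "ParameterKey": "SubnetId",        "ParameterValue": "subnet-0b2bbe1a860c1ec8f" },
  { "ParameterKey": "VpcId",           "ParameterValue": "vpc-01491099ac5c6857a" }
]
```
Saved as params.json, it can be passed with `aws cloudformation create-stack --stack-name my-stack --template-body file://main.yaml --parameters file://params.json` (the stack name is an assumption). Note the key is `SecurityGroupId` (singular), matching the template's parameter, even though the resource property is `SecurityGroupIds`.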