
Questions tagged with AWS CloudFormation


Best practice for restoring an RDS Aurora snapshot into a CloudFormation-built solution

Hi experts, I'm looking for best practices for restoring data into a CloudFormation-built system. I've got an extensive CloudFormation template that builds a solution, including an RDS Aurora Serverless database cluster. Now I want to restore that RDS cluster from a snapshot.

- I notice that restoring through the console creates a new cluster, which is no longer in the CloudFormation stack and therefore doesn't get updates (plus my existing RDS instance is retained).
- I found the `DBSnapshotIdentifier` property on DBInstance, along with this answer: https://repost.aws/questions/QUGElgNYmhTEGzkgTUVP21oQ/restoring-rds-snapshot-with-cloud-formation. However, I see in the docs that I can never change it after the initial deployment (it seems it will delete the DB if I do, see below). This means I could never restore more than once. https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-rds-database-instance.html#cfn-rds-dbinstance-dbsnapshotidentifier
- I also found a StackOverflow post from 6 years ago with the same question but no real answers: https://stackoverflow.com/questions/32255309/how-do-i-restore-rds-snapshot-into-a-cloudformation

For the `DBSnapshotIdentifier` point above, here's the relevant wording from the docs that concerns me:

> After you restore a DB instance with a DBSnapshotIdentifier property, you must specify the same DBSnapshotIdentifier property for any future updates to the DB instance. When you specify this property for an update, the DB instance is not restored from the DB snapshot again, and the data in the database is not changed. However, if you don't specify the DBSnapshotIdentifier property, an empty DB instance is created, and the original DB instance is deleted.

It seems like this should be simple, but it's not. Please don't tell me I need to fall back to using `mysqlbackup` ¯\\_(ツ)\_/¯

Thanks in advance, Scott
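For reference, a minimal sketch of the cluster-level equivalent: `AWS::RDS::DBCluster` exposes a `SnapshotIdentifier` property that carries the same caveat quoted above (the logical name `RestoredCluster` and the snapshot name below are placeholders, not taken from the question):

```
Resources:
  RestoredCluster:
    Type: AWS::RDS::DBCluster
    Properties:
      Engine: aurora-mysql
      EngineMode: serverless
      # Restores only on first creation; per the docs quoted above, this value
      # must then stay identical on every subsequent stack update.
      SnapshotIdentifier: my-cluster-snapshot-2022-06-01
```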
1 answer · 0 votes · 26 views · asked 2 days ago

RequestParameters for Api Event in Serverless::Function in JSON - how does it work?

I'm trying to add some query string parameters to a Lambda function, using a SAM template written in JSON. All the examples I can find are in YAML. Can anyone point out where I'm going wrong? Here's the snippet of the definition:

```
"AreaGet": {
  "Type": "AWS::Serverless::Function",
  "Properties": {
    "Handler": "SpeciesRecordLambda::SpeciesRecordLambda.Functions::AreaGet",
    "Runtime": "dotnet6",
    "CodeUri": "",
    "MemorySize": 256,
    "Timeout": 30,
    "Role": null,
    "Policies": [
      "AWSLambdaBasicExecutionRole"
    ],
    "Events": {
      "AreaGet": {
        "Type": "Api",
        "Properties": {
          "Path": "/",
          "Method": "GET",
          "RequestParameters": [
            "method.request.querystring.latlonl": { "Required": "true" },
            "method.request.querystring.latlonr": { "Required": "true" }
          ]
        }
      }
    }
  }
},
```

and here's the error message I get:

> Failed to create CloudFormation change set: Transform AWS::Serverless-2016-10-31 failed with: Invalid Serverless Application Specification document. Number of errors found: 1. Resource with id [AreaGet] is invalid. Event with id [AreaGet] is invalid. Invalid value for 'RequestParameters' property. Keys must be in the format 'method.request.[querystring|path|header].{value}', e.g 'method.request.header.Authorization'.

Sorry, I know this is a bit of a beginner's question, but I'm a bit lost as to what to do, as I can't find any information about this using JSON. Maybe you can't do it using JSON? Thanks, Andy.
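For comparison, a hedged sketch of what SAM's list-of-maps shape looks like when written out in JSON: each parameter becomes its own single-key object inside the `RequestParameters` array, with `Required` as a boolean rather than a string (parameter names reused from the question; whether this resolves the transform error depends on the rest of the template):

```
"RequestParameters": [
  { "method.request.querystring.latlonl": { "Required": true } },
  { "method.request.querystring.latlonr": { "Required": true } }
]
```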
1 answer · 0 votes · 30 views · asked 8 days ago

Best practice guidance to avoid "CloudFormation cannot update a stack when a custom-named resource requires replacing"

Hi, Over the years we have taken the approach of naming everything we deploy — it's clean, orderly and unambiguous. Since embracing infrastructure-as-code practices, our CloudFormation recipes have been written to name everything with the project's prefix and stage. For example, a VPC will be deployed as `projectname-vpc-dev`, and its subnets will be `projectname-subnet-a-dev`, etc.

Unfortunately, it seems some AWS resources won't update via CloudFormation if they are named — CloudFormation returns an error like this:

> `CloudFormation cannot update a stack when a custom-named resource requires replacing. Rename <name> and update the stack again.`

How should we best overcome this? Should we simply avoid naming things? Can we use tags instead to avoid this? What's best practice?

For reference, here's a snippet of CloudFormation that appears to be causing the issue above (with serverless.yml variables):

```
Type: AWS::EC2::SecurityGroup
Properties:
  GroupName: projectname-dev
  GroupDescription: Security group for projectname-dev
  ...
```

I also had the same problem previously with `AWS::RDS::DBCluster` for `DBClusterIdentifier`.

Generally speaking, how do I know which CloudFormation settings block stack updates like this? It feels like a bit of whack-a-mole at present. For the above example, the docs at https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-security-group.html say nothing of this behaviour, but they do say "Update requires: Replacement" against the fields `GroupName` and `GroupDescription`. Is that what I need to look out for, or is that something different again?

Thanks in advance... Scott
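As an illustration of the tag-based alternative (a sketch only, reusing the group from the question): leaving `GroupName` out lets CloudFormation generate a unique physical name, so replacement can proceed, while a `Name` tag keeps things readable in the console.

```
Type: AWS::EC2::SecurityGroup
Properties:
  # No GroupName: CloudFormation generates a unique name and can replace the group freely.
  GroupDescription: Security group for projectname-dev
  Tags:
    - Key: Name
      Value: projectname-dev
```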
1 answer · 0 votes · 19 views · asked 9 days ago

Issues Creating MediaConnect Flows with CloudFormation Template

Hi, I'm struggling to create MediaConnect flows using CloudFormation where the ingest protocol is not zixi-push. The documentation does state that srt-listener is not supported via CloudFormation (reference: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-mediaconnect-flowsource.html#cfn-mediaconnect-flowsource-maxlatency), but I'm trying "rtp" and it fails. Additionally, the error message is not very helpful: "Error occurred during operation 'AWS::MediaConnect::Flow'." (RequestToken: <redacted>, HandlerErrorCode: GeneralServiceException)

A working template (using Zixi on port 2088) looks like this:

```
{
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Media Connect Flow Test",
    "Resources": {
        "MediaConnectFlowA": {
            "Type": "AWS::MediaConnect::Flow",
            "Properties": {
                "Name": "WIZARDA",
                "AvailabilityZone": "eu-west-1b",
                "Source": {
                    "Name": "WIZARDASource",
                    "StreamId": "A",
                    "Description": "Media Connect Flow Test - WIZARDA",
                    "Protocol": "zixi-push",
                    "IngestPort": 2088,
                    "WhitelistCidr": "<redacted>/32"
                }
            }
        }
    }
}
```

but keeping the protocol as Zixi and changing the ingress port results in failure (this could be by design, I guess, as it's a non-standard Zixi port). Similarly, and more importantly for what I want to do, changing the protocol to "rtp" fails, e.g.:

```
{
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Media Connect Flow Test",
    "Resources": {
        "MediaConnectFlowA": {
            "Type": "AWS::MediaConnect::Flow",
            "Properties": {
                "Name": "WIZARDA",
                "AvailabilityZone": "eu-west-1b",
                "Source": {
                    "Name": "WIZARDASource",
                    "StreamId": "A",
                    "Description": "Media Connect Flow Test - WIZARDA",
                    "Protocol": "rtp",
                    "IngestPort": 2088,
                    "WhitelistCidr": "<redacted>/32"
                }
            }
        }
    }
}
```

Can anyone advise on the right construct to create a flow with an RTP source? (rtp-fec also failed.)

For completeness, I've run this via the console and also using the CLI, e.g.:

```
aws --profile=aws-course --region=eu-west-1 cloudformation create-stack --stack-name="mediaconnect-rtp" --template-body file://..\MConly.json
```
1 answer · 0 votes · 24 views · asked 18 days ago

Lambda Handler No Space On Device Error

I have a Lambda function that is throwing a "No space left on device" error. The Lambda function registers a custom resource type handler from within the Lambda Python code:

```
response = cfn.register_type(
    Type='RESOURCE',
    TypeName='AWSQS:MYCUSTOM::Manager',
    SchemaHandlerPackage="s3://xxx/yyy/awsqs-mycustom-manager.zip",
    LoggingConfig={"LogRoleArn": "xxx", "LogGroupName": "awsqs-mycustom-manager-logs"},
    ExecutionRoleArn="xxx"
)
```

The Lambda function is created with the following limits: 4 GB of memory and 4 GB of ephemeral space. However, I was still receiving "no space on device" even though '/tmp/' is specified and that should be plenty of space. Doing additional digging, I added a `df` output inside the code/zip file. The output shows that only 512 MB of space is available in /tmp:

```
Filesystem                                                          1K-blocks     Used Available Use% Mounted on
/mnt/root-rw/opt/amazon/asc/worker/tasks/rtfs/python3.7-amzn-201803  27190048 22513108   3293604  88% /
/dev/vdb                                                              1490800    14096   1460320   1% /dev
/dev/vdd                                                               538424      872    525716   1% /tmp
/dev/root                                                            10190100   552472   9621244   6% /var/rapid
/dev/vdc                                                                37120    37120         0 100% /var/task
```

It's like a new instance was created internally and did not adopt the size from the parent. Forgive me if my language is technically incorrect, as this is the first time busting this out and seeing this type of error. It just has me confused as to what is going on under the covers, and I can find no documentation on how to increase the ephemeral storage within the handler, even though the originating Lambda function in which this is defined has already had its limits increased.
1 answer · 0 votes · 41 views · asked 23 days ago

Lifecycle Configuration Standard --> Standard IA --> Glacier Flexible Retrieval via CloudFormation

We do shared web hosting, and my cPanel servers store backups in S3, each server with its own bucket. cPanel does not have a provision to select the storage class, so everything gets created as Standard. With around 9 TB of backups being maintained, I would really like them to be stored as Standard-IA after the first couple of days, and then transition to Glacier after they have been in IA for 30 days. The logic here is that the backup most likely to be needed is the most recent. Currently we skip the step of transferring to IA and they go straight to Glacier after 30 days.

According to this page, that kind of multi-staged transition should be OK, and it confirms that the class-to-class transitions I want are acceptable: https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-transition-general-considerations.html

The examples on this page show a transition in days of 1, seeming to show that a newly created object stored in Standard can be transitioned immediately: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-s3-bucket-lifecycleconfig.html

My YAML template for CloudFormation has this section in it:

```
- Id: TransitionStorageType
  Status: Enabled
  Transitions:
    - StorageClass: "STANDARD_IA"
      TransitionInDays: 2
    - StorageClass: "GLACIER"
      TransitionInDays: 32
```

When I run the template, all of the buckets update with nice green check marks, then the whole stack rolls back without saying what the issue is. If I turn that into 2 separate rules like this:

```
- Id: TransitionStorageIA
  Status: Enabled
  Transitions:
    - StorageClass: "STANDARD_IA"
      TransitionInDays: 2
- Id: TransitionStorageGlacier
  Status: Enabled
  Transitions:
    - StorageClass: "GLACIER"
      TransitionInDays: 32
```

then each bucket being modified errors with:

`Days' in Transition action must be greater than or equal to 30 for storageClass 'STANDARD_IA'`

But if you look at the rules, the object is in Standard-IA for 30 days, as it doesn't change to Glacier until day 32 and it transitions to Standard-IA at day 2. So that error does not make any sense. What do I need to do to make this work? My monthly bill is in serious need of some trimming. Thank you.
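For what it's worth, the error text reads as though the 30-day minimum applies to the age at which an object may enter STANDARD_IA via a lifecycle rule, not to how long it stays there. Under that reading (an assumption worth checking against the lifecycle constraints page linked above), a rule using the minimum allowed days would look like this sketch, with 30/60 chosen purely for illustration:

```
- Id: TransitionStorageType
  Status: Enabled
  Transitions:
    - StorageClass: "STANDARD_IA"
      TransitionInDays: 30   # lifecycle transitions into Standard-IA require >= 30 days
    - StorageClass: "GLACIER"
      TransitionInDays: 60   # keeps at least 30 days in Standard-IA before Glacier
```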
1 answer · 0 votes · 13 views · asked a month ago

How to ensure using the latest lambda layer version when deploying with CloudFormation and SAM?

Hi, we use CloudFormation and SAM to deploy our Lambda (Node.js) functions. All our Lambda functions have a layer set through `Globals`. When we make breaking changes in the layer code, we get errors during deployment because new Lambda functions are rolled out to production with the old layer, and only after a few seconds *(~40 seconds in our case)* do they start using the new layer.

For example, let's say we add a new class to the layer and import it in the function code. Then we get an error that says `NewClass is not found` for a few seconds during deployment *(this happens because the new function code still uses the old layer, which doesn't have `NewClass`)*.

Is it possible to ensure a new Lambda function is always rolled out with the latest layer version?

Example CloudFormation template.yaml:

```
Globals:
  Function:
    Runtime: nodejs14.x
    Layers:
      - !Ref CoreLayer
Resources:
  CoreLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: core-layer
      ContentUri: packages/coreLayer/dist
      CompatibleRuntimes:
        - nodejs14.x
    Metadata:
      BuildMethod: nodejs14.x
  ExampleFunction:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: example-function
      CodeUri: packages/exampleFunction/dist
```

SAM build: `sam build --base-dir . --template ./template.yaml`

SAM package: `sam package --s3-bucket example-lambda --output-template-file ./cf.yaml`

Example CloudFormation deployment events. As you can see, the new layer (`CoreLayer123abc456`) is created before the Lambda function is updated, so it should be available to the new function code, but for some reason the Lambda is updated and deployed with the old layer version for a few seconds:

| Timestamp | Logical ID | Status | Status reason |
| --- | --- | --- | --- |
| 2022-05-23 16:26:54 | stack-name | UPDATE_COMPLETE | - |
| 2022-05-23 16:26:54 | CoreLayer789def456 | DELETE_SKIPPED | - |
| 2022-05-23 16:26:53 | v3uat-farthing | UPDATE_COMPLETE_CLEANUP_IN_PROGRESS | - |
| 2022-05-23 16:26:44 | ExampleFunction | UPDATE_COMPLETE | - |
| 2022-05-23 16:25:58 | ExampleFunction | UPDATE_IN_PROGRESS | - |
| 2022-05-23 16:25:53 | CoreLayer123abc456 | CREATE_COMPLETE | - |
| 2022-05-23 16:25:53 | CoreLayer123abc456 | CREATE_IN_PROGRESS | Resource creation Initiated |
| 2022-05-23 16:25:50 | CoreLayer123abc456 | CREATE_IN_PROGRESS | - |
| 2022-05-23 16:25:41 | stack-name | UPDATE_IN_PROGRESS | User Initiated |
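One commonly suggested mitigation (a sketch only; whether it fully closes the window depends on how the functions are invoked) is to publish a version on every deployment and shift traffic with an alias, so callers only reach code that was published together with its layer. `AutoPublishAlias` and `DeploymentPreference` are standard SAM properties; the alias name `live` is just an example:

```
Globals:
  Function:
    Runtime: nodejs14.x
    AutoPublishAlias: live        # publish a new version on each deploy, behind an alias
    DeploymentPreference:
      Type: AllAtOnce             # or a gradual strategy such as Canary10Percent5Minutes
    Layers:
      - !Ref CoreLayer
```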
3 answers · 0 votes · 71 views · asked a month ago

ApplicationLoadBalancedFargateService with listener on one port and health check on another fails health check

Hi, I have an ApplicationLoadBalancedFargateService that exposes a service on one port, but the health check runs on another. Unfortunately, the target fails the health check and terminates the task. Here's a snippet of my code:

```
const hostPort = 5701;
const healthCheckPort = 8080;

taskDefinition.addContainer(stackPrefix + 'Container', {
  image: ecs.ContainerImage.fromRegistry('hazelcast/hazelcast:3.12.6'),
  environment : {
    'JAVA_OPTS': `-Dhazelcast.local.publicAddress=localhost:${hostPort} -Dhazelcast.rest.enabled=true`,
    'LOGGING_LEVEL': 'DEBUG',
    'PROMETHEUS_PORT': `${healthCheckPort}`},
  portMappings: [{containerPort : hostPort, hostPort: hostPort}, {containerPort : healthCheckPort, hostPort: healthCheckPort}],
  logging: ecs.LogDriver.awsLogs({streamPrefix: stackPrefix, logRetention: logs.RetentionDays.ONE_DAY}),
});

const loadBalancedFargateService = new ecsPatterns.ApplicationLoadBalancedFargateService(this, stackPrefix + 'Service', {
  cluster,
  publicLoadBalancer : false,
  desiredCount: 1,
  listenerPort: hostPort,
  taskDefinition: taskDefinition,
  securityGroups : [fargateServiceSecurityGroup],
  domainName : env.getPrefixedRoute53(stackName),
  domainZone : env.getDomainZone(),
});

loadBalancedFargateService.targetGroup.configureHealthCheck({
  path: "/metrics",
  port: healthCheckPort.toString(),
  timeout: cdk.Duration.seconds(15),
  interval: cdk.Duration.seconds(30),
  healthyThresholdCount: 2,
  unhealthyThresholdCount: 5,
  healthyHttpCodes: '200-299'
});
```

Any suggestions on how I can get this to work? thanks
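One thing worth ruling out (a sketch under the assumption that the checks are being blocked rather than failing at the application level): because a custom `securityGroups` list is passed in, the service's security group may not allow the load balancer to reach port 8080, so the health check port can be opened explicitly:

```ts
// import * as ec2 from 'aws-cdk-lib/aws-ec2'; // or the matching CDK v1 module

// Allow the ALB to reach the container's health check port (8080 in the question).
loadBalancedFargateService.service.connections.allowFrom(
  loadBalancedFargateService.loadBalancer,
  ec2.Port.tcp(healthCheckPort),
  'ALB health checks on the Prometheus/metrics port'
);
```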
1 answer · 0 votes · 40 views · asked a month ago

Elemental MediaConvert job template for Video on Demand

I launched the fully managed video-on-demand template from here: https://aws.amazon.com/solutions/implementations/video-on-demand-on-aws/?did=sl_card&trk=sl_card. I have a bunch of questions on how to tailor this service to my use case; I will ask separate questions for each.

Firstly, is it possible to use my own GUID as an identifier for the MediaConvert jobs and outputs? The default GUID tagged onto the videos in this workflow is independent of my application server, so it's difficult for the server to track who owns what video in the destination S3 bucket.

Secondly, I would like to compress the video input for cases where the resolution is higher than 1080p. For my service I don't want to process any videos higher than 1080p. Is there a way I can achieve this without adding a Lambda during the ingestion stage to compress it? I know it can be compressed on the client; I am hoping this can be achieved in this workflow, perhaps using MediaConvert?

Thirdly, based on some of the materials I came across about this service, aside from the HLS files MediaConvert generates, it's supposed to generate an MP4 version of my video for cases where a client wants to download the full video as opposed to streaming it. That is not the default behaviour; how do I achieve this?

Lastly, how do I add watermarks to my videos in this workflow?

Forgive me if some of these questions feel like things I could have easily researched and found solutions to. I did do some research, but I failed to get a clear understanding of any of it.
1 answer · 0 votes · 17 views · asked a month ago

Error creating CloudFormation stack resources when a role is specified

I am exploring how to delegate CloudFormation permissions to other users by specifying a role when creating a stack. I notice that some resources like VPC, IGW and EIP can be created, but an error is still raised, and the created resources also cannot be deleted by the stack during rollback or stack deletion.

For example, the following simple template creates a VPC:

```
Resources:
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.3.9.0/24
```

I created a role to specify during stack creation, with a policy allowing the many actions I collected by querying CloudTrail using Athena. The following are already included: `"ec2:CreateVpc","ec2:DeleteVpc","ec2:ModifyVpcAttribute"`

However, the following occurs during creation:

> Resource handler returned message: "You are not authorized to perform this operation. (Service: Ec2, Status Code: 403, Request ID: bf28db5b-461e-48ff-9430-91cc05be77ef)" (RequestToken: bc6c6c87-a616-2e94-65eb-d4e5488a499a, HandlerErrorCode: AccessDenied)

It looks like some callback mechanism is being used? The VPC was actually created. The deletion was also attempted but did not succeed:

> Resource handler returned message: "You are not authorized to perform this operation. (Service: Ec2, Status Code: 403, Request ID: f1e43bf1-eb08-462a-9788-f183db2683ab)" (RequestToken: 80cc5412-ba28-772b-396e-37b12dbf8066, HandlerErrorCode: AccessDenied)

Any hints about this issue? Thanks.
1 answer · 0 votes · 37 views · asked 2 months ago

How can I build a CloudFormation secret out of another secret?

I have an image I deploy to ECS that expects an environment variable called `DATABASE_URL` which contains the username and password as the userinfo part of the URL (e.g. `postgres://myusername:mypassword@mydb.foo.us-east-1.rds.amazonaws.com:5432/mydbname`). I cannot change the image.

Using `DatabaseInstance.Builder.credentials(fromGeneratedSecret("myusername"))`, CDK creates a secret in Secrets Manager for me that has all of this information, but not as a single value:

```json
{
  "username": "myusername",
  "password": "mypassword",
  "engine": "postgres",
  "host": "mydb.foo.us-east-1.rds.amazonaws.com",
  "port": 5432,
  "dbInstanceIdentifier": "live-myproduct-db"
}
```

Somehow I need to synthesise that `DATABASE_URL` environment variable. I don't think I can do it in the ECS Task Definition - as far as I can tell the secret can only reference a single key in a secret.

I thought I might be able to add an extra `url` key to the existing secret using references in CloudFormation - but I can't see how. Something like:

```java
secret.newBuilder()
    .addTemplatedKey(
        "url",
        "postgres://#{username}:#{password}@#{host}:#{port}/#{db}"
    )
    .build()
```

except that I just made that up...

Alternatively I could use CDK to generate a new secret in either Secrets Manager or Systems Manager - but again I want to specify it as a template so that the real secret values don't get materialised in the CloudFormation template.

Any thoughts? I'm hoping I'm just missing some way to use the API to build compound secrets...
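For anyone weighing options here, a heavily hedged sketch of the dynamic-reference approach: CloudFormation's `{{resolve:secretsmanager:...}}` syntax can pull individual JSON keys out of the generated secret at deploy time. Whether it may be composed inside a larger `Fn::Sub` string like this is worth verifying, and the resolved URL would end up as a plain environment variable on the task definition rather than an ECS "secret" (the `/mydbname` suffix and `DbSecret` logical ID are placeholders):

```
Environment:
  - Name: DATABASE_URL
    Value: !Sub
      - "postgres://{{resolve:secretsmanager:${Arn}:SecretString:username}}:{{resolve:secretsmanager:${Arn}:SecretString:password}}@{{resolve:secretsmanager:${Arn}:SecretString:host}}:{{resolve:secretsmanager:${Arn}:SecretString:port}}/mydbname"
      - Arn: !Ref DbSecret   # hypothetical logical ID for the generated secret
```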
3 answers · 0 votes · 17 views · asked 2 months ago

ApplicationLoadBalancedFargateService with load balancer, target groups, targets on non-standard port

I have an ECS service that exposes port 8080. I want to have the load balancer, target groups and target use that port as opposed to port 80. Here is a snippet of my code:

```
const servicePort = 8888;
const metricsPort = 8888;

const taskDefinition = new ecs.FargateTaskDefinition(this, 'TaskDef');
const repository = ecr.Repository.fromRepositoryName(this, 'cloud-config-server', 'cloud-config-server');
taskDefinition.addContainer('Config', {
  image: ecs.ContainerImage.fromEcrRepository(repository),
  portMappings: [{containerPort : servicePort, hostPort: servicePort}],
});

const albFargateService = new ecsPatterns.ApplicationLoadBalancedFargateService(this, 'AlbConfigService', {
  cluster,
  publicLoadBalancer : false,
  taskDefinition: taskDefinition,
  desiredCount: 1,
});

const applicationTargetGroup = new elbv2.ApplicationTargetGroup(this, 'AlbConfigServiceTargetGroup', {
  targetType: elbv2.TargetType.IP,
  protocol: elbv2.ApplicationProtocol.HTTP,
  port: servicePort,
  vpc,
  healthCheck: {path: "/CloudConfigServer/actuator/env/profile", port: String(servicePort)}
});

const addApplicationTargetGroupsProps: elbv2.AddApplicationTargetGroupsProps = {
  targetGroups: [applicationTargetGroup],
};

albFargateService.loadBalancer.addListener('alb-listener', {
  protocol: elbv2.ApplicationProtocol.HTTP,
  port: servicePort,
  defaultTargetGroups: [applicationTargetGroup]
});
```

This does not work. The health check is taking place on port 80 with the default URL of "/" which fails, and the tasks are constantly recycled. A target group on port 8080, with the appropriate health check, is added, but it has no targets. What is the recommended way to achieve load balancing on a port other than 80? thanks
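A hedged sketch of an alternative wiring, reusing the names from the snippet above: the pattern's own `listenerPort` property puts the listener and its generated target group on the service port, and that target group's health check can then be adjusted, which avoids creating a second, detached target group.

```ts
const albFargateService = new ecsPatterns.ApplicationLoadBalancedFargateService(this, 'AlbConfigService', {
  cluster,
  publicLoadBalancer: false,
  taskDefinition,
  desiredCount: 1,
  listenerPort: servicePort,   // listener + generated target group both use 8888
});

// Point the generated target group's health check at the Spring actuator path.
albFargateService.targetGroup.configureHealthCheck({
  path: '/CloudConfigServer/actuator/env/profile',
  port: String(servicePort),
});
```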
1 answer · 0 votes · 52 views · asked 2 months ago

Scheduled Action triggering at time specified in another action

I have a CloudFormation setup with Scheduled Actions to autoscale services based on times. There is one action that scales up to start the service, and another to scale down to turn it off. I also occasionally add an additional action to scale up if a service is needed at a different time on a particular day.

I'm having an issue where my service is being scaled down instead of up when I specify this additional action. Looking at the console logs I get an event that looks like:

```
16:00:00 -0400
Message: Successfully set min capacity to 0 and max capacity to 0
Cause: scheduled action name ScheduleScaling_action_1 was triggered
```

However the relevant part of the CloudFormation Template for the Scheduled Action with the name in the log has a different time, e.g.:

```
{
  "ScalableTargetAction": {
    "MaxCapacity": 0,
    "MinCapacity": 0
  },
  "Schedule": "cron(0 5 ? * 2-5 *)",
  "ScheduledActionName": "ScheduleScaling_action_1"
}
```

What is odd is that the time this action is triggering matches exactly with the Schedule time for another action, e.g.:

```
{
  "ScalableTargetAction": {
    "MaxCapacity": 1,
    "MinCapacity": 1
  },
  "Schedule": "cron(00 20 ? * 2-5 *)",
  "ScheduledActionName": "ScheduleScaling_action_2"
}
```

I am using CDK to generate the CloudFormation template, which doesn't appear to allow me to specify a timezone, so my understanding is that the times here should be UTC. What could cause the scheduled action to trigger at the incorrect time like this?
1 answer · 0 votes · 9 views · asked 2 months ago

EC2 Instance Status Check fails when created by CloudFormation template

I have created a CloudFormation stack using the below template in the **us-east-1** and **ap-south-1** regions:

```
AWSTemplateFormatVersion: "2010-09-09"
Description: Template for node-aws-ec2-github-actions tutorial
Resources:
  InstanceSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Sample Security Group
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0
        - IpProtocol: tcp
          FromPort: 443
          ToPort: 443
          CidrIp: 0.0.0.0/0
        - IpProtocol: tcp
          FromPort: 22
          ToPort: 22
          CidrIp: 0.0.0.0/0
  EC2Instance:
    Type: "AWS::EC2::Instance"
    Properties:
      ImageId: "ami-0d2986f2e8c0f7d01" #Another comment -- This is a Linux AMI
      InstanceType: t2.micro
      KeyName: node-ec2-github-actions-key
      SecurityGroups:
        - Ref: InstanceSecurityGroup
      BlockDeviceMappings:
        - DeviceName: /dev/sda1
          Ebs:
            VolumeSize: 8
            DeleteOnTermination: true
      Tags:
        - Key: Name
          Value: Node-Ec2-Github-Actions
  EIP:
    Type: AWS::EC2::EIP
    Properties:
      InstanceId: !Ref EC2Instance
Outputs:
  InstanceId:
    Description: InstanceId of the newly created EC2 instance
    Value:
      Ref: EC2Instance
  PublicIP:
    Description: Elastic IP
    Value:
      Ref: EIP
```

The stack executes successfully and all the resources are created. But unfortunately, once the EC2 status checks are initialized, the instance status check fails and I am not able to reach the instance using SSH. I have tried creating an instance manually as the same IAM user, and that works perfectly.

These are the policies I have attached to the IAM user.

Managed Policies:

* AmazonEC2FullAccess
* AWSCloudFormationFullAccess

Inline Policy:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "iam:CreateInstanceProfile",
        "iam:DeleteInstanceProfile",
        "iam:GetRole",
        "iam:GetInstanceProfile",
        "iam:DeleteRolePolicy",
        "iam:RemoveRoleFromInstanceProfile",
        "iam:CreateRole",
        "iam:DeleteRole",
        "iam:UpdateRole",
        "iam:PutRolePolicy",
        "iam:AddRoleToInstanceProfile"
      ],
      "Resource": "*"
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:ListAllMyBuckets",
        "s3:CreateBucket",
        "s3:DeleteObject",
        "s3:DeleteBucket"
      ],
      "Resource": "*"
    }
  ]
}
```

Thanks in advance for helping out. Have a good day.
1 answer · 0 votes · 13 views · asked 2 months ago

CloudFormation RDS CreateInstance fails incompatible-parameters

We have been creating RDS MariaDB instances on an almost daily basis using CloudFormation and our scripts for a long time (years). However, over the last week or two, RDS CreateInstance has failed intermittently with the below error. Deleting the stack and retrying usually works, but that's of course not a suitable long-term solution for regular environment creation:

```
2022-04-14 15:26:20 UTC+0100 RdsInstance CREATE_FAILED DB Instance is in state: incompatible-parameters
```

If I view the RDS database in question, under the "Events" listing there are about 5 pages of the same event:

```
April 14, 2022, 3:10:13 PM UTC Your MySql memory setting is inappropriate for the usages
```

On that failure it attempts to roll back, but that also fails because the delete DB instance call fails for the same reason (more or less, DB not in available state). However, the DB does eventually end up being available: after something like 20 minutes or so, the DB (having failed to be deleted) shows as "Available".

We have not changed anything in the parameter group or DB engine. It is running MariaDB 10.3.31. Anyone have any idea what might be causing this or what might have changed recently?

---

*EDIT*: Following on from the answers provided so far, the thing I'm most interested in is the intermittent nature of the issue, and the fact that it's just started happening, having run successfully for a long time previously. If there were an incorrect parameter for the DB type, I'd expect it to fail every time; the intermittent nature makes me think it's more likely a race condition or timing issue. I have reviewed the parameter group, and only one value has been changed from the default params for MariaDB 10.3: `max_connections` is now set to a fixed 1000 (rather than the default, which calculates it based on the size of the instance). This hasn't changed for a long time, and I can't see that this is causing the issue.
2 answers · 0 votes · 52 views · asked 2 months ago