Serverless

Serverless is a way to describe the services, practices, and strategies that enable you to build more agile applications so you can innovate and respond to change faster. With serverless computing, infrastructure management tasks like capacity provisioning and patching are handled by AWS, so you can focus only on writing code that serves your customers.

Recent questions

  • I'm unable to use AWS Lambda and can't find a reasonable explanation:
    - The dashboard shows an error box with an empty message.
    - The create form shows a spinner for 10 seconds, then stops and nothing happens.
    - The API returns `{"message": null}`.
    - The user has sufficient permissions to use Lambda.
    ![AWS Lambda dashboard error](/media/postImages/original/IM_s3FBKPBRRm6boEVBcldQA)
    I'm just asking whether anyone has faced something similar, since support won't answer my ticket. Thanks
    0
    answers
    0
    votes
    17
    views
    asked 3 hours ago
  • Hi, I am trying to set up Lambda functions with API Gateway as the trigger. I'll be making external API calls from the functions, and my IP needs to be allowlisted with the provider, so it should be static. I also need to give the provider the hostname the API calls will originate from, so the API Gateway will use a custom domain. The domain is registered on GoDaddy, and for this API Gateway I want to use a subdomain. What I have done so far:
    1. Created a VPC endpoint with subnets in all the Availability Zones in the Region.
    2. Created a private REST API and assigned the above VPC endpoint to it.
    3. Created the same number of Elastic IPs as Availability Zones.
    4. Requested a new certificate from ACM for the subdomain, put the CNAME records on GoDaddy, and got the certificate issued.
    5. Created a target group with IP as the target type, TLS as the protocol, and HTTPS as the health check protocol, and registered the default subnet's IPs for each Availability Zone. I used 403 as the expected health check status, since that is the status returned when the API is invoked via the NLB's DNS name for health checks. The health check comes out positive.
    6. Created an internet-facing, IPv4 Network Load Balancer with a TLS listener, and assigned it the EIPs and the certificate created above.
    At this point, I can successfully invoke the private API Gateway using the NLB's domain. However, I get a security warning because the domain the certificate was issued for is not the one being used to invoke the API. I created a custom domain for the API and assigned the same certificate to it as well, but I still get the same warning on the client side. And if I try to invoke the API with the custom domain name, I get no response at all because the name does not resolve. If my domain were registered in Amazon Route 53, I could create an alias record pointing to the NLB.
    Can I still do this with an external registrar, and will it even help? Can somebody please guide me on what needs to be done to get this working? Really appreciate it, and thanks in advance. PS: sorry for the long detail if it's unnecessary.
    0
    answers
    0
    votes
    9
    views
    asked 4 hours ago
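A note on the registrar question above: alias records are Route 53-specific, but a plain CNAME at an external registrar can point a subdomain (not a zone apex) at the NLB's DNS name, and clients must then call the API via that subdomain so the hostname matches the certificate. A sketch with placeholder hostnames:

```
; at the registrar (e.g. GoDaddy), for the subdomain on the ACM certificate
api.example.com.   3600   IN   CNAME   my-nlb-1234567890abcdef.elb.us-east-1.amazonaws.com.
```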
  • I created the token signature with the command below:
    ```
    echo -n tokenKeyValue | openssl dgst -sha256 -sign private-key.pem | openssl base64
    ```
    Now, testing the authorizer with test-invoke-authorizer in the AWS CLI:
    ```
    aws iot test-invoke-authorizer \
        --authorizer-name my-new-authorizer \
        --token tokenKeyValue \
        --token-signature {created signature}
    ```
    I am getting an error: `unknown options: tokenKeyValue`. Please guide.
    0
    answers
    0
    votes
    9
    views
    asked 5 hours ago
  • Hi folks. We have developed an IoT application, and all is going great with receiving messages from our IoT device to trigger Lambdas. The next step is to publish messages from a Lambda back to the device using JavaScript. I can successfully publish to the device using the Test option in the web console, but I can't find sample Lambda code in JavaScript that shows how to connect to a specific Thing and then publish to it. I would also need to understand the security required for the Lambda's connection to the Thing (i.e., does the Lambda have to load the same certificates used by the IoT device?). I appreciate any help. Thanks. Grant.
    1
    answers
    0
    votes
    12
    views
    Grant
    asked 16 hours ago
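A minimal sketch for the question above, using boto3 (the JavaScript SDK's `iot-data` client is analogous): a Lambda publishes through the IoT data plane using its IAM execution role (which needs `iot:Publish` on the topic), so it does not load the device's certificates. The topic name and payload are hypothetical:

```python
import json

def build_publish_args(topic, payload):
    """Pure helper: arguments for the iot-data client's publish call."""
    return {"topic": topic, "qos": 1, "payload": json.dumps(payload)}

def publish_from_lambda(topic, payload):
    """Publish to a topic the device subscribes to. boto3 is imported lazily
    so the helper above stays usable without the AWS SDK installed."""
    import boto3
    client = boto3.client("iot-data")  # signs with the execution role's IAM credentials
    return client.publish(**build_publish_args(topic, payload))

# Hypothetical command topic for a specific thing:
args = build_publish_args("devices/my-thing/commands", {"led": "on"})
```

The device then receives the message on whatever topic it subscribes to; no certificate sharing between the Lambda and the device is involved.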
  • When trying to create a WAF web ACL, I get the following error: "WAFUnavailableEntityException: AWS WAF couldn't retrieve the resource that you requested. Retry your request." [This page in the AWS Documentation](https://docs.aws.amazon.com/waf/latest/APIReference/API_CreateWebACL.html) gives the following explanation: "WAFUnavailableEntityException AWS WAF couldn’t retrieve a resource that you specified for this operation. If you've just created a resource that you're using in this operation, you might just need to wait a few minutes. It can take from a few seconds to a number of minutes for changes to propagate. Verify the resources that you are specifying in your request parameters and then retry the operation. HTTP Status Code: 400" However, I waited for over an hour after creating my resources (first, an API, then an ALB), and I am still getting the same error when I try to create a web ACL for those resources. Not sure what the issue is.
    1
    answers
    0
    votes
    7
    views
    asked 19 hours ago
  • I am getting `exec /bin/sh: exec format error` while trying to deploy a basic Hello World container as an ECS service using Fargate. These are the steps I followed:
    1. Built the Docker image using Docker Desktop for Mac.
    2. Created the ECR repository using the AWS console.
    3. Pushed the Docker image to ECR.
    4. Created a task definition in ECS with Fargate as the launch type.
    5. Tried to deploy the ECS task as a service.
    Here is the Dockerfile I used to build the image:
    ```
    FROM ubuntu:18.04

    # Install dependencies
    RUN apt-get update && \
        apt-get -y install apache2

    # Install apache and write hello world message
    RUN echo 'Hello World!' > /var/www/html/index.html

    # Configure apache
    RUN echo '. /etc/apache2/envvars' > /root/run_apache.sh && \
        echo 'mkdir -p /var/run/apache2' >> /root/run_apache.sh && \
        echo 'mkdir -p /var/lock/apache2' >> /root/run_apache.sh && \
        echo '/usr/sbin/apache2 -D FOREGROUND' >> /root/run_apache.sh && \
        chmod 755 /root/run_apache.sh

    EXPOSE 80
    ```
    While troubleshooting the error I tried putting `#!/bin/sh` as the first line of the Dockerfile, but that did not work. I also tried switching the image from Apache to NGINX with a different Dockerfile:
    ```
    FROM nginx
    RUN rm /etc/nginx/conf.d/*
    COPY hello.conf /etc/nginx/conf.d/
    COPY index.html /usr/share/nginx/html/
    ```
    With this image I get `exec /docker-entrypoint.sh: exec format error`.
    0
    answers
    0
    votes
    6
    views
    asked 20 hours ago
  • There seems to be conflicting documentation relating to OpenSearch's bulk API, specifically the "create" and "index" actions.
    #### Error
    If I run a simple bulk request with the following body:
    ```
    {"index": {"_id": "63a3b4074244c5b760010f1f", "_index": "index-client"}}
    {"message": "My log message"}
    ```
    I get the following error in the response:
    ```
    {
      'items': [
        {
          'index': {
            'status': 400,
            '_id': '63a3b4074244c5b760010f1f',
            'error': {
              'reason': 'Document ID is not supported in create/index operation request',
              'type': 'illegal_argument_exception'
            },
            '_index': 'index-client'
          }
        }
      ],
      'errors': True,
      'took': 0
    }
    ```
    #### Question
    Everything I can find (see docs below for reference) suggests that the bulk API for the supported 2.0.x version of OpenSearch looks like the following:
    ```
    POST _bulk
    { "delete": { "_index": "movies", "_id": "tt2229499" } }
    { "index": { "_index": "movies", "_id": "tt1979320" } }
    { "title": "Rush", "year": 2013 }
    { "create": { "_index": "movies", "_id": "tt1392214" } }
    { "title": "Prisoners", "year": 2013 }
    { "update": { "_index": "movies", "_id": "tt0816711" } }
    { "doc" : { "title": "World War Z" } }
    ```
    with Newline Delimited JSON (see https://www.ndjson.org) separating actions and documents. If we focus on the "create" and "index" actions, these allow you to specify an "_id" field in the action. For "index" actions, this will "create a document if it doesn't yet exist and replace the document if it already exists."
    **So, how is the "index" action supposed to work if the action can't take an "_id" to correlate it with an existing document?** You might think that maybe it pulls the "_id" field from the document itself instead of from the action, but this leads to the following error:
    ```
    Field [_id] is a metadata field and cannot be added inside a document. Use the index API request parameters.
    ```
    Googling "Document ID is not supported in create/index operation request" reveals nothing. I'm at a loss for how to do bulk index operations for AWS OpenSearch Serverless.
    ### Documentation
    #### OpenSearch docs (version 2.0, because "Serverless collections currently run OpenSearch version 2.0.x.")
    * Bulk API: https://opensearch.org/docs/2.0/api-reference/document-apis/bulk/
    * Intro to indexing: https://opensearch.org/docs/2.0/opensearch/index-data/#introduction-to-indexing
    * Python client's API: https://opensearch-project.github.io/opensearch-py/api-ref/clients/opensearch_client.html#opensearchpy.OpenSearch.bulk
    #### AWS docs
    * Mention of bulk API: https://docs.aws.amazon.com/opensearch-service/latest/developerguide/gsgupload-data.html#gsgmultiple-document
    * Bulk API quick start: https://docs.aws.amazon.com/opensearch-service/latest/developerguide/quick-start.html#quick-start-bulk
    #### Elastic docs
    * Bulk API: https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html#bulk-api-request-body
    0
    answers
    0
    votes
    13
    views
    Kent
    asked 21 hours ago
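Not an authoritative answer to the question above, but the error text matches a known OpenSearch Serverless restriction: Serverless collections (time-series collections in particular) reject client-supplied `_id` values in bulk index/create actions, unlike the managed OpenSearch Service. Under that assumption, a sketch of building a bulk body that lets the collection assign document IDs (index name taken from the question):

```python
import json

def build_bulk_body(index, docs):
    """Build an NDJSON bulk body with no _id in the action lines,
    so the collection generates document IDs itself."""
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index}}))  # action line: no _id
        lines.append(json.dumps(doc))                            # document source
    return "\n".join(lines) + "\n"  # bulk bodies must end with a newline

body = build_bulk_body("index-client", [{"message": "My log message"}])
```

The trade-off is that idempotent upserts by external ID aren't available this way; deduplication has to happen before indexing or via a query on a regular field.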
  • I have a REST API declared in CloudFormation using AWS::Serverless::Api, with a default Cognito authorizer whose UserPoolArn points to UserPool1. Then I created a custom resource with RestApiId and UserPool2ARN properties, so it could find the API Gateway's authorizers and add the second pool to the Cognito authorizer. It seems to work: the API Gateway Authorizers page in the AWS console shows the Cognito authorizer with TWO different pools. But the problem is that when I call InitiateAuth for users from each pool to get an ID token, only the ID token from the first-listed pool goes through; the ID token from the other pool gets an Unauthorized response.
    0
    answers
    0
    votes
    3
    views
    asked a day ago
  • The documentation says, under "To edit a scheduled task (Amazon ECS console)": Open the Amazon ECS console at https://console.aws.amazon.com/ecs/. Choose the cluster in which to edit your scheduled task. On the Cluster: cluster-name page, choose Scheduled Tasks. Select the box to the left of the schedule rule to edit, and choose Edit. Edit the fields to update and choose Update. But I cannot see the "Scheduled Tasks" option. It was there before, but ever since the new interface I cannot see it. Is there any way I can edit the scheduled task? I tried the rules in EventBridge, but it does not let me edit the containerOverrides.
    1
    answers
    0
    votes
    5
    views
    asked a day ago
  • I am trying to send an HTTP POST request from Postman to AWS IoT Core, but I'm getting a `{message: Forbidden}` error. So I am trying to create a Lambda authorizer. While creating it, I get the following message: "API Gateway needs your permission to invoke your Lambda function:" and I am not able to proceed further. Kindly help.
    1
    answers
    0
    votes
    17
    views
    asked a day ago
  • Hi, I have been banging my head trying to get this working and cannot figure it out. I have an ECS Fargate cluster in 2 private subnets, plus 2 public subnets with NAT gateways (needed for the tasks running in Fargate). Currently S3 traffic goes through the NAT gateways, and I would like to implement an S3 endpoint as a best practice. I have created CloudFormation scripts to create the endpoint and its associated security group. All resources are created and appear to be working. However, I can see from the logs that S3 traffic is still going through the NAT gateways. Is there something basic that I have missed? Is there a way to force traffic from the tasks to the S3 endpoint? The Fargate task security group has the following egress:
    ```
    SecurityGroupEgress:
      - IpProtocol: "-1"
        CidrIp: 0.0.0.0/0
    ```
    Here is the script that creates the endpoint and security group:
    ```
    endpointS3SecurityGroup:
      Type: AWS::EC2::SecurityGroup
      Properties:
        GroupDescription: "Security group for S3 endpoint"
        GroupName: "S3-endpoint-sg"
        Tags:
          - Key: "Name"
            Value: "S3-endpoint-sg"
        VpcId: !Ref vpc
        SecurityGroupIngress:
          - IpProtocol: "tcp"
            FromPort: 443
            ToPort: 443
            SourceSecurityGroupId: !Ref fargateContainerSecurityGroup

    # S3 endpoint
    endpointS3:
      Type: AWS::EC2::VPCEndpoint
      Properties:
        PolicyDocument:
          Version: 2012-10-17
          Statement:
            - Effect: Allow
              Principal: '*'
              Action: 's3:*'
              Resource: '*'
        SubnetIds:
          - !Ref privateSubnet1
          - !Ref privateSubnet2
        VpcEndpointType: Interface
        SecurityGroupIds:
          - !Ref endpointS3SecurityGroup
        ServiceName: !Sub "com.amazonaws.${AWS::Region}.s3"
        VpcId: !Ref vpc
    ```
    Thanks in advance. Regards, Don.
    2
    answers
    0
    votes
    9
    views
    Don
    asked a day ago
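A hedged note on the question above: an *Interface* endpoint only receives traffic that resolves to its private IPs, so unless private DNS applies, the default `s3.<region>.amazonaws.com` name keeps resolving to public IPs and the NAT route wins. A *Gateway* endpoint sidesteps this by installing prefix-list routes in the subnets' route tables. A sketch of the Gateway variant via boto3 (all IDs are placeholders):

```python
def gateway_endpoint_params(vpc_id, region, route_table_ids):
    """Parameters for EC2.create_vpc_endpoint: a Gateway endpoint attaches to
    route tables; no subnets or security groups are involved."""
    return {
        "VpcEndpointType": "Gateway",
        "VpcId": vpc_id,
        "ServiceName": f"com.amazonaws.{region}.s3",
        "RouteTableIds": route_table_ids,
    }

def create_s3_gateway_endpoint(vpc_id, region, route_table_ids):
    import boto3  # lazy import: the pure helper above needs no AWS SDK
    ec2 = boto3.client("ec2", region_name=region)
    return ec2.create_vpc_endpoint(**gateway_endpoint_params(vpc_id, region, route_table_ids))

# Route tables of the two private subnets (hypothetical IDs):
params = gateway_endpoint_params("vpc-0abc", "us-east-1", ["rtb-0priv1", "rtb-0priv2"])
```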
  • I've added an HTTP API route integration that sends a message to an SQS queue. I would like to map the response to something other than XML in the API response. If the only option is to map to a response header, that may work, but the only way to select a value from the SendMessage response is `$response.body.<json_path>`, which will not work with XML. Is there any way to have this integration (SQS-SendMessage) not return XML? If not, is there any way to map an XML value to a response header or body (without using a Lambda between the endpoint and the queue)?
    1
    answers
    0
    votes
    20
    views
    asked a day ago
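For the question above: as far as the mapping parameters go, HTTP APIs don't transform the SQS XML for you, but if a consumer does end up with the raw `SendMessage` response, pulling a field out of it takes only a few lines. A sketch with Python's standard library (the sample response body is illustrative):

```python
import xml.etree.ElementTree as ET

# Shape of an SQS SendMessage response; the namespace matters for ElementTree.
SAMPLE = """<SendMessageResponse xmlns="http://queue.amazonaws.com/doc/2012-11-05/">
  <SendMessageResult>
    <MessageId>5fea7756-0ea4-451a-a703-a558b933e274</MessageId>
    <MD5OfMessageBody>fafb00f5732ab283681e124bf8747ed1</MD5OfMessageBody>
  </SendMessageResult>
  <ResponseMetadata><RequestId>27daac76-34dd</RequestId></ResponseMetadata>
</SendMessageResponse>"""

NS = {"sqs": "http://queue.amazonaws.com/doc/2012-11-05/"}

def message_id(xml_text):
    """Extract the MessageId from a SendMessage XML response."""
    root = ET.fromstring(xml_text)
    return root.findtext("sqs:SendMessageResult/sqs:MessageId", namespaces=NS)

mid = message_id(SAMPLE)
```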
  • I have an Aurora PostgreSQL Serverless v2 cluster with 2 instances, one writer and one reader, and would like them to scale independently. According to the [documentation](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2-administration.html#aurora-serverless-v2-choosing-promotion-tier:~:text=Aurora%20Serverless%20v2%20reader%20DB%20instances%20in%20tiers%202%E2%80%9315%20don%27t%20have%20the%20same%20constraint%20on%20their%20minimum%20capacity.%20When%20they%20are%20idle%2C%20they%20can%20scale%20down%20to%20the%20minimum%20Aurora%20capacity%20unit%20(ACU)%20value%20specified%20in%20the%20cluster%27s%20capacity%20range.), if the reader instance has a failover priority other than 0 or 1, the instances SHOULD scale independently. But no matter what I do, they always scale synchronously. I have a workload that runs twice a day and demands the higher ACU count, while the reader instance has very low usage, so I would like them to scale independently to save on costs. In my use case it is not a problem if the reader instance takes longer to scale up and take over in case of failure. Thanks
    2
    answers
    0
    votes
    9
    views
    asked a day ago
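A sketch for the question above, assuming the failover priority is the coupling factor: per the linked page, only readers in priority tiers 0-1 are sized with the writer, so explicitly moving the reader to tier 2 or higher (and verifying it stays there) is the usual lever. Via boto3, with a placeholder instance identifier:

```python
def promotion_tier_params(instance_id, tier):
    """Parameters for RDS.modify_db_instance: readers in tiers 2-15 can scale
    independently of the writer; tiers 0-1 stay sized with it."""
    return {
        "DBInstanceIdentifier": instance_id,
        "PromotionTier": tier,
        "ApplyImmediately": True,
    }

def set_reader_tier(instance_id, tier=2):
    import boto3  # lazy import so the helper above runs without the SDK
    rds = boto3.client("rds")
    return rds.modify_db_instance(**promotion_tier_params(instance_id, tier))

params = promotion_tier_params("my-aurora-reader", 2)
```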
  • I have done the following:
    1. Added invoke-function permission with the AddPermission API, using the following command:
    ```
    aws lambda add-permission --function-name FunctionName --principal iot.amazonaws.com --source-arn AuthorizerARn --statement-id Id-123 --action "lambda:InvokeFunction"
    ```
    2. Verified the authorizer response with the command:
    ```
    aws iot test-invoke-authorizer --authorizer-name NAME_OF_AUTHORIZER --token TOKEN_VALUE
    ```
    The AWS CLI gives the following error:
    ```
    aws: error: argument operation: Invalid choice, valid choices are:
    ```
    And Postman is still giving {message Forbidden} :( Note: TOKEN_VALUE is up to date.
    1
    answers
    0
    votes
    11
    views
    asked 2 days ago
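On the question above: `aws: error: argument operation: Invalid choice` typically means the installed AWS CLI predates the subcommand (or a broken line continuation swallowed it), so upgrading the CLI is the first thing to try. As a sketch, the same permission grant via boto3, with placeholder names and ARNs:

```python
def add_permission_params(function_name, authorizer_arn, statement_id="Id-123"):
    """Parameters for Lambda.add_permission letting AWS IoT invoke the authorizer."""
    return {
        "FunctionName": function_name,
        "StatementId": statement_id,
        "Action": "lambda:InvokeFunction",
        "Principal": "iot.amazonaws.com",
        "SourceArn": authorizer_arn,
    }

def grant_iot_invoke(function_name, authorizer_arn):
    import boto3  # lazy import: the helper above runs without the SDK
    return boto3.client("lambda").add_permission(
        **add_permission_params(function_name, authorizer_arn))

params = add_permission_params(
    "MyAuthorizerFn", "arn:aws:iot:us-east-1:123456789012:authorizer/my-authorizer")
```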
  • Hi, I have a Step Functions Express state machine for which I start executions with the AWS SDK for PHP (the StartExecution API). My code runs on an EC2 instance (a Docker container on a t3.micro) in a load-balanced Beanstalk application. The total time for the API call to start an execution (everything included) is between 155 ms and 500 ms, averaging around 200 ms. This is quite high and is a problem for us. My first question is whether this is unusually high or normal. I tried starting the same workflow through API Gateway and saw roughly the same response times (or maybe slightly lower). I also tried the PutItem API on a DynamoDB table and saw an average of around 200 ms. Am I correct in assuming that these numbers should be lower? If so, I am thinking this may be caused by the network path from my EC2 instance to the AWS API. My Beanstalk application is not using a VPC (though the EC2 instance is in the default VPC). Perhaps things could be improved by using a VPC and PrivateLink (a VPC interface endpoint)? https://docs.aws.amazon.com/step-functions/latest/dg/vpc-endpoints.html So:
    1. Is an average of 200 ms unusually high, or is this to be expected?
    2. If it is high, should I expect VPC/PrivateLink to improve it?
    3. Roughly which response times (everything included) should I expect?
    Thanks a lot!
    0
    answers
    0
    votes
    15
    views
    thdev
    asked 2 days ago
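One way to approach the question above is to measure the call in isolation and compare against the service-side metrics. A minimal timing helper (the Step Functions call is a placeholder and needs real credentials and a real state machine ARN to run):

```python
import time

def time_call_ms(fn, *args, **kwargs):
    """Return (result, elapsed milliseconds) for a single call."""
    t0 = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, (time.perf_counter() - t0) * 1000.0

def start_execution(state_machine_arn, payload="{}"):
    import boto3  # lazy import; only needed when actually calling AWS
    sfn = boto3.client("stepfunctions")
    return sfn.start_execution(stateMachineArn=state_machine_arn, input=payload)

# Against a real state machine (hypothetical ARN):
# resp, ms = time_call_ms(start_execution, "arn:aws:states:eu-west-1:123456789012:stateMachine:my-sm")

# Local demonstration of the helper on a trivial call:
result, ms = time_call_ms(lambda: sum(range(10)))
```

Comparing the client-side elapsed time against the request latency AWS reports helps attribute the overhead to the network path versus the service itself.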
  • I came across these pricing links, and we want to prototype based on OpenSearch Serverless: https://aws.amazon.com/blogs/big-data/amazon-opensearch-serverless-is-now-generally-available/ But I don't understand the pricing strategy for OpenSearch Serverless. Is there a sample calculation? Do we know what very minimal pricing for a month might be? I did a sample calculation based on https://aws.amazon.com/opensearch-service/pricing/:
    - OpenSearch Compute Unit (OCU) - Indexing: $0.24 per OCU per hour
    - OpenSearch Compute Unit (OCU) - Search and Query: $0.24 per OCU per hour
    - Managed Storage: $0.024 per GB per month
    Two compute units are needed for each of indexing and search/query: active and standby for indexing, and 2 replicas for search and query, so 4 compute units in total. 0.24 * 24 (hours) * 30 (days) * 4 (compute units) = $691.20. This cost seems very high for a minimal configuration. Can someone provide some clarity on this pricing model?
    0
    answers
    1
    votes
    16
    views
    asked 2 days ago
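The arithmetic in the question above checks out against the quoted rates; a worked version of the always-on estimate (rates as quoted in the question; actual pricing varies by region and over time, and storage is extra):

```python
OCU_PER_HOUR = 0.24           # $ per OCU-hour (same rate quoted for indexing and search)
STORAGE_PER_GB_MONTH = 0.024  # $ per GB-month of managed storage

def monthly_compute(ocus, hours_per_day=24, days=30):
    """Cost of running a fixed number of OCUs around the clock for a month."""
    return OCU_PER_HOUR * hours_per_day * days * ocus

# Minimum footprint described in the question:
# 2 OCUs for indexing (active + standby) + 2 OCUs for search (two replicas)
cost = monthly_compute(4)  # -> 691.2
```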
  • We have built a tier-1 service and want to ensure 100% availability during deployments. Our service needs 15 tasks to serve 850 TPS of traffic. We are looking for the right deployment configuration:
    1. The desired count is currently 15. To ensure the service is always available I set minimumHealthyPercent to 100%, but during deployment I saw a spike in unhealthy hosts.
    2. What should minimumHealthyPercent be?
    3. What should maximumPercent be?
    4. Should we modify the health check associated with the target group?
    1
    answers
    0
    votes
    23
    views
    asked 2 days ago
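As a sketch of how the two settings in the question above interact: with desiredCount 15, minimumHealthyPercent 100, and maximumPercent 200, ECS must keep at least 15 tasks healthy and may run up to 30 at once during a rolling deployment, so it starts new tasks before stopping old ones (spare cluster and IP capacity for the extra tasks is assumed):

```python
import math

def rolling_deploy_bounds(desired, min_healthy_pct, max_pct):
    """Task-count floor/ceiling ECS enforces during a rolling deployment.
    The lower bound rounds up, the upper bound rounds down."""
    lower = math.ceil(desired * min_healthy_pct / 100)   # tasks that must stay healthy
    upper = math.floor(desired * max_pct / 100)          # tasks allowed at once
    return lower, upper

lower, upper = rolling_deploy_bounds(15, 100, 200)  # -> (15, 30)
```

Note that maximumPercent 100 with minimumHealthyPercent 100 would deadlock a deployment (no room to start replacements), which is one reason 100/200 is a common pairing for availability-sensitive services.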
  • I would like to route API Gateway invocations based on source IP address. E.g., if the source IP is 10.x.x.x then invoke function A; if the source IP is 11.y.y.y then invoke function B. This is similar to what Route 53 supports for IP-based routing, but we don't have access to Route 53. Thank you in advance, Lucian
    2
    answers
    0
    votes
    13
    views
    asked 2 days ago
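API Gateway has no native source-IP routing, but for the question above a thin dispatcher Lambda can branch on the caller address that proxy integration delivers in the event (`requestContext.identity.sourceIp` for REST APIs, `requestContext.http.sourceIp` for HTTP APIs). A sketch with hypothetical networks and function names:

```python
import ipaddress

ROUTES = {  # hypothetical mapping of caller networks to backend functions
    ipaddress.ip_network("10.0.0.0/8"): "function-A",
    ipaddress.ip_network("11.0.0.0/8"): "function-B",
}

def pick_backend(source_ip, routes=ROUTES, default="function-default"):
    """Return the backend function name for a caller address."""
    addr = ipaddress.ip_address(source_ip)
    for network, fn in routes.items():
        if addr in network:
            return fn
    return default

def handler(event, context):
    # REST API proxy events carry the caller address here:
    ip = event["requestContext"]["identity"]["sourceIp"]
    backend = pick_backend(ip)
    # ...invoke `backend` with the Lambda Invoke API and relay its response...
    return {"statusCode": 200, "body": backend}

backend = pick_backend("10.1.2.3")
```

The extra hop adds a little latency; the alternative of separate routes/stages fronted by something IP-aware is outside API Gateway itself.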
