
Questions tagged with Serverless



RequestParameters for Api Event in Serverless::Function in JSON - how does it work?

I'm trying to add some query string parameters for a Lambda function, using a SAM template written in JSON. All the examples I can find are in YAML. Can anyone point out where I'm going wrong? Here's the snippet of the definition:

```json
"AreaGet": {
  "Type": "AWS::Serverless::Function",
  "Properties": {
    "Handler": "SpeciesRecordLambda::SpeciesRecordLambda.Functions::AreaGet",
    "Runtime": "dotnet6",
    "CodeUri": "",
    "MemorySize": 256,
    "Timeout": 30,
    "Role": null,
    "Policies": [ "AWSLambdaBasicExecutionRole" ],
    "Events": {
      "AreaGet": {
        "Type": "Api",
        "Properties": {
          "Path": "/",
          "Method": "GET",
          "RequestParameters": [
            "method.request.querystring.latlonl": { "Required": "true" },
            "method.request.querystring.latlonr": { "Required": "true" }
          ]
        }
      }
    }
  }
},
```

and here's the error message I get:

> Failed to create CloudFormation change set: Transform AWS::Serverless-2016-10-31 failed with: Invalid Serverless Application Specification document. Number of errors found: 1. Resource with id [AreaGet] is invalid. Event with id [AreaGet] is invalid. Invalid value for 'RequestParameters' property. Keys must be in the format 'method.request.[querystring|path|header].{value}', e.g. 'method.request.header.Authorization'.

Sorry, I know this is a bit of a beginner's question, but I'm a bit lost as to what to do, as I can't find any information about this using JSON. Maybe you can't do it using JSON? Thanks, Andy.
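A hedged reading of the error: the `RequestParameters` array above contains bare `key: value` pairs, which is not valid JSON. The YAML examples use a list of single-key maps, and the JSON equivalent of that wraps each parameter in its own object. A minimal sketch of that structure (an assumption based on the YAML examples, not a verified fix):

```json
"RequestParameters": [
  { "method.request.querystring.latlonl": { "Required": true } },
  { "method.request.querystring.latlonr": { "Required": true } }
]
```

Note the booleans: the SAM examples use `Required: true` rather than the string `"true"`.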
1 answer · 0 votes · 30 views · asked 8 days ago

Using $defs in API Gateway Models

I am working on an API Gateway API using the Serverless Framework. The project contains a JSON schema which is used to create a model in API Gateway. Recently, I started to use the `$defs` element in the schema (https://json-schema.org/understanding-json-schema/structuring.html#defs), which is a way to re-use definitions within the same schema (pasting my schema below). However, now my deployments are failing:

> Error:
> CREATE_FAILED: ApiGatewayMethodV1PreviewsPostApplicationJsonModel (AWS::ApiGateway::Model)
> Resource handler returned message: "Invalid model specified: Validation Result: warnings : [], errors : [Invalid model schema specified. Unsupported keyword(s): ["$defs"], Model reference must be in canonical form, Model reference must be in canonical form] (Service: ApiGateway, Status Code: 400, Request ID: 7048dc90-7bb4-4259-bed8-50d7a93963d9, Extended Request ID: null)"

This probably means that `$defs` is not supported in JSON Schema draft 4? Is there any other way to avoid duplication in the schema file? Here is my schema (TypeScript, but you get the idea):

```typescript
export const inputSchema = {
  type: 'object',
  properties: {
    body: {
      type: 'object',
      oneOf: [
        {
          properties: {
            input: { type: 'string' },
            options: { "$ref": "#/$defs/options" },
          },
          required: ['input'],
        },
        {
          properties: {
            data: { type: 'string' },
            options: { "$ref": "#/$defs/options" },
          },
          required: ['data'],
        },
      ],
    },
  },
  $defs: {
    options: {
      type: 'object',
      properties: {
        camera: { type: 'string' },
        auto_center: { type: 'boolean' },
        view_all: { type: 'boolean' },
      },
    },
  },
};
```
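API Gateway validates model schemas against JSON Schema draft 4, which predates `$defs`; draft 4's keyword for the same reuse pattern is `definitions`. A sketch of the schema rewritten in draft-4 style (with the assumption, untested here, that API Gateway accepts internal `#/definitions` references):

```typescript
// Assumption: `definitions` + "#/definitions/..." is the draft-4 spelling of $defs.
export const inputSchema = {
  type: 'object',
  properties: {
    body: {
      type: 'object',
      oneOf: [
        {
          properties: {
            input: { type: 'string' },
            options: { "$ref": "#/definitions/options" },
          },
          required: ['input'],
        },
        {
          properties: {
            data: { type: 'string' },
            options: { "$ref": "#/definitions/options" },
          },
          required: ['data'],
        },
      ],
    },
  },
  definitions: {
    options: {
      type: 'object',
      properties: {
        camera: { type: 'string' },
        auto_center: { type: 'boolean' },
        view_all: { type: 'boolean' },
      },
    },
  },
};
```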
0 answers · 0 votes · 14 views · asked 15 days ago

Best practices for lambda layer dependencies

Hi all,

We recently started using the OpenTelemetry Lambda layer for Python (https://aws-otel.github.io/docs/getting-started/lambda/lambda-python) in our serverless applications. We've encountered an issue with dependencies in a few of our projects, where the version of a particular dependency required by the Lambda layer conflicts with the version installed in the Lambda function itself. For example, the Lambda layer requires protobuf>=3.15.0, whereas our application was using 3.13.0, causing the following error in the logs:

```
2022-06-01T15:59:35.215-04:00 Configuration of configurator failed
2022-06-01T15:59:35.215-04:00 Traceback (most recent call last):
2022-06-01T15:59:35.215-04:00 File "/opt/python/opentelemetry/instrumentation/auto_instrumentation/sitecustomize.py", line 105, in _load_configurators
2022-06-01T15:59:35.215-04:00 entry_point.load()().configure(auto_instrumentation_version=__version__) # type: ignore
2022-06-01T15:59:35.215-04:00 File "/var/task/opentelemetry/sdk/_configuration/__init__.py", line 215, in configure
2022-06-01T15:59:35.215-04:00 self._configure(**kwargs)
2022-06-01T15:59:35.215-04:00 File "/var/task/opentelemetry/sdk/_configuration/__init__.py", line 231, in _configure
2022-06-01T15:59:35.215-04:00 _initialize_components(kwargs.get("auto_instrumentation_version"))
2022-06-01T15:59:35.215-04:00 File "/var/task/opentelemetry/sdk/_configuration/__init__.py", line 181, in _initialize_components
2022-06-01T15:59:35.215-04:00 trace_exporters, log_exporters = _import_exporters(
2022-06-01T15:59:35.215-04:00 File "/var/task/opentelemetry/sdk/_configuration/__init__.py", line 149, in _import_exporters
2022-06-01T15:59:35.215-04:00 for (exporter_name, exporter_impl,) in _import_config_components(
2022-06-01T15:59:35.215-04:00 File "/var/task/opentelemetry/sdk/_configuration/__init__.py", line 136, in _import_config_components
2022-06-01T15:59:35.215-04:00 component_impl = entry_point.load()
2022-06-01T15:59:35.215-04:00 File "/var/task/pkg_resources/__init__.py", line 2470, in load
2022-06-01T15:59:35.215-04:00 self.require(*args, **kwargs)
2022-06-01T15:59:35.215-04:00 File "/var/task/pkg_resources/__init__.py", line 2493, in require
2022-06-01T15:59:35.215-04:00 items = working_set.resolve(reqs, env, installer, extras=self.extras)
2022-06-01T15:59:35.215-04:00 File "/var/task/pkg_resources/__init__.py", line 800, in resolve
2022-06-01T15:59:35.215-04:00 raise VersionConflict(dist, req).with_context(dependent_req)
2022-06-01T15:59:35.215-04:00 pkg_resources.ContextualVersionConflict: (protobuf 3.13.0 (/var/task), Requirement.parse('protobuf>=3.15.0'), {'googleapis-common-protos'})
2022-06-01T15:59:35.215-04:00 Failed to auto initialize opentelemetry
2022-06-01T15:59:35.215-04:00 Traceback (most recent call last):
2022-06-01T15:59:35.215-04:00 File "/opt/python/opentelemetry/instrumentation/auto_instrumentation/sitecustomize.py", line 123, in initialize
2022-06-01T15:59:35.215-04:00 _load_configurators()
2022-06-01T15:59:35.215-04:00 File "/opt/python/opentelemetry/instrumentation/auto_instrumentation/sitecustomize.py", line 109, in _load_configurators
2022-06-01T15:59:35.215-04:00 raise exc
2022-06-01T15:59:35.215-04:00 File "/opt/python/opentelemetry/instrumentation/auto_instrumentation/sitecustomize.py", line 105, in _load_configurators
2022-06-01T15:59:35.215-04:00 entry_point.load()().configure(auto_instrumentation_version=__version__) # type: ignore
2022-06-01T15:59:35.215-04:00 File "/var/task/opentelemetry/sdk/_configuration/__init__.py", line 215, in configure
2022-06-01T15:59:35.215-04:00 self._configure(**kwargs)
2022-06-01T15:59:35.215-04:00 File "/var/task/opentelemetry/sdk/_configuration/__init__.py", line 231, in _configure
2022-06-01T15:59:35.215-04:00 _initialize_components(kwargs.get("auto_instrumentation_version"))
2022-06-01T15:59:35.215-04:00 File "/var/task/opentelemetry/sdk/_configuration/__init__.py", line 181, in _initialize_components
2022-06-01T15:59:35.215-04:00 trace_exporters, log_exporters = _import_exporters(
2022-06-01T15:59:35.215-04:00 File "/var/task/opentelemetry/sdk/_configuration/__init__.py", line 149, in _import_exporters
2022-06-01T15:59:35.215-04:00 for (exporter_name, exporter_impl,) in _import_config_components(
2022-06-01T15:59:35.215-04:00 File "/var/task/opentelemetry/sdk/_configuration/__init__.py", line 136, in _import_config_components
2022-06-01T15:59:35.215-04:00 component_impl = entry_point.load()
2022-06-01T15:59:35.215-04:00 File "/var/task/pkg_resources/__init__.py", line 2470, in load
2022-06-01T15:59:35.215-04:00 self.require(*args, **kwargs)
2022-06-01T15:59:35.215-04:00 File "/var/task/pkg_resources/__init__.py", line 2493, in require
2022-06-01T15:59:35.215-04:00 items = working_set.resolve(reqs, env, installer, extras=self.extras)
2022-06-01T15:59:35.215-04:00 File "/var/task/pkg_resources/__init__.py", line 800, in resolve
2022-06-01T15:59:35.215-04:00 raise VersionConflict(dist, req).with_context(dependent_req)
2022-06-01T15:59:35.266-04:00 pkg_resources.ContextualVersionConflict: (protobuf 3.13.0 (/var/task), Requirement.parse('protobuf>=3.15.0'), {'googleapis-common-protos'})
```

We've encountered the same issue with other libraries. My question is whether there are any best practices or recommendations for dealing with this issue a little better. Should the Lambda layer maybe publish its list of dependencies for each version, so that the service using it knows what to expect? I feel like this introduces a very loose dependency that will only be caught at runtime, which seems problematic to me. Hope that makes sense. I searched older posts and couldn't find anything relevant.

Many thanks in advance,
Juan
0 answers · 1 vote · 43 views · asked 15 days ago
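Until a layer like this publishes a dependency manifest for each version, one pragmatic guard is to align the function's own pins with the layer's constraints so the conflict surfaces at build time instead of at runtime. A sketch for the protobuf case above (the bound is taken from the traceback; the assumption is that no other conflicts exist):

```
# requirements.txt
# Pin protobuf into the range the layer's googleapis-common-protos requires,
# per the ContextualVersionConflict in the logs above.
protobuf>=3.15.0,<4
```

Separately, running `pip check` in CI against the function's packages combined with an unzipped copy of the layer can catch these mismatches before deploy.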

Unable to pass aws.events.event.json in input transformer template

I am using EventBridge to trigger a Batch run through a custom put-events request (using boto3). My request is as follows:

```python
INPUT_DATA = {
    "data": [List<Dict>],  # i.e. the data key has a list value where each list item is a dict
    "sample_key": "sample_string_value"
}
client = boto3.client("events")
response = client.put_events(Entries=[
    {
        "Source": "my.app.com",
        "Detail": json.dumps(INPUT_DATA),
        "DetailType": "sample-type",
        "EventBusName": "my-event-bus"
    }
])
```

In order to pass data from EventBridge to Batch, I have the following input transformer template:

```terraform
input_paths = {}
input_template = <<EOF
{
  "ContainerOverrides": {
    "Environment": [
      {
        "Name": "PAYLOAD",
        "Value": "<aws.events.event.json>"
      }
    ]
  }
}
EOF
```

where `aws.events.event.json` is the pre-defined variable for the event payload (i.e. the detail key's value) as a *string*. When I run the script, I can see the `triggeredRule` metric at `1`, but the Batch invocation fails, i.e. a `failedInvocations` count of `1` is also seen in the metrics. At the same time, if I replace `aws.events.event.json` with `aws.events.rule-arn`, `aws.events.rule-name` or `aws.events.event.ingestion-time`, the Batch run is triggered and the correct values of the latter variables are seen in the job description's environment section. I tried to refer to a similar issue [here](https://repost.aws/questions/QUCMU-UIYoThyQqlkCn2sWEQ/eventbridge-input-transformer-example-doesnt-work) but it does not seem to solve the issue. Can someone suggest where the issue is in the above input transformer while using `aws.events.event.json`? Is it due to the format of the `INPUT_DATA` that I am sending? Would appreciate any hint. Thanks
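One hedged guess at the failure mode: `<aws.events.event.json>` expands to the event as raw JSON, so wrapping it in quotes produces invalid JSON after substitution (the payload's own quotes are not escaped), which would explain an invocation failure that scalar variables like `aws.events.rule-name` do not trigger. A sketch with the placeholder used as a bare JSON value instead; this is an assumption, and if the target field (a Batch environment variable here) strictly requires a string, injecting a JSON object may still be rejected, in which case extracting scalar fields via `input_paths` would be the fallback:

```terraform
# Sketch: the placeholder stands alone as the JSON value, not inside quotes,
# so the substituted template remains valid JSON.
input_template = <<EOF
{
  "ContainerOverrides": {
    "Environment": [
      {
        "Name": "PAYLOAD",
        "Value": <aws.events.event.json>
      }
    ]
  }
}
EOF
```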
1 answer · 0 votes · 33 views · asked 20 days ago

How to use if-then-else and loop construct in input transformer in EventBridge?

Is there a way to define **if-then-else** and **looping** constructs using JSONPath while configuring the "input path" for the EventBridge Input Transformer?

E.g. for the following input:

```
{
  "category": {
    "name": "Nike Shoes",
    "level": 2,
    "translation": null,
    "ancestors": {
      "edges": [
        {
          "node": {
            "name": "Shoes",
            "slug": "shoes",
            "level": 0
          }
        },
        {
          "node": {
            "name": "Running Shoes",
            "slug": "running-shoes",
            "level": 1
          }
        }
      ]
    }
  }
}
```

I need the output to be:

```
{
  "categories.lvl0": "Shoes",
  "categories.lvl1": "Shoes > Running shoes",
  "categories.lvl2": "Shoes > Running shoes > Nike shoes"
}
```

The following is the Python logic for the output I wish to achieve:

```python
if node["category"]["level"] != 0:
    category_hierarchy = list(map(lambda x: x["node"]["name"], node["category"]["ancestors"]["edges"]))
    category_hierarchy.append(new_item["categoryName"])
    for i in range(len(category_hierarchy)):
        new_item[f'categories.lvl{i}'] = " > ".join(category_hierarchy[0:i+1])
```

If the level of the main category ("Nike shoes" here) is not equal to 0, then I want to loop through its ancestors and define variables of the form `categories.lvl(n)` with the logic defined in column 2 below, to get the values defined in column 3:

| Variable | Logic | Value required |
| --- | --- | --- |
| category.lvl0 | ancestor category with level 0 | Shoes |
| category.lvl1 | ancestor category with level 0 > ancestor category with level 1 | Shoes > Running shoes |
| category.lvl2 | ancestor category with level 0 > ancestor category with level 1 > main category (with level 2) | Shoes > Running shoes > Nike shoes |

I could frame the following JSONPath construct for now, which in plain English represents: "if the category level is 0, then proceed to output names of the ancestor categories".

JSONPath: `$..edges[?(@.level!=0)]..name`

Output:

```
[
  "Shoes",
  "Running shoes"
]
```

However, I am not sure how to proceed further.
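EventBridge input transformers only substitute values extracted by input paths into a static template; they have no conditional or looping constructs, so this transformation cannot be expressed there. A common workaround is to put a small Lambda between the rule and the final target. A sketch, under the assumption that the category object arrives under the event's `detail` key:

```python
# Hypothetical Lambda target that performs the if/loop logic the
# input transformer cannot express (the event shape is an assumption).
def handler(event, context):
    category = event["detail"]["category"]
    names = [edge["node"]["name"] for edge in category["ancestors"]["edges"]]
    if category["level"] != 0:
        names.append(category["name"])
    result = {}
    for i in range(len(names)):
        result[f"categories.lvl{i}"] = " > ".join(names[: i + 1])
    return result  # e.g. {"categories.lvl0": "Shoes", "categories.lvl1": "Shoes > Running Shoes", ...}
```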
0 answers · 0 votes · 33 views · asked a month ago

Sync DynamoDB to S3

What is the best way to sync my DynamoDB tables to S3, so that I can perform serverless 'big data' queries using Athena? The data must be kept in sync without any intervention. The frequency of sync would depend on the cost: ideally daily, but perhaps weekly. I have had this question for a long time. I will cover what I have considered, and why I don't like the options.

1) AWS Glue Elastic Views. Sounds like this will do the job with no code, but it was announced 18 months ago and there have been no updates since. It's not generally available, and there is no information on when it might be.

2) Use DynamoDB native export, following this blog: https://aws.amazon.com/blogs/aws/new-export-amazon-dynamodb-table-data-to-data-lake-amazon-s3/. I actually already use this method for one-off data transfers that I kick off manually and then configure in Athena. I have two issues with this option. The first is that, to my knowledge, the export cannot be scheduled natively. The blog suggests using the CLI to kick off exports, and I assume the writer intends that the CLI would need scheduling on a cron job somewhere. I don't run any servers for this. I imagine I could do it via a scheduled Lambda with an SDK. The second issue is that the export path in S3 always includes a unique export ID. This means I can't configure the Athena table to point to a static location for the data and just switch over to the new data after a scheduled export. Perhaps I could write another Lambda to move the data to a static location after the export has finished, but it seems a shame to have to do so much work, and I've not seen that covered anywhere before.

3) I can use Data Pipeline as described in https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBPipeline.html. This post is more about backing data up than making it accessible to Athena.

I feel like this use case must be so common, and yet none of the ideas I've seen online are really complete. I was wondering if anyone had any ideas or experiences that would be useful here?
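For option 2, the manual CLI step can be replaced by a Lambda on an EventBridge schedule that calls the native export API. A minimal sketch with boto3 (table ARN, bucket, and prefix are placeholders; the table needs point-in-time recovery enabled):

```python
import boto3

dynamodb = boto3.client("dynamodb")

def handler(event, context):
    # Kick off a native DynamoDB export to S3; run this on a daily
    # EventBridge schedule instead of a cron box.
    response = dynamodb.export_table_to_point_in_time(
        TableArn="arn:aws:dynamodb:us-east-1:123456789012:table/MyTable",  # placeholder
        S3Bucket="my-athena-data-bucket",                                  # placeholder
        S3Prefix="dynamodb-exports/",
        ExportFormat="DYNAMODB_JSON",
    )
    return response["ExportDescription"]["ExportArn"]
```

The unique-export-ID path problem would still need the follow-up move/copy step described above, or an Athena table re-pointed at the newest prefix after each run.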
2 answers · 0 votes · 11 views · asked 2 months ago

Should I use Cognito Identity Pool OIDC JWT Connect Tokens in the AWS API Gateway?

I noticed this question from 4 years ago: https://repost.aws/questions/QUjjIB-M4VT4WfOnqwik0l0w/verify-open-id-connect-token-generated-by-cognito-identity-pool

So I was curious and I looked at the JWT token being returned from the Cognito Identity Pool. Its `aud` field was my identity pool ID and its `iss` field was "https://cognito-identity.amazonaws.com", and it turns out that you can see the OIDC config at "https://cognito-identity.amazonaws.com/.well-known/openid-configuration" and grab the public keys at "https://cognito-identity.amazonaws.com/.well-known/jwks_uri". Since I have access to the keys, that means I can freely validate OIDC tokens produced by the Cognito Identity Pool. Moreover, I should also be able to pass them into an API Gateway with a JWT authorizer. This would allow me to effectively gate my API Gateway behind a Cognito Identity Pool without any extra Lambda authorizers or needing IAM authentication.

Use case: I want to create a serverless Lambda app that's gated behind SAML authentication using Okta. Okta does not allow you to use their JWT authorizer without purchasing extra add-ons for some reason. I could use IAM authentication on the gateway instead, but I'm afraid of losing information such as the user's ID, group, name, email, etc. Using the JWT directly preserves this information and passes it to the Lambda.

Is this a valid approach? Is there something I'm missing? Or is there a better way? Does the IAM method preserve user attributes...?
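For illustration, a sketch of how such a JWT authorizer might be declared on an HTTP API in SAM, using the token fields observed above as issuer and audience (the identity pool ID is a placeholder, and whether API Gateway accepts these identity-pool tokens end to end is exactly the open question here):

```yaml
MyHttpApi:
  Type: AWS::Serverless::HttpApi
  Properties:
    Auth:
      DefaultAuthorizer: IdentityPoolJwt
      Authorizers:
        IdentityPoolJwt:
          JwtConfiguration:
            issuer: "https://cognito-identity.amazonaws.com"
            audience:
              - "us-east-1:11111111-2222-3333-4444-555555555555"  # placeholder identity pool ID
          IdentitySource: "$request.header.Authorization"
```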
0 answers · 0 votes · 12 views · asked 2 months ago

How can we create a Lambda which uses a Braket D-Wave device?

We are trying to deploy a Lambda with some code which works in a notebook. The code is rather simple and uses the D-Wave DW_2000Q_6 device. The problem is that when we execute the Lambda (a container Lambda, due to size problems), it gives us the following error:

```json
{
  "errorMessage": "[Errno 30] Read-only file system: '/home/sbx_user1051'",
  "errorType": "OSError",
  "stackTrace": [
    " File \"/var/lang/lib/python3.8/imp.py\", line 234, in load_module\n return load_source(name, filename, file)\n",
    " File \"/var/lang/lib/python3.8/imp.py\", line 171, in load_source\n module = _load(spec)\n",
    " File \"<frozen importlib._bootstrap>\", line 702, in _load\n",
    " File \"<frozen importlib._bootstrap>\", line 671, in _load_unlocked\n",
    " File \"<frozen importlib._bootstrap_external>\", line 843, in exec_module\n",
    " File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\n",
    " File \"/var/task/lambda_function.py\", line 6, in <module>\n from dwave.system.composites import EmbeddingComposite\n",
    " File \"/var/task/dwave/system/__init__.py\", line 15, in <module>\n import dwave.system.flux_bias_offsets\n",
    " File \"/var/task/dwave/system/flux_bias_offsets.py\", line 22, in <module>\n from dwave.system.samplers.dwave_sampler import DWaveSampler\n",
    " File \"/var/task/dwave/system/samplers/__init__.py\", line 15, in <module>\n from dwave.system.samplers.clique import *\n",
    " File \"/var/task/dwave/system/samplers/clique.py\", line 32, in <module>\n from dwave.system.samplers.dwave_sampler import DWaveSampler, _failover\n",
    " File \"/var/task/dwave/system/samplers/dwave_sampler.py\", line 31, in <module>\n from dwave.cloud import Client\n",
    " File \"/var/task/dwave/cloud/__init__.py\", line 21, in <module>\n from dwave.cloud.client import Client\n",
    " File \"/var/task/dwave/cloud/client/__init__.py\", line 17, in <module>\n from dwave.cloud.client.base import Client\n",
    " File \"/var/task/dwave/cloud/client/base.py\", line 89, in <module>\n class Client(object):\n",
    " File \"/var/task/dwave/cloud/client/base.py\", line 736, in Client\n @cached.ondisk(maxage=_REGIONS_CACHE_MAXAGE)\n",
    " File \"/var/task/dwave/cloud/utils.py\", line 477, in ondisk\n directory = kwargs.pop('directory', get_cache_dir())\n",
    " File \"/var/task/dwave/cloud/config.py\", line 455, in get_cache_dir\n return homebase.user_cache_dir(\n",
    " File \"/var/task/homebase/homebase.py\", line 150, in user_cache_dir\n return _get_folder(True, _FolderTypes.cache, app_name, app_author, version, False, use_virtualenv, create)[0]\n",
    " File \"/var/task/homebase/homebase.py\", line 430, in _get_folder\n os.makedirs(final_path)\n",
    " File \"/var/lang/lib/python3.8/os.py\", line 213, in makedirs\n makedirs(head, exist_ok=exist_ok)\n",
    " File \"/var/lang/lib/python3.8/os.py\", line 213, in makedirs\n makedirs(head, exist_ok=exist_ok)\n",
    " File \"/var/lang/lib/python3.8/os.py\", line 223, in makedirs\n mkdir(name, mode)\n"
  ]
}
```

It seems that the library tries to write to some files which are not in the /tmp folder. I'm wondering if this is possible to do, and if not, what the alternatives are.

Imports used:

```python
import boto3
from braket.ocean_plugin import BraketDWaveSampler
from dwave.system.composites import EmbeddingComposite
from neal import SimulatedAnnealingSampler
```
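The traceback shows the D-Wave client resolving its on-disk cache under the (read-only) home directory. Lambda only permits writes under `/tmp`, so one workaround to try is redirecting the standard home/cache locations there before the `dwave` imports run. A sketch, under the assumption that the client honors `HOME`/`XDG_CACHE_HOME` when resolving its cache directory:

```python
import os

# Assumption: redirecting the home and XDG cache locations to the writable
# /tmp filesystem is enough for homebase.user_cache_dir() to succeed.
os.environ["HOME"] = "/tmp"
os.environ["XDG_CACHE_HOME"] = "/tmp/.cache"

# These imports must come after the environment tweak, since the cache
# directory is resolved at import time (per the traceback above).
import boto3
from braket.ocean_plugin import BraketDWaveSampler
from dwave.system.composites import EmbeddingComposite
from neal import SimulatedAnnealingSampler
```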
1 answer · 0 votes · 19 views · asked 2 months ago

How to create a (Serverless) SageMaker Endpoint using an existing TensorFlow .pb (frozen model) file?

Note: I am a senior developer, but am very new to the topic of machine learning.

I have two frozen TensorFlow model weight files: `weights_face_v1.0.0.pb` and `weights_plate_v1.0.0.pb`. I also have some Python code using TensorFlow 2 that loads the models and handles basic inference. The models detect faces and license plates respectively, and the surrounding code converts an input image to a numpy array and applies blurring to the areas of the image that had detections.

I want to get a SageMaker endpoint so that I can run inference on the models. I initially tried using a regular Lambda function (container based), but that is too slow for our use case. A SageMaker endpoint should give us GPU inference, which should be much faster. I am struggling to find out how to do this. From what I can tell from reading the documentation and watching some YouTube videos, I need to create my own Docker container. As a start, I can use for example `763104351884.dkr.ecr.us-east-1.amazonaws.com/tensorflow-inference:2.8.0-gpu-py39-cu112-ubuntu20.04-sagemaker`. However, I can't find any solid documentation on how I would implement my other code. How do I send an image to SageMaker? Who tells it to convert the image to a numpy array? How does it know the tensor names? How do I install additional requirements? How can I use the detections to apply blurring on the image, and how can I return the resulting image?

Can someone here please point me in the right direction? I searched a lot but can't find any example code or blogs that explain this process. Thank you in advance! Your help is much appreciated.
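One possible direction, sketched with the SageMaker Python SDK rather than a hand-rolled container (all names, paths, and the packaging layout below are assumptions): the TensorFlow serving images expect a SavedModel rather than a frozen graph, so the `.pb` weights would first need converting, and the pre/post-processing (numpy conversion, blurring) would live in an `inference.py` packed alongside the model.

```python
# Sketch: deploy a converted SavedModel to a GPU-backed real-time endpoint.
# Assumed model.tar.gz layout: 1/saved_model.pb plus code/inference.py
# (with code/requirements.txt for extra dependencies).
from sagemaker.tensorflow import TensorFlowModel

model = TensorFlowModel(
    model_data="s3://my-bucket/models/face/model.tar.gz",          # placeholder
    framework_version="2.8",
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",  # placeholder
)
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g4dn.xlarge",  # GPU instance for real-time inference
)
```

One caveat worth noting: SageMaker serverless inference endpoints run on CPU only, so GPU inference implies a real-time (instance-backed) endpoint as sketched here.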
1 answer · 0 votes · 7 views · asked 2 months ago

JumpCloud Serverless Lambda Function timeout error

Hello Geeks, hope all are doing well.

I'm facing an issue while running a serverless application Lambda function. I use this application to download files from a remote node and store them in an S3 bucket. It was working fine previously; all of a sudden it stopped fetching files from the source location. While debugging the issue, I observed that it was taking a lot of time to complete a test event. In the CloudWatch logs I'm getting the error logs below.

```
START RequestId: 052226a9-5344-45f1-88bf-5c00242baee0 Version: $LATEST
END RequestId: 052226a9-5344-45f1-88bf-5c00242baee0
REPORT RequestId: 052226a9-5344-45f1-88bf-5c00242baee0 Duration: 180625.18 ms Billed Duration: 180000 ms Memory Size: 192 MB Max Memory Used: 193 MB Init Duration: 587.37 ms
XRAY TraceId: 1-626104fb-16ae94a33273f6404d180e41 SegmentId: 1010be2c3227cfab Sampled: true
REPORT RequestId: 052226a9-5344-45f1-88bf-5c00242baee0 Duration: 180625.18 ms Billed Duration: 180000 ms Memory Size: 192 MB Max Memory Used: 193 MB Init Duration: 587.37 ms
XRAY TraceId: 1-626104fb-16ae94a33273f6404d180e41 SegmentId: 1010be2c3227cfab Sampled: true
2022-04-21T07:20:17.417Z 052226a9-5344-45f1-88bf-5c00242baee0 Task timed out after 180.63 seconds
```

I have tried increasing the memory and timeout parameters but am still getting the same error. In the X-Ray trace logs I'm getting the response below.

```
serverlessrepo-JumpCloud--DirectoryInsightsFunctio-fgqp218AtLpY AWS::Lambda
serverlessrepo-JumpCloud--DirectoryInsightsFunctio-fgqp218AtLpY OK 202 17ms
Dwell Time OK - 47ms
Attempt #1 Error (4xx) 200 3.03min
Attempt #2 Error (4xx) 200 3.00min
Attempt #3 Error (4xx) 200 3.00min
serverlessrepo-JumpCloud--DirectoryInsightsFunctio-fgqp218AtLpY AWS::Lambda::Function
serverlessrepo-JumpCloud--DirectoryInsightsFunctio-fgqp218AtLpY Error (4xx) - 3.01min
Initialization OK - 587ms
Invocation Error (4xx) - 3.01min
serverlessrepo-JumpCloud--DirectoryInsightsFunctio-fgqp218AtLpY Error (4xx) - 3.00min
Initialization OK - 611ms
Invocation Error (4xx) - 3.00min
serverlessrepo-JumpCloud--DirectoryInsightsFunctio-fgqp218AtLpY Error (4xx) - 3.00min
Initialization OK - 549ms
Invocation Error (4xx) - 3.00min
```

Can anyone advise if there is anything I missed while debugging?

Thanks,
Aman
2 answers · 0 votes · 12 views · asked 2 months ago

App Runner actions work very slow (2-10 minutes) and deployer provides incorrect error message

App Runner actions work very slowly for me: create/pause/resume may take 2-5 minutes for the simple demo image (`public.ecr.aws/aws-containers/hello-app-runner:latest`), and create-service when the image is not found takes ~10 minutes.

Example #1: 5 minutes to deploy the hello-app image

```
04-17-2022 05:59:55 PM [AppRunner] Service status is set to RUNNING.
04-17-2022 05:59:55 PM [AppRunner] Deployment completed successfully.
04-17-2022 05:59:44 PM [AppRunner] Successfully routed incoming traffic to application.
04-17-2022 05:58:33 PM [AppRunner] Health check is successful. Routing traffic to application.
04-17-2022 05:57:01 PM [AppRunner] Performing health check on port '8000'.
04-17-2022 05:56:51 PM [AppRunner] Provisioning instances and deploying image.
04-17-2022 05:56:42 PM [AppRunner] Successfully pulled image from ECR.
04-17-2022 05:54:56 PM [AppRunner] Service status is set to OPERATION_IN_PROGRESS.
04-17-2022 05:54:55 PM [AppRunner] Deployment started.
```

Example #2: 10 minutes when the image is not found

```
04-17-2022 05:35:41 PM [AppRunner] Failed to pull your application image. Be sure you configure your service with a valid access role to your ECR repository.
04-17-2022 05:25:47 PM [AppRunner] Starting to pull your application image.
```

Example #3: 10 minutes when the image is not found

```
04-17-2022 06:46:24 PM [AppRunner] Failed to pull your application image. Be sure you configure your service with a valid access role to your ECR repository.
04-17-2022 06:36:31 PM [AppRunner] Starting to pull your application image.
```

But a 404 error should be detected immediately and fail much faster, because there is no need to retry a 404 for 10 minutes, right? Additionally, the error message "Failed to pull your application image. Be sure you configure your service with a valid access role to your ECR repository" is very confusing: it doesn't show the image name and doesn't provide the actual cause. A 404 is not related to access errors like 401 or 403, correct? Can App Runner action performance and error messages be improved?
0 answers · 0 votes · 10 views · asked 2 months ago

CodeBuild failing with invalidParameterError on build with a valid parameter given

I'm trying to create a Lambda layer in Serverless and have it deploy to AWS, creating the Lambda layer for use in other deployments. However, I'm running into an issue where "Lambda:PublishLayerVersion" is failing because of CompatibleArchitectures. I'm wondering if there's a mistake I'm missing, or if Serverless is having an issue because the Action in the error uses a lowercase 'p' ("Lambda:publishLayerVersion") while the docs here, https://docs.aws.amazon.com/lambda/latest/dg/API_PublishLayerVersion.html, state it is "Lambda:PublishLayerVersion". It is also possible that the SDK error is legitimate and the "CompatibleArchitectures" parameter isn't supported in "us-west-1", but I have a hard time finding docs that tell me what is supported in different regions.

serverless.yml spec:

```yaml
provider:
  name: aws
  runtime: python3.8
  lambdaHashingVersion: 20201221
  region: us-west-1
  stage: ${opt:stage, 'stage'}
  deploymentBucket:
    name: name.serverless.${self:provider.region}.deploys
  deploymentPrefix: serverless
  iamRoleStatements:
    - Effect: Allow
      Action:
        - s3:PutObject
        - s3:GetObject
      Resource: "arn:aws:s3:::name.serverless.${self:provider.region}/*"
    - Effect: Allow
      Action:
        - cloudformation:DescribeStacks
      Resource: "*"
    - Effect: Allow
      Action:
        - lambda:PublishLayerVersion
      Resource: "*"

layers:
  aws-abstraction-services-layer:
    # name: aws-abstraction-services-layer
    path: aws-abstraction-layer
    description: "This is the goal of uploading our abstractions to a layer to upload and use to save storage in deployment packages"
    compatibleRuntimes:
      - python3.8
    allowedAccounts:
      - '*'

plugins:
  - serverless-layers
  - serverless-python-requirements
```

Output of the build log:

```
[Container] 2022/04/12 17:14:41 Running command serverless deploy
Running "serverless" from node_modules
Deploying aws-services-layer to stage stage (us-west-1)
[ LayersPlugin ]: => default
... ○ Downloading requirements.txt from bucket...
... ○ requirements.txt The specified key does not exist..
... ○ Changes identified ! Re-installing...
... ○ pip install -r requirements.txt -t .
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
aws-sam-cli 1.40.1 requires requests==2.25.1, but you have requests 2.27.1 which is incompatible.
WARNING: Running pip as root will break packages and permissions. You should install packages reliably by using venv: https://pip.pypa.io/warnings/venv
WARNING: You are using pip version 21.1.2; however, version 22.0.4 is available.
You should consider upgrading via the '/root/.pyenv/versions/3.8.10/bin/python3.8 -m pip install --upgrade pip' command.
Collecting requests
Downloading requests-2.27.1-py2.py3-none-any.whl (63 kB)
Collecting charset-normalizer~=2.0.0
Downloading charset_normalizer-2.0.12-py3-none-any.whl (39 kB)
Collecting certifi>=2017.4.17
Downloading certifi-2021.10.8-py2.py3-none-any.whl (149 kB)
Collecting idna<4,>=2.5
Downloading idna-3.3-py3-none-any.whl (61 kB)
Collecting urllib3<1.27,>=1.21.1
Downloading urllib3-1.26.9-py2.py3-none-any.whl (138 kB)
Installing collected packages: urllib3, idna, charset-normalizer, certifi, requests
Successfully installed certifi-2021.10.8 charset-normalizer-2.0.12 idna-3.3 requests-2.27.1 urllib3-1.26.9
... ○ Created layer package /codebuild/output/src847310000/src/.serverless/aws-services-layer-stage-python-default.zip (0.8 MB)
... ○ Uploading layer package...
... ○ OK...
ServerlessLayers error:
Action: Lambda:publishLayerVersion
Params: {"Content":{"S3Bucket":"name.serverless.us-west-1.deploys","S3Key":"serverless/aws-services-layer/stage/layers/aws-services-layer-stage-python-default.zip"},"LayerName":"aws-services-layer-stage-python-default","Description":"created by serverless-layers plugin","CompatibleRuntimes":["python3.8"],"CompatibleArchitectures":["x86_64","arm64"]}
AWS SDK error: CompatibleArchitectures are not supported in us-west-1. Please remove the CompatibleArchitectures value from your request and try again
[Container] 2022/04/12 17:14:47 Command did not exit successfully serverless deploy exit status 1
[Container] 2022/04/12 17:14:47 Phase complete: BUILD State: FAILED
[Container] 2022/04/12 17:14:47 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: serverless deploy. Reason: exit status 1
[Container] 2022/04/12 17:14:47 Entering phase POST_BUILD
[Container] 2022/04/12 17:14:47 Phase complete: POST_BUILD State: SUCCEEDED
[Container] 2022/04/12 17:14:47 Phase context status code: Message:
```
1 answer · 1 vote · 33 views · asked 3 months ago

AWS SAM: set the authorization cache TTL in the resource template (AWS::Serverless::Api)

Hi all,

I am using SAM to deploy my serverless application, which consists of a REST API and a Lambda authorizer. The REST API is not triggering a Lambda; it integrates other public services. When declaring the [AWS::Serverless::Api](https://docs.aws.amazon.com/fr_fr/serverless-application-model/latest/developerguide/sam-resource-api.html) and its [Auth](https://docs.aws.amazon.com/fr_fr/serverless-application-model/latest/developerguide/sam-property-api-apiauth.html) attribute, I cannot find a way to configure the authorization cache's TTL as in the [AWS::ApiGateway::Authorizer](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-authorizer.html#cfn-apigateway-authorizer-authorizerresultttlinseconds) resource. Am I missing something? If not, is there any reason the authorization cache's TTL configuration is not made available in the [AWS::Serverless::Api](https://docs.aws.amazon.com/fr_fr/serverless-application-model/latest/developerguide/sam-resource-api.html) element?

This potentially missing feature is something minor for us and does not block us in our project. It is more of a nice-to-have, as I would prefer not to copy/paste the whole OpenAPI specification directly into the template file, but rather use the SAM feature to specify the API via the [AWS::Serverless::Api](https://docs.aws.amazon.com/fr_fr/serverless-application-model/latest/developerguide/sam-resource-api.html#sam-api-definitionuri) resource's *DefinitionUri* attribute. This makes it possible to not have an API definition in the template, but to embed this definition in a local file which will be automatically uploaded to S3 during the SAM deploy step.

Thanks
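For Lambda authorizers declared under the SAM `Auth` attribute, the cache TTL appears to be exposed as `ReauthorizeEvery` on the authorizer's `Identity` object, which maps to `authorizerResultTtlInSeconds` underneath. A sketch, assuming a request-based Lambda authorizer (values are placeholders):

```yaml
MyApi:
  Type: AWS::Serverless::Api
  Properties:
    StageName: prod
    Auth:
      DefaultAuthorizer: MyLambdaAuthorizer
      Authorizers:
        MyLambdaAuthorizer:
          FunctionArn: !GetAtt AuthorizerFunction.Arn
          FunctionPayloadType: REQUEST
          Identity:
            Headers:
              - Authorization
            ReauthorizeEvery: 300  # authorization cache TTL in seconds
```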
1 answer · 0 votes · 59 views · asked 3 months ago

Slow Lambda responses under heavier load

Hi,

Currently, I'm doing load testing using Gatling and I have an issue with my Lambdas. I have two Lambdas: one written in Java 8 and one written in Python. I have a test where I make one request with 120 concurrent users, then ramp from 120 to 400 users over 1 minute, and then Gatling makes requests with 400 constant users per second for 2 minutes. There is weird behaviour in these Lambdas because the response times are very high. There is no logic in the Lambdas; they just return a String. Here are some screenshots of the Gatling reports: [Java Report][1] [Python Report][2]

I can add that I did some tests when the Lambdas were warmed up and the behaviour was the same. I'm using API Gateway to run my Lambdas. Do you have any idea why the response times are so high? Sometimes I receive an HTTP error that says: `i.n.h.s.SslHandshakeTimeoutException: handshake timed out after 10000ms`

Here is also my Gatling simulation code:

```java
public class OneEndpointSimulation extends Simulation {
    HttpProtocolBuilder httpProtocol = http
        .baseUrl("url") // Here is the root for all relative URLs
        .acceptHeader("text/html,application/xhtml+xml,application/json,application/xml;q=0.9,*/*;q=0.8") // Here are the common headers
        .acceptEncodingHeader("gzip, deflate")
        .acceptLanguageHeader("en-US,en;q=0.5")
        .userAgentHeader("Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:16.0) Gecko/20100101 Firefox/16.0");

    ScenarioBuilder scn = scenario("Scenario 1 Workload 2")
        .exec(http("Get all activities")
        .get("/dev")).pause(1);

    {
        setUp(scn.injectOpen(
            atOnceUsers(120),
            rampUsersPerSec(120).to(400).during(60),
            constantUsersPerSec(400).during(Duration.ofMinutes(1))
        ).protocols(httpProtocol));
    }
}
```

I also checked the logs and turned on X-Ray for API Gateway, but there was nothing there. The average latency for these services was 14 ms. What can be the reason for the slow Lambda responses?

[1]: https://i.stack.imgur.com/sCx9M.png
[2]: https://i.stack.imgur.com/SuHU0.png
0 answers · 0 votes · 8 views · asked 3 months ago

Load testing serverless stack using Gatling

Hi,

I'm doing some load testing on my serverless app and I see that it is unable to handle higher loads. I'm using API Gateway, Lambda (Java 8) and DynamoDB. The code that I'm using is the same as in this [link](https://github.com/Aleksandr-Filichkin/aws-lambda-runtimes-performance/tree/main/java-graalvm-lambda/src/lambda-java). For my load testing I'm using Gatling. The load I configured is a request with 120 users, then over one minute I ramp users from 120 to 400, and then for 2 minutes I make requests with 400 constant users per second. The problem is that my stack is unable to handle 400 users per second. Is that normal? I thought serverless would scale nicely and work like a charm. Here is my Gatling simulation code:

```java
public class OneEndpointSimulation extends Simulation {
    HttpProtocolBuilder httpProtocol = http
        .baseUrl("url") // Here is the root for all relative URLs
        .acceptHeader("text/html,application/xhtml+xml,application/json,application/xml;q=0.9,*/*;q=0.8") // Here are the common headers
        .acceptEncodingHeader("gzip, deflate")
        .acceptLanguageHeader("en-US,en;q=0.5")
        .userAgentHeader("Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:16.0) Gecko/20100101 Firefox/16.0");

    ScenarioBuilder scn = scenario("Scenario 1 Workload 2")
        .exec(http("Get all activities")
        .get("/activitiesv2")).pause(1);

    {
        setUp(scn.injectOpen(
            atOnceUsers(120),
            rampUsersPerSec(120).to(400).during(60),
            constantUsersPerSec(400).during(Duration.ofMinutes(2))
        ).protocols(httpProtocol));
    }
}
```

Here are the Gatling report results: [Image link](https://ibb.co/68SYDsb)

I'm also receiving an error: **i.n.h.s.SslHandshakeTimeoutException: handshake timed out after 10000ms** (usually at approx 50 requests). It happens when Gatling starts injecting 400 constant users per second. I'm wondering what could be wrong. Is it too much for API Gateway, Lambda and DynamoDB?
2 answers · 0 votes · 56 views · asked 3 months ago

Synchronous queue implementation on AWS

I have a queue in which producers add data and consumers read and process it. In the diagram below, producers add data to the queue as (Px, Tx, X), for example (P3, T3, 10), where P3 is the producer ID, T3 is the number of packets required to process, and 10 is the data. For (P3, T3, 10), a consumer needs to read 3 packets from producer P3. So in the image below, one of the consumers needs to pick (P3, T3, 10), (P3, T3, 15) and (P3, T3, 5) and perform a function on the data that just adds all the numbers, i.e. 10 + 15 + 5 = 30, and saves 30 to the DB. Similarly, there is a case for producer P1: (P1, T2, 1) and (P1, T2, 10), sum = 10 + 1 = 11 to the DB.

I have read about AWS Kinesis, but it has an issue: all consumers read the same data, which doesn't fit my case. The major issue is how we can constrain consumers so that:

1 - Data is read from the queue synchronously.
2 - If one of the consumers has read (P1, T2, 1), then only this consumer can read the next packet from producer P1 (this point is the major issue for me, as the consumer needs to add those two numbers).
3 - This can also cause deadlock, as some consumers will be forced to read data from a particular producer only, because they have already read one packet from that producer and now have to wait for the next packet to perform the addition.

I have also read about SQS and MQ, but the above challenges still exist for them too.

![Image](https://i.stack.imgur.com/7b3Mm.png) [https://i.stack.imgur.com/7b3Mm.png](https://i.stack.imgur.com/7b3Mm.png)

My current approach: for N producers I have started N EC2 instances; producers send data to EC2 through WebSocket (WebSocket is not a requirement) and I can process it there easily. As you can see, having N EC2 instances to process N producers causes budget issues. How can I improve on this solution?
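Requirements 1 and 2 map fairly closely onto SQS FIFO message groups: messages sharing a `MessageGroupId` are delivered in order, and while one consumer has unacknowledged messages from a group in flight, no other consumer receives messages from that group. A sketch using the producer ID as the group ID (queue URL is a placeholder; with multiple consumer processes, the per-producer accumulator should live in shared storage such as DynamoDB rather than in memory, since a group is only locked while its messages are in flight):

```python
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/packets.fifo"  # placeholder

def send_packet(producer_id: str, packets_required: int, value: int, seq: str):
    # Producer side: grouping by producer ID gives per-producer ordering and
    # exclusive consumption while that group's messages are in flight.
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"t": packets_required, "value": value}),
        MessageGroupId=producer_id,
        MessageDeduplicationId=seq,  # unique per message (or enable content-based dedup)
    )

def consume_once(state: dict):
    # Consumer side: accumulate values per producer until the expected
    # packet count arrives, then "save to DB" (printed here for the sketch).
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=10,
        AttributeNames=["MessageGroupId"],
    )
    for msg in resp.get("Messages", []):
        group = msg["Attributes"]["MessageGroupId"]
        body = json.loads(msg["Body"])
        values = state.setdefault(group, [])
        values.append(body["value"])
        if len(values) == body["t"]:
            print(group, sum(values))  # e.g. P3 -> 10 + 15 + 5 = 30
            state[group] = []
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```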
1 answer · 0 votes · 18 views · asked 3 months ago