
Questions tagged with AWS Lambda



Old data not inserted into Timestream: RejectedRecordsException

Hello! We have an IoT Core Rule which is fired when a MQTT message is published to certain topic. The message structure is: ``` { "triggers": ["door"], "datetime": "2022-06-01T00:00:00Z", "errCodes": [100], "strength": 107, "net": "GMS", "eco": 0, "light": 0, "def": 0, "fan": 0, "defrost": 1, "Mdef": 0, "comp": 0, "door": 0, "Tcond": 38.1, "Tevap": 1, "Tint": 3.8, "topic": "abc/ar/data/NVC1/test-vscode-3" } ``` We have a requirement where data buffered in remote devices could be sent to IoT Core, so we need to send a "datetime" field (the second one) in the payload. The Iot Core Rule fires a AWS Lambda function, which generates a multi-measure record to be finally sent to Timestream: ``` { "Dimensions":[ { "Name":"hw_model", "Value":"NVC1" }, { "Name":"serial_device", "Value":"test-vscode-3" } ], "MeasureName":"multimeasuredata", "MeasureValueType":"MULTI", "MeasureValues":[ { "Name":"Tint", "Value":"3.8", "Type":"DOUBLE" }, { "Name":"Tevap", "Value":"1", "Type":"DOUBLE" } ], "Time":"1654041600000" } ``` The Timestream table retention periods are: Memory store retention: 45 days Magnetic store retention: 180 days Magnetic store writes: ENABLED The exception thrown is: ``` { "errorType":"RejectedRecordsException", "errorMessage":"One or more records have been rejected. See RejectedRecords for details.", "name":"RejectedRecordsException", "$fault":"client", "$metadata":{ "httpStatusCode":419, "requestId":"VKL72WIIMCBGQNWMMSQLK7CAAQ", "attempts":1, "totalRetryDelay":0 }, "RejectedRecords":[ { "Reason":"The record timestamp is outside the time range [2022-06-17T15:21:13.756Z, 2022-06-27T22:51:04.174Z) of the data ingestion window.", "RecordIndex":0 } ], "__type":"com.amazonaws.timestream.v20181101#RejectedRecordsException", "message":"One or more records have been rejected. See RejectedRecords for details.", "stack":[ "RejectedRecordsException: One or more records have been rejected. See RejectedRecords for details.", " at deserializeAws_json1_0RejectedRecordsExceptionResponse (/var/task/node_modules/@aws-sdk/client-timestream-write/dist-cjs/protocols/Aws_json1_0.js:947:23)", " at deserializeAws_json1_0WriteRecordsCommandError (/var/task/node_modules/@aws-sdk/client-timestream-write/dist-cjs/protocols/Aws_json1_0.js:888:25)", " at processTicksAndRejections (node:internal/process/task_queues:96:5)", " at async /var/task/node_modules/@aws-sdk/middleware-serde/dist-cjs/deserializerMiddleware.js:7:24", " at async /var/task/node_modules/@aws-sdk/middleware-signing/dist-cjs/middleware.js:11:20", " at async StandardRetryStrategy.retry (/var/task/node_modules/@aws-sdk/middleware-retry/dist-cjs/StandardRetryStrategy.js:51:46)", " at async /var/task/node_modules/@aws-sdk/middleware-logger/dist-cjs/loggerMiddleware.js:6:22", " at async Runtime.exports.handler (/var/task/lambda.js:58:20)" ] } ``` We are not falling out the memory retention period (45 days) in this example (27 days), but the range returned in the exception is (IDK why) [2022-06-17T15:21:13.756Z, 2022-06-27T22:51:04.174Z) Do you have any ideas of why this is the range and hence why the record cannot be inserted? Thanks !
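One thing worth checking: as far as I understand, the lower bound of the rejected-records window tracks the memory store retention of the table actually being written to, and a roughly ten-day window doesn't line up with a 45-day setting. A minimal sketch (boto3, placeholder database/table names) to compare the record timestamp against the table's configured memory-store window:

```
from datetime import datetime, timedelta, timezone

import boto3

ts_write = boto3.client("timestream-write")

# Placeholder names: use the database/table the Lambda actually writes to.
table = ts_write.describe_table(DatabaseName="my_db", TableName="my_table")["Table"]
retention_hours = int(table["RetentionProperties"]["MemoryStoreRetentionPeriodInHours"])

now = datetime.now(timezone.utc)
window_start = now - timedelta(hours=retention_hours)
record_time = datetime.fromtimestamp(1654041600000 / 1000, tz=timezone.utc)

print(f"memory store window starts: {window_start.isoformat()}")
print(f"record time {record_time.isoformat()} inside window: {record_time >= window_start}")
```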
1 answer · 0 votes · 27 views · asked a day ago

Unit value in RDS CloudWatch Alarm is Null

Hello, I have been trying to obtain a value from an SNS message passed from a CloudWatch Alarm for the DatabaseConnections Metric. The value of the Unit is null hence I cannot parse the message to obtain the DBIdentifier value. Is there a reason why the CloudWatch alarm does not have a value for the Unit? Looking at the AWS RDS documentation, the value for DatabaseConnections Metric, Unit should be Count but the alarm gives null. https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-metrics.html#rds-cw-metrics-instance Below is the message obtained from the SNS message body: ``` { "AlarmName": "awsrds-poc-test-High-DB-Connections", "AlarmDescription": "When the DB-Connections is lower than 1", "AWSAccountId": "<##########>", "AlarmConfigurationUpdatedTimestamp": "2022-06-17T08:54:32.735+0000", "NewStateValue": "ALARM", "NewStateReason": "Threshold Crossed: 1 out of the last 1 datapoints [0.0 (17/06/22 08:50:00)] was less than the threshold (1.0) (minimum 1 datapoint for OK -> ALARM transition).", "StateChangeTime": "2022-06-17T08:55:15.402+0000", "Region": "US East (N. Virginia)", "AlarmArn": "arn:aws:cloudwatch:us-east-1:678932343753:alarm:awsrds-poc-test-High-DB-Connections", "OldStateValue": "OK", "OKActions": [], "AlarmActions": [ "arn:aws:sns:us-east-1:678932343753:sns-alarm" ], "InsufficientDataActions": [], "Trigger": { "MetricName": "DatabaseConnections", "Namespace": "AWS/RDS", "StatisticType": "Statistic", "Statistic": "AVERAGE", "Unit": null, "Dimensions": [ { "value": "<######>", "name": "DBInstanceIdentifier" } ], "Period": 300, "EvaluationPeriods": 1, "DatapointsToAlarm": 1, "ComparisonOperator": "LessThanThreshold", "Threshold": 1, "TreatMissingData": "", "EvaluateLowSampleCountPercentile": "" } } ```
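Whatever the reason Unit comes back null, the DB identifier is still present under Trigger.Dimensions, so the handler can be keyed off that instead. A minimal sketch, assuming the alarm JSON arrives wrapped in an SNS record as in the message above:

```
import json


def lambda_handler(event, context):
    for record in event.get("Records", []):
        alarm = json.loads(record["Sns"]["Message"])
        dims = alarm["Trigger"]["Dimensions"]
        # Dimension keys are lower-case "name"/"value" in the SNS alarm payload above.
        db_id = next((d["value"] for d in dims if d["name"] == "DBInstanceIdentifier"), None)
        print(f"Alarm {alarm['AlarmName']} fired for DB instance {db_id}")
```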
0 answers · 0 votes · 29 views · asked 8 days ago

My local MongoDB is refusing to connect with AWS SAM Lambda in python

I have set up an AWS Lambda function using an AWS SAM app, and I have installed MongoDB locally on my machine. I am trying to connect the Lambda function to MongoDB. My code is below:

```
import json
import pymongo

client = pymongo.MongoClient('mongodb://localhost:27017/')
mydb = client['Employee']

def lambda_handler(event, context):
    information = mydb.employeeInformation
    record = {
        'FirstName': 'Rehan',
        'LastName': 'CH',
        'Department': "IT"
    }
    information.insert_one(record)
    print("Record added")
    return {
        "statusCode": 200,
        "body": json.dumps(
            {
                "message": "hello world",
                # "location": ip.text.replace("\n", "")
            }
        ),
    }
```

When I run the app with the command

```
sam local invoke
```

it throws the error below:

```
[ERROR] ServerSelectionTimeoutError: localhost:27017: [Errno 111] Connection refused, Timeout: 30s, Topology Description: <TopologyDescription id: 62b16aa14a95a3e56eb0e7cb, topology_type: Unknown, servers: [<ServerDescription ('localhost', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('localho raise ServerSelectionTimeoutError(, line 227, in _select_servers_looprtn_support
```

I have searched for this error and found some related posts, but they did not help, which is why I am posting again. This is my first time working with MongoDB. Can someone tell me how to resolve this error, or where I am going wrong?
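A likely culprit: `sam local invoke` runs the handler inside a Docker container, so `localhost` resolves to the container itself rather than your machine. A minimal sketch, assuming Docker Desktop's `host.docker.internal` alias and a hypothetical `MONGO_URI` environment variable (on Linux you may need the host's IP or a shared Docker network instead):

```
import json
import os

import pymongo

# MONGO_URI is a hypothetical environment variable; host.docker.internal works on
# Docker Desktop (macOS/Windows). On Linux, use the host IP or --docker-network.
MONGO_URI = os.environ.get("MONGO_URI", "mongodb://host.docker.internal:27017/")
client = pymongo.MongoClient(MONGO_URI, serverSelectionTimeoutMS=5000)
mydb = client["Employee"]


def lambda_handler(event, context):
    mydb.employeeInformation.insert_one(
        {"FirstName": "Rehan", "LastName": "CH", "Department": "IT"}
    )
    return {"statusCode": 200, "body": json.dumps({"message": "record added"})}
```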
2 answers · 0 votes · 27 views · asked 8 days ago

RequestParameters for Api Event in Serverless::Function in JSON - how does it work?

I'm trying to add some query string parameters for a Lambda function, using a SAM template written in JSON. All the examples are in YAML? Can anyone point out where I'm going wrong. Here's the snippet of the definition: ``` "AreaGet": { "Type": "AWS::Serverless::Function", "Properties": { "Handler": "SpeciesRecordLambda::SpeciesRecordLambda.Functions::AreaGet", "Runtime": "dotnet6", "CodeUri": "", "MemorySize": 256, "Timeout": 30, "Role": null, "Policies": [ "AWSLambdaBasicExecutionRole" ], "Events": { "AreaGet": { "Type": "Api", "Properties": { "Path": "/", "Method": "GET", "RequestParameters": [ "method.request.querystring.latlonl": { "Required": "true" }, "method.request.querystring.latlonr": { "Required": "true" } ] } } } } }, ``` and here's the error message I get: > Failed to create CloudFormation change set: Transform AWS::Serverless-2016-10-31 failed with: Invalid Serverless Application Specification document. Number of errors found: 1. Resource with id [AreaGet] is invalid. Event with id [AreaGet] is invalid. Invalid value for 'RequestParameters' property. Keys must be in the format 'method.request.[querystring|path|header].{value}', e.g 'method.request.header.Authorization'. Sorry I know this is a bit of a beginners question, but I'm a bit lost as to what to do, as I can't find any information about this using JSON. Maybe you can't do it using JSON? Thanks, Andy.
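The YAML examples translate to a list of single-key objects, whereas the snippet above places bare key/value pairs directly inside the array, which isn't valid JSON. A hedged guess at the equivalent JSON shape (with `Required` given as a boolean):

```
"RequestParameters": [
  { "method.request.querystring.latlonl": { "Required": true } },
  { "method.request.querystring.latlonr": { "Required": true } }
]
```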
1 answer · 0 votes · 30 views · asked 8 days ago

How to properly raise a Python Lambda exception to an AppSync response resolver

Hello, I'm trying to implement some custom exceptions in a Python AWS Lambda. The Lambda function is correctly hooked up to an AppSync CfnResolver (I get the proper response when testing the query manually in the AWS console), and raising a custom exception in that Lambda function for testing also generates an error as expected. The issue I'm running into is that if I manually define the response_mapping_template of that CfnResolver, the errorType of the resulting response turns into "lambda:unhandled" instead of the "CustomException" I get when I don't define the response_mapping_template. Leaving the response_mapping_template undefined works, but that's not really a solution, since the response_mapping_template will be needed in the future. I've looked everywhere and tried different approaches that would let me use my own response_mapping_template and still get the correct errorType in the response, but nothing has resolved the issue. Unfortunately the problem is in VTL; otherwise I could just look at the code directly to see what's wrong. Things I've tried:

* Both the 2017 and 2018 request_mapping_template versions. This changed nothing.
* Removing the response_mapping_template. This worked, but losing the response_mapping_template functionality isn't viable.
* Looking for the default response_mapping_template, but it was too obscure to find in the SDK. (It would be nice if someone could show me where it is.)
* A few different response_mapping_template syntaxes. I found roughly three different ones on Stack Overflow and AWS pages; none made any difference.

I'm stumped after looking for two days now. Are you not supposed to use normal exceptions in Lambdas? Are you supposed to populate the response object with the error info? Am I overlooking any documentation?
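For reference, the pattern usually suggested when supplying a custom response template is to forward the Lambda error explicitly; otherwise AppSync reports the generic `lambda:unhandled` type. A minimal sketch along those lines (not the resolver's actual default, which I haven't located either):

```
#if($ctx.error)
    ## surface the original Lambda error type/message instead of lambda:unhandled
    $util.error($ctx.error.message, $ctx.error.type)
#end
$util.toJson($ctx.result)
```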
0 answers · 0 votes · 9 views · asked 9 days ago

IAM Policy - AWS Transfer Family

Hello, This question may seem a bit long-winded since I will be describing the relevant background information to hopefully avoid back and forth, and ultimately arrive at a resolution. I appreciate your patience. I have a Lambda function that is authenticating users via Okta for SFTP file transfers, and the Lambda function is called through an API Gateway. My company has many different clients, so we chose this route for authentication rather than creating user accounts for them in AWS. Everything has been working fine during my testing process except for one key piece of functionality. Since we have many customers, we don't want them to be able to interact or even see another customer's folder within the dedicated S3 bucket. The directory structure has the main S3 bucket at the top level and within that bucket resides each customer's folder. From there, they can create subfolders, upload files, etc. I have created the IAM policy - which is an inline policy as part of an assumed role - as described in this document: https://docs.aws.amazon.com/transfer/latest/userguide/users-policies.html. My IAM policy looks exactly like the one shown in the "Creating a session policy for an Amazon S3 bucket" section of the documentation. The "transfer" variables are defined in the Lambda function. Unfortunately, those "transfer" variables do not seem to be getting passed to the IAM policy. When I look at the Transfer Family endpoint log, it is showing access denied after successfully connecting (confidential information is redacted): <user>.39e979320fffb078 CONNECTED SourceIP=<source_ip> User=<user> HomeDir=/<s3_bucket>/<customer_folder>/ Client="SSH-2.0-Cyberduck/8.3.3.37544 (Mac OS X/12.4) (x86_64)" Role=arn:aws:iam::<account_id>:role/TransferS3AccessRole Kex=diffie-hellman-group-exchange-sha256 Ciphers=aes128-ctr,aes128-ctr <user>.39e979320fffb078 ERROR Message="Access denied" However, if I change the "transfer" variables in the Lambda function to include the actual bucket name and update the IAM policy accordingly, everything works as expected; well, almost everything. With this change, I am not able to restrict access and, thus, any customer could interact with any other customer's folders and files. Having the ability to restrict access by using the "transfer" variables is an integral piece of functionality. I've searched around the internet - including this forum - and cannot seem to find the answer to this problem. Likely, I have overlooked something and hopefully it is an easy fix. Looking forward to getting this resolved. Thank you very much in advance!
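For comparison, the `${transfer:*}` variables are documented for session policies, i.e. the `Policy` string returned by the identity-provider Lambda alongside `Role` and `HomeDirectory`; if the statements only live on the role itself they may never be substituted. A rough sketch of that response shape (bucket, folder, and role ARN are placeholders):

```
import json

# Placeholder scope-down policy using the documented transfer:* session-policy variables.
SESSION_POLICY = json.dumps({
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowListingOfUserFolder",
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": ["arn:aws:s3:::${transfer:HomeBucket}"],
            "Condition": {
                "StringLike": {"s3:prefix": ["${transfer:HomeFolder}/*", "${transfer:HomeFolder}"]}
            },
        },
        {
            "Sid": "HomeDirObjectAccess",
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject", "s3:GetObjectVersion"],
            "Resource": "arn:aws:s3:::${transfer:HomeDirectory}*",
        },
    ],
})


def lambda_handler(event, context):
    # ... Okta authentication happens here ...
    return {
        "Role": "arn:aws:iam::111122223333:role/TransferS3AccessRole",  # placeholder
        "Policy": SESSION_POLICY,
        "HomeDirectory": "/my-bucket/customer-folder",                  # placeholder
    }
```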
4 answers · 0 votes · 49 views · asked 11 days ago

Athena federated query on PostgreSQL

Hi, I am trying to execute queries on a PostgreSQL database I created in AWS. I added a data source to Athena: I created the PostgreSQL data source and the Lambda function. In the Lambda function I set:

* the default connection string
* spill_bucket and spill_prefix (I set the same value for both: 'athena-spill'; on the S3 page I cannot see any athena-spill bucket)
* the security group --> I set the security group I created to access the db
* the subnet --> I set one of the database subnets

I deployed the Lambda function, but I received an error and had to add a new environment variable containing the connection string but named 'dbname_connection_string'. After adding this new env variable I am able to see the database in Athena, but when I try to execute any query on this database, such as:

```
select * from tests_summary limit 10;
```

I receive this error:

```
GENERIC_USER_ERROR: Encountered an exception[com.amazonaws.SdkClientException] from your LambdaFunction[arn:aws:lambda:eu-central-1:449809321626:function:data-production-athena-connector-nina-lambda] executed in context[retrieving meta-data] with message[Unable to execute HTTP request: Connect to s3.eu-central-1.amazonaws.com:443 [s3.eu-central-1.amazonaws.com/52.219.170.25] failed: connect timed out]

This query ran against the "public" database, unless qualified by the query. Please post the error message on our forum or contact customer support with Query Id: 3366bd80-143e-459c-a4da-5350b5ab4a77
```

What could be causing the problem? Thanks a lot!
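The connect timeout to `s3.eu-central-1.amazonaws.com` suggests the connector Lambda, which runs inside your VPC subnet, has no route to S3 for the spill bucket. A common fix is a gateway VPC endpoint for S3 on that subnet's route table (or a NAT gateway); a minimal boto3 sketch with placeholder IDs:

```
import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")

# Placeholder IDs: the VPC the connector runs in and the route table of its subnet.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-xxxxxxxx",
    ServiceName="com.amazonaws.eu-central-1.s3",
    RouteTableIds=["rtb-xxxxxxxx"],
)
```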
2 answers · 0 votes · 42 views · asked 12 days ago

Cognito Migration Trigger errors when Lambda execution time too high

I am currently in the process of validating the migration of a set of users to a cognito user pool via the migration trigger, the essence of the lambda function for the trigger can be boiled down to: ``` def lambda_handler(event, context): response = requests.post(external_auth_api_url, json_with_user_and_pass) if response.status_code = 200: event["response"] = { "userAttributes": { "username": event["userName"], "email": event["userName"], "email_verified": "true" }, "finalUserStatus": "CONFIRMED", "messageAction": "SUPPRESS" } return event ``` This is doing an external rest call to the old system the user was signing in through as per the documentation and returning a success response. The issue I noticed is that if the lambda function time is too long, for example, the average execution time of this lambda for me right now via ngrok is about 5 seconds total, cognito is failing when I call initiateAuth with USERNAME_PASSWORD flow and returning the following: ``` botocore.errorfactory.UserNotFoundException: An error occurred (UserNotFoundException) when calling the InitiateAuth operation: Exception migrating user in app client xxxxxxxxxxxx ``` I managed to validate that this issue was occurring by simply returning a success response without doing an external REST call and essentially bringing the lambda function runtime down to milliseconds, in which case I got the tokens as expected and the user was successfully migrated. I also tested this by simply having a lambda function like: ``` def lambda_handler(event, context): time.sleep(5) event["response"] = { "userAttributes": { "username": event["userName"], "email": event["userName"], "email_verified": "true" }, "finalUserStatus": "CONFIRMED", "messageAction": "SUPPRESS" } return event ``` This fails with the same error response as above. If anyone can advise, I am not sure if there is a maximum time the migration trigger will wait that is not documented, I wouldn't expected the trigger to have such a thing if the migration trigger's intention is to do external REST calls which may or may not be slow. Thanks in advance!
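If I remember the Lambda-trigger documentation correctly, Cognito only waits a few seconds (around five, with retries) for a trigger to respond, regardless of the function's own timeout, which would match the behaviour described. A sketch that keeps the external call inside that budget by failing fast (the auth URL is a placeholder environment variable):

```
import os

import requests

EXTERNAL_AUTH_URL = os.environ["EXTERNAL_AUTH_URL"]  # hypothetical env var for the old system


def lambda_handler(event, context):
    resp = requests.post(
        EXTERNAL_AUTH_URL,
        json={"username": event["userName"], "password": event["request"]["password"]},
        timeout=3,  # fail fast so we stay inside Cognito's response window
    )
    if resp.status_code == 200:
        event["response"] = {
            "userAttributes": {
                "username": event["userName"],
                "email": event["userName"],
                "email_verified": "true",
            },
            "finalUserStatus": "CONFIRMED",
            "messageAction": "SUPPRESS",
        }
        return event
    raise Exception("Migration authentication failed")
```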
1 answer · 2 votes · 19 views · asked 14 days ago

CodeDeploy AfterAllowTestTraffic ECS hook doesn't behave as expected

I am following [this AWS guide](https://docs.aws.amazon.com/codedeploy/latest/userguide/tutorial-ecs-deployment-with-hooks.html) to create a CodeDeploy blue/green deployment on ECS. In the example I've modified the Lambda hook function `AfterAllowTestTraffic` to make a simple `axios.get(<application_load_balancer_dns:test-listener-port>)` call for the test listener on port 8080, which should return 200 if the replacement task is successfully deployed. However, it seems that the Lambda function is called too early during `AfterAllowTestTraffic`, because it originally hits the original task set (blue) and returns 200 resulting in a successful deploy, despite documentation saying the below: >AfterAllowTestTraffic – Use to run tasks after the test listener serves traffic to the replacement task set This is unexpected behaviour because I deliberately deployed a broken replacement (green) task using this hook, so it should have failed since the test traffic would be serving the broken green task. I tested this by implementing a timeout in the Lambda function, and then pinging the URL 10 seconds later - this returned a 502 error as expected. I also verified in ECS logs that the axios call incorrectly hit the blue task, before subsequently hitting the green one. Access logs for the application load balancer also show the axios call to the blue target group. There must be something I'm missing here, but it doesn't make any sense to us. Any insight would be much appreciated!
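Not an explanation for why the first request reached the blue task set, but for anyone reproducing this, here is roughly what the hook looks like in Python with a short polling loop before reporting back to CodeDeploy (the test URL is a placeholder; the status call is required for the hook result to count):

```
import time
import urllib.request

import boto3

codedeploy = boto3.client("codedeploy")
TEST_URL = "http://my-alb-dns-name:8080/"  # placeholder for the test listener URL


def lambda_handler(event, context):
    status = "Failed"
    for _ in range(5):
        try:
            with urllib.request.urlopen(TEST_URL, timeout=5) as resp:
                if resp.status == 200:
                    status = "Succeeded"
                    break
        except Exception:
            pass
        time.sleep(10)  # give the test listener time to route to the replacement task set

    codedeploy.put_lifecycle_event_hook_execution_status(
        deploymentId=event["DeploymentId"],
        lifecycleEventHookExecutionId=event["LifecycleEventHookExecutionId"],
        status=status,
    )
    return status
```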
0 answers · 0 votes · 9 views · asked 14 days ago

AWS CLI Greengrass v2 create-deployment using JSON to import a Lambda does not import the Lambda artifact

I am importing lambdas as components for ggv2 using the AWS CLI. The lambdas import successfully but when I deploy to greengrass v2 I get the following error: > Error occurred while processing deployment. {deploymentId=********************, serviceName=DeploymentService, currentState=RUNNING}java.util.concurrent.ExecutionException: com.aws.greengrass.componentmanager.exceptions.NoAvailableComponentVersionException: No local or cloud component version satisfies the requirements. Check whether the version constraints conflict and that the component exists in your AWS account with a version that matches the version constraints. If the version constraints conflict, revise deployments to resolve the conflict. Component devmgmt.device.scheduler version constraints: thinggroup/dev-e01 requires =3.0.61. at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) The version exists as it was imported successfully but the artifact is not transferred to the Greengrass Core. If I import the lambda from the AWS Management Console then it works as expected. Here is my CLI json input file and the command I am running. What am I missing? `aws greengrassv2 create-component-version --cli-input-json file://lambda-import-worker.json` *lambda-import-worker.json file:* ``` { "lambdaFunction": { "lambdaArn": "arn:aws:lambda:*******:***************:function:devmgmt-worker:319", "componentName": "devmgmt.device.scheduler", "componentVersion": "3.0.61", "componentPlatforms": [ { "name": "Linux amd64", "attributes": { "os": "All", "platform": "All" } } ], "componentDependencies": { "aws.greengrass.TokenExchangeService":{ "versionRequirement": ">=2.0.0 <3.0.0", "dependencyType": "HARD" }, "aws.greengrass.LambdaLauncher": { "versionRequirement": ">=2.0.0 <3.0.0", "dependencyType": "HARD" }, "aws.greengrass.LambdaRuntimes": { "versionRequirement": ">=2.0.0 <3.0.0", "dependencyType": "SOFT" } }, "componentLambdaParameters": { "maxQueueSize": 1000, "maxInstancesCount": 100, "maxIdleTimeInSeconds": 120, "timeoutInSeconds": 60, "statusTimeoutInSeconds": 60, "pinned": true, "inputPayloadEncodingType": "json", "environmentVariables": {}, "execArgs": [], "linuxProcessParams": { "isolationMode": "NoContainer" }, "eventSources": [ { "topic": "device/notice", "type": "PUB_SUB" }, { "topic": "$aws/things/thingnameManager/shadow/name/ops/update/accepted", "type": "IOT_CORE" }, { "topic": "dev/device", "type": "IOT_CORE" } ] } } } ```
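One quick check is whether the component version the deployment asks for actually exists in the same account and Region the core device is talking to. A sketch using boto3 (the ARN and Region are placeholders):

```
import boto3

# Placeholder ARN/Region: must match the account and Region the core device deploys from.
gg = boto3.client("greengrassv2", region_name="us-east-1")
arn = "arn:aws:greengrass:us-east-1:123456789012:components:devmgmt.device.scheduler"

for v in gg.list_component_versions(arn=arn)["componentVersions"]:
    print(v["componentVersion"], v["arn"])
```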
1 answer · 0 votes · 20 views · asked 14 days ago

Run a Lambda function for RDS using the DBInstanceIdentifier

Hello, I have been trying to automate stopping RDS instance using a Lambda Function when the number of connections made to the instances are low than 1. I have used an EventBridge that triggers the Lambda function when the Alarm created goes into In Alarm. From this, it stops all the RDS instances even those with connections. I understand that the issue is in the Lambda function since it loops through all instances and turns them off. I was inquiring if there is a way to pass the DBInstanceIdentifier of the instance in alarm state only to the lambda function for it to only shut down the instance in which the alarm is on. Below is the lambda code used. import boto3 import os target_db = None region = os.environ['AWS_REGION'] rds = boto3.client('rds', region_name=region) def get_tags_for_db(db): instance_arn = db['DBInstanceArn'] instance_tags = rds.list_tags_for_resource(ResourceName=instance_arn) return instance_tags['TagList'] def get_tags_for_db_cluster(db): instance_arn = db['DBClusterArn'] instance_tags = rds.list_tags_for_resource(ResourceName=instance_arn) return instance_tags['TagList'] def lambda_handler(event, context): dbs = rds.describe_db_instances() readReplica = [] for db in dbs['DBInstances']: readReplicaDB = db['ReadReplicaDBInstanceIdentifiers'] readReplica.extend(readReplicaDB) print("readReplica : " + str(readReplica)) for db in dbs['DBInstances']: db_id = db['DBInstanceIdentifier'] db_engine = db['Engine'] print('DB ID : ' + str(db_id)) db_tags = get_tags_for_db(db) print("All Tags : " + str(db_tags)) tag = next(iter(filter(lambda tag: tag['Key'] == 'AutoStop' and tag['Value'].lower() == 'true', db_tags)), None) print("AutoStop Tag : " + str(tag)) if db_engine not in ['aurora-mysql','aurora-postgresql']: if db_id not in readReplica and len(readReplica) == 0: if tag: target_db = db print("DB Details : " + str(target_db)) db_id = target_db['DBInstanceIdentifier'] db_status = target_db['DBInstanceStatus'] print("DB ID : " + str(db_id)) print("DB Status : " + str(db_status)) if db_status == "available": AutoStopping = rds.stop_db_instance(DBInstanceIdentifier=db_id) print("Stopping DB : " + str(db_id)) else: print("Database already stopped : " + str(db_id)) else: print("AutoStop Tag Key not set for Database to Stop...") else: print("Cannot stop or start a Read-Replica Database...") dbs = rds.describe_db_clusters() readReplica = [] for db in dbs['DBClusters']: readReplicaDB = db['ReadReplicaIdentifiers'] readReplica.extend(readReplicaDB) print("readReplica : " + str(readReplica)) for db in dbs['DBClusters']: db_id = db['DBClusterIdentifier'] db_engine = db['Engine'] print('DB ID : ' + str(db_id)) db_tags = get_tags_for_db_cluster(db) print("All Tags : " + str(db_tags)) tag = next(iter(filter(lambda tag: tag['Key'] == 'AutoStop' and tag['Value'].lower() == 'true', db_tags)), None) print("AutoStop Tag : " + str(tag)) if db_engine in ['aurora-mysql','aurora-postgresql']: if db_id not in readReplica and len(readReplica) == 0: if tag: target_db = db db_id = target_db['DBClusterIdentifier'] db_status = target_db['Status'] print("Cluster DB ID : " + str(db_id)) print("Cluster DB Status : " + str(db_status)) if db_status == "available": AutoStopping = rds.stop_db_cluster(DBClusterIdentifier=db_id) print("Stopping Cluster DB : " + str(db_id)) else: print("Cluster Database already stopped : " + str(db_id)) else: print("AutoStop Tag Key not set for Cluster Database to Stop...") else: print("Cannot stop or start a Read-Replica Cluster Database...")
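If the function is triggered by an EventBridge "CloudWatch Alarm State Change" event, the instance identifier should already be in the payload, so the handler can stop just that instance instead of looping over everything. A sketch, with the caveat that the exact event shape should be verified against a sample event from your own rule:

```
import os

import boto3

rds = boto3.client("rds", region_name=os.environ["AWS_REGION"])


def lambda_handler(event, context):
    db_id = None
    # "CloudWatch Alarm State Change" events carry the metric dimensions here;
    # confirm against a sample event before relying on it.
    for metric in event["detail"]["configuration"]["metrics"]:
        dims = metric.get("metricStat", {}).get("metric", {}).get("dimensions", {})
        if "DBInstanceIdentifier" in dims:
            db_id = dims["DBInstanceIdentifier"]
            break

    if not db_id:
        print("No DBInstanceIdentifier found in the event")
        return

    instance = rds.describe_db_instances(DBInstanceIdentifier=db_id)["DBInstances"][0]
    if instance["DBInstanceStatus"] == "available":
        rds.stop_db_instance(DBInstanceIdentifier=db_id)
        print(f"Stopping {db_id}")
    else:
        print(f"{db_id} is not 'available', skipping")
```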
2 answers · 0 votes · 30 views · asked 15 days ago

Best practices for lambda layer dependencies

Hi all, We recently started using the OpenTelemetry lambda layer for python: https://aws-otel.github.io/docs/getting-started/lambda/lambda-python in our serverless applications. We've encountered an issue with dependencies in a few of our projects, where the version of a particular dependency required by the lambda layer would conflict with the version installed in the lambda function itself. For example, the lambda layer had a requirement for protobuf>=3.15.0, whereas our application was using 3.13.0, causing the following error in the logs: ``` 2022-06-01T15:59:35.215-04:00 Configuration of configurator failed 2022-06-01T15:59:35.215-04:00 Traceback (most recent call last): 2022-06-01T15:59:35.215-04:00 File "/opt/python/opentelemetry/instrumentation/auto_instrumentation/sitecustomize.py", line 105, in _load_configurators 2022-06-01T15:59:35.215-04:00 entry_point.load()().configure(auto_instrumentation_version=__version__) # type: ignore 2022-06-01T15:59:35.215-04:00 File "/var/task/opentelemetry/sdk/_configuration/__init__.py", line 215, in configure 2022-06-01T15:59:35.215-04:00 self._configure(**kwargs) 2022-06-01T15:59:35.215-04:00 File "/var/task/opentelemetry/sdk/_configuration/__init__.py", line 231, in _configure 2022-06-01T15:59:35.215-04:00 _initialize_components(kwargs.get("auto_instrumentation_version")) 2022-06-01T15:59:35.215-04:00 File "/var/task/opentelemetry/sdk/_configuration/__init__.py", line 181, in _initialize_components 2022-06-01T15:59:35.215-04:00 trace_exporters, log_exporters = _import_exporters( 2022-06-01T15:59:35.215-04:00 File "/var/task/opentelemetry/sdk/_configuration/__init__.py", line 149, in _import_exporters 2022-06-01T15:59:35.215-04:00 for (exporter_name, exporter_impl,) in _import_config_components( 2022-06-01T15:59:35.215-04:00 File "/var/task/opentelemetry/sdk/_configuration/__init__.py", line 136, in _import_config_components 2022-06-01T15:59:35.215-04:00 component_impl = entry_point.load() 2022-06-01T15:59:35.215-04:00 File "/var/task/pkg_resources/__init__.py", line 2470, in load 2022-06-01T15:59:35.215-04:00 self.require(*args, **kwargs) 2022-06-01T15:59:35.215-04:00 File "/var/task/pkg_resources/__init__.py", line 2493, in require 2022-06-01T15:59:35.215-04:00 items = working_set.resolve(reqs, env, installer, extras=self.extras) 2022-06-01T15:59:35.215-04:00 File "/var/task/pkg_resources/__init__.py", line 800, in resolve 2022-06-01T15:59:35.215-04:00 raise VersionConflict(dist, req).with_context(dependent_req) 2022-06-01T15:59:35.215-04:00 pkg_resources.ContextualVersionConflict: (protobuf 3.13.0 (/var/task), Requirement.parse('protobuf>=3.15.0'), {'googleapis-common-protos'}) 2022-06-01T15:59:35.215-04:00 Failed to auto initialize opentelemetry 2022-06-01T15:59:35.215-04:00 Traceback (most recent call last): 2022-06-01T15:59:35.215-04:00 File "/opt/python/opentelemetry/instrumentation/auto_instrumentation/sitecustomize.py", line 123, in initialize 2022-06-01T15:59:35.215-04:00 _load_configurators() 2022-06-01T15:59:35.215-04:00 File "/opt/python/opentelemetry/instrumentation/auto_instrumentation/sitecustomize.py", line 109, in _load_configurators 2022-06-01T15:59:35.215-04:00 raise exc 2022-06-01T15:59:35.215-04:00 File "/opt/python/opentelemetry/instrumentation/auto_instrumentation/sitecustomize.py", line 105, in _load_configurators 2022-06-01T15:59:35.215-04:00 entry_point.load()().configure(auto_instrumentation_version=__version__) # type: ignore 2022-06-01T15:59:35.215-04:00 File 
"/var/task/opentelemetry/sdk/_configuration/__init__.py", line 215, in configure 2022-06-01T15:59:35.215-04:00 self._configure(**kwargs) 2022-06-01T15:59:35.215-04:00 File "/var/task/opentelemetry/sdk/_configuration/__init__.py", line 231, in _configure 2022-06-01T15:59:35.215-04:00 _initialize_components(kwargs.get("auto_instrumentation_version")) 2022-06-01T15:59:35.215-04:00 File "/var/task/opentelemetry/sdk/_configuration/__init__.py", line 181, in _initialize_components 2022-06-01T15:59:35.215-04:00 trace_exporters, log_exporters = _import_exporters( 2022-06-01T15:59:35.215-04:00 File "/var/task/opentelemetry/sdk/_configuration/__init__.py", line 149, in _import_exporters 2022-06-01T15:59:35.215-04:00 for (exporter_name, exporter_impl,) in _import_config_components( 2022-06-01T15:59:35.215-04:00 File "/var/task/opentelemetry/sdk/_configuration/__init__.py", line 136, in _import_config_components 2022-06-01T15:59:35.215-04:00 component_impl = entry_point.load() 2022-06-01T15:59:35.215-04:00 File "/var/task/pkg_resources/__init__.py", line 2470, in load 2022-06-01T15:59:35.215-04:00 self.require(*args, **kwargs) 2022-06-01T15:59:35.215-04:00 File "/var/task/pkg_resources/__init__.py", line 2493, in require 2022-06-01T15:59:35.215-04:00 items = working_set.resolve(reqs, env, installer, extras=self.extras) 2022-06-01T15:59:35.215-04:00 File "/var/task/pkg_resources/__init__.py", line 800, in resolve 2022-06-01T15:59:35.215-04:00 raise VersionConflict(dist, req).with_context(dependent_req) 2022-06-01T15:59:35.266-04:00 pkg_resources.ContextualVersionConflict: (protobuf 3.13.0 (/var/task), Requirement.parse('protobuf>=3.15.0'), {'googleapis-common-protos'}) ``` We've encountered the same issue with other libraries. My question is whether there are any best practices or recommendations to deal with this issue a little bit better. Should the lambda layer maybe publish its list of dependencies for each version so that the service using it will know what to expect? I feel like this is introducing a very loose dependency that will be only caught in runtime, which seems problematic to me. Hope it makes sense. I searched older posts and couldn't find anything relevant. Many thanks in advance, Juan
0 answers · 1 vote · 43 views · asked 15 days ago

API Gateway WSS Endpoint not found

I've created a WSS chat app using the sample that comes with the AWS dotnet lambda templates. My web front end can connect ok and it creates a record in dynamo but when I try to broadcast a message to all connections I get the following error: `Name or service not known (execute-api.ap-southeast-2.amazonaws.com:443) ` I'm using the following code to set it: var protocol = "https"; //var protocol = "wss"; var domainName = request.RequestContext.DomainName; //var domainName = "ID HERE.execute-api.ap-southeast-2.amazonaws.com"; var stage = request.RequestContext.Stage; // var stage = ""; //var stage = "test"; //var stage = "test/@connections"; var endpoint = $"{protocol}://{domainName}/{stage}"; and it logs the following: ``` API Gateway management endpoint: https://ID HERE.execute-api.ap-southeast-2.amazonaws.com/test ``` Ive tried all the combinations and a custom domain. Im thinking that ap-southeast-2 does not support wss ? Or ... ?? Been stuck on this for a while now. About ready to give up. Anyone got any ideas?? Update: Heres the code for sending the message - it just an updated version of the sample. From the startup: ``` public Functions() { DDBClient = new AmazonDynamoDBClient(); // Grab the name of the DynamoDB from the environment variable setup in the CloudFormation template serverless.template if (Environment.GetEnvironmentVariable(TABLE_NAME_ENV) == null) { throw new ArgumentException($"Missing required environment variable {TABLE_NAME_ENV}"); } ConnectionMappingTable = Environment.GetEnvironmentVariable(TABLE_NAME_ENV) ?? ""; this.ApiGatewayManagementApiClientFactory = (Func<string, AmazonApiGatewayManagementApiClient>)((endpoint) => { return new AmazonApiGatewayManagementApiClient(new AmazonApiGatewayManagementApiConfig { ServiceURL = endpoint, RegionEndpoint = RegionEndpoint.APSoutheast2, // without this I get Credential errors LogResponse = true, // dont see anything extra with these LogMetrics = true, DisableLogging = false }); }); } ``` And the SendMessageFunction: ``` try { // Construct the API Gateway endpoint that incoming message will be broadcasted to. var protocol = "https"; //var protocol = "wss"; var domainName = request.RequestContext.DomainName; //var domainName = "?????.execute-api.ap-southeast-2.amazonaws.com"; var stage = request.RequestContext.Stage; // var stage = ""; //var stage = "test"; //var stage = "test/@connections"; var endpoint = $"{protocol}://{domainName}/{stage}"; context.Logger.LogInformation($"API Gateway management endpoint: {endpoint}"); JObject message = JObject.Parse(request.Body); context.Logger.LogInformation(request.Body); if (!GetRecipient(message, context, out WSMessageRecipient? recipient)) { context.Logger.LogError($"Invalid or empty WSMessageRecipient"); return new APIGatewayProxyResponse { StatusCode = (int)HttpStatusCode.BadRequest, Body = "Nothing to do or invalid request" }; } if (!GetData(message, context, out string? data)) { context.Logger.LogError($"Invalid or empty WSSendMessage"); return new APIGatewayProxyResponse { StatusCode = (int)HttpStatusCode.BadRequest, Body = "Nothing to do or invalid request" }; } var stream = new MemoryStream(UTF8Encoding.UTF8.GetBytes(data!)); if (stream.Length == 0) { context.Logger.LogError($"Empty Stream"); return new APIGatewayProxyResponse { StatusCode = (int)HttpStatusCode.BadRequest, Body = "Empty data stream" }; } // List all of the current connections. In a more advanced use case the table could be used to grab a group of connection ids for a chat group. 
ScanResponse scanResponse = await GetConnectionItems(recipient); // Construct the IAmazonApiGatewayManagementApi which will be used to send the message to. var apiClient = ApiGatewayManagementApiClientFactory(endpoint); context.Logger.LogInformation($"Table scan of {ConnectionMappingTable} got {scanResponse.Items.Count} records."); // Loop through all of the connections and broadcast the message out to the connections. var count = 0; foreach (var item in scanResponse.Items) { var connectionId = item[ConnectionIdField].S; context.Logger.LogInformation($"Posting to connection {count}: {connectionId}"); var postConnectionRequest = new PostToConnectionRequest { ConnectionId = connectionId, Data = stream }; try { stream.Position = 0; await apiClient.PostToConnectionAsync(postConnectionRequest); context.Logger.LogInformation($"Posted to connection {count}: {connectionId}"); count++; } catch (AmazonServiceException e) { // API Gateway returns a status of 410 GONE then the connection is no // longer available. If this happens, delete the identifier // from our DynamoDB table. if (e.StatusCode == HttpStatusCode.Gone) { context.Logger.LogInformation($"Deleting gone connection: {connectionId}"); var ddbDeleteRequest = new DeleteItemRequest { TableName = ConnectionMappingTable, Key = new Dictionary<string, AttributeValue> { {ConnectionIdField, new AttributeValue {S = connectionId}} } }; await DDBClient.DeleteItemAsync(ddbDeleteRequest); } else { context.Logger.LogError( $"Error posting message to {connectionId}: {e.Message}"); context.Logger.LogInformation(e.StackTrace); } } catch (Exception ex) { context.Logger.LogError($"Bugger, something fecked up: {ex.Message}"); context.Logger.LogInformation(ex.StackTrace); } } return new APIGatewayProxyResponse { StatusCode = (int)HttpStatusCode.OK, Body = "Data sent to " + count + " connection" + (count == 1 ? "" : "s") }; } catch (Exception e) { context.Logger.LogInformation("Error Sending Message: " + e.Message); context.Logger.LogInformation(e.StackTrace); return new APIGatewayProxyResponse { StatusCode = (int)HttpStatusCode.InternalServerError, Body = $"Failed to send message: {e.Message}" }; } ```
2 answers · 0 votes · 35 views · asked 15 days ago

Trying to call API with a list of URLs but Lambda is timing out

I'm trying to call the Pagespeed Insights API and save the response back in Dynamo. The Lambda timeout is 15 min. I will eventually need to call the API with about 100 URLs with an average response time of 20-30 sec. What is the best approach on doing this? My current code looks like this: ``` const { v4 } = require('uuid'); const axios = require('axios'); const urls = require('urls.json'); const endpoint = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"; const API_KEY = ""; const dynamodb = require('aws-sdk/clients/dynamodb'); const docClient = new dynamodb.DocumentClient(); const tableName = process.env.SAMPLE_TABLE; var id; const insertRecords = async (_id, _url, _lighthouseResults) => { const metrics = _lighthouseResults.lighthouseResult.audits.metrics.details.items[0]; console.log(metrics); const params = { TableName: tableName, Item: { id: _id, created_at: new Date().toISOString(), URL: _url, metrics }, }; console.log("got to db entry") console.log(_id) console.log(_url) console.log(_lighthouseResults) return docClient.put(params).promise(); // convert to Promise } exports.putItemHandler = async (event) => { // Async function for (const url of urls) { id = v4(); console.log(url + " - " + id); const lighthouseResults = await getLighthouse(url); await insertRecords(id, url, lighthouseResults) // wait until it finish and go to next item .catch((error) => { console.log(error); // throw error; don't care about this error, just continue }); } console.log("Done"); }; const getLighthouse = async (url) => { console.log("inside getLighthouse") try { const resp = await axios.get(endpoint, { params: { key: API_KEY, url: url, category: 'performance', strategy: 'mobile' } }); return resp.data } catch (err) { console.error(err) } } ```
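Since 100 URLs at 20-30 seconds each cannot reliably fit one 15-minute invocation, a common pattern is to fan the URLs out to SQS and let a worker Lambda process one message (or a small batch) per invocation. The handler in the question is Node, but the shape of the pattern looks roughly like this (the queue URL is a placeholder):

```
import json
import os

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = os.environ["QUEUE_URL"]  # placeholder queue


def seed_handler(event, context):
    # One invocation just enqueues the URLs.
    with open("urls.json") as f:
        urls = json.load(f)
    for url in urls:
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps({"url": url}))
    return {"queued": len(urls)}


def worker_handler(event, context):
    # A separate Lambda with an SQS event source processes a small batch per invocation.
    for record in event["Records"]:
        url = json.loads(record["body"])["url"]
        print(f"would call the PageSpeed API and store results for {url}")
```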
3 answers · 0 votes · 43 views · asked 20 days ago

GGv2: Pinned (long lived) Node.js Lambda Times Out

I've created a Node.js Lambda function based on the examples found here (https://github.com/aws-samples/aws-greengrass-lambda-functions) and imported it as a Greengrass V2 component. Additionally, I've configured the Lambda function component as a 'pinned' or 'long-lived' function (i.e., it should remain running in the background). Also, the Lambda function is configured NOT to run in a Greengrass container (i.e., `NoContainer`). Initially, upon deploying the Lambda function, it would not run at all. Then, after increasing the `timeoutInSeconds` value from 3 to 60, I was able to see the function start and run, but then it is promptly killed via `SIGTERM` after ~60 seconds. Increasing the `timeoutInSeconds` value to the max allowed (2147483647) doesn't seem to change the behavior either (and isn't really a good solution). Since a 'pinned' function should be able to run indefinitely, I would think the `timeoutInSeconds` value would not matter to the execution of the function (i.e., Greengrass should not kill it)? I have seen some older comments/notes from other users (https://www.repost.aws/questions/QUJcrxYJosQHe_jTAyaAzYOw/issues-node-js-hello-world-running-core-1-9-2) that this can happen when the `callback()` function is not called in your Lambda's handler function, but I tried this, and it did not seem to fix the issue. I also tried using an asynchronous (`async`) handler, but this didn't behave any differently. Is there another setting that must be configured properly in Greengrass V2? The Lambda component? The Lambda function itself? Do I need construct the Lambda handler in a specific way? Are there any better examples of Lambda functions for Greengrass than what is at the link above? Thanks!
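Not sure which setting is doing the killing, but the pattern the Greengrass Lambda samples tend to use for pinned functions is to run the long-lived work outside the handler (it executes once when the pinned container starts) and keep the handler itself returning quickly. In Python it looks roughly like this; the same shape should translate to Node:

```
import threading
import time


def background_loop():
    while True:
        print("pinned component still alive")
        time.sleep(30)


# Runs once when Greengrass starts the pinned container.
threading.Thread(target=background_loop, daemon=True).start()


def lambda_handler(event, context):
    # Keep this fast; the timeout settings appear to apply to handler invocations,
    # not to the lifetime of the pinned container (assumption worth verifying).
    return {"ok": True}
```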
3 answers · 0 votes · 67 views · asked 20 days ago

AWS compute savings plan commitment calculation

I am trying to understand how the commitment per hour calculation has been done for AWS Savings plan recommendation for past 30 days. There is no document on how to calculate it, how is the calculation done for leaving the on-demand spend? As per the below usage of the services mentioned in below table, AWS cost explorer is recommending below commitment... "You could save an estimated $315 monthly by purchasing the recommended Compute Savings Plan. Based on your past 30 days of usage, we recommend purchasing 1 Savings Plan with a total commitment of $1.390/hour for a 1-year term. With this commitment, we project that you could save an average of $0.43/hour - representing a 17% savings compared to On-Demand. To account for variable usage patterns, this recommendation maximizes your savings by leaving an average $0.65/hour of On-Demand spend. Recommendations require up to 24 hours to update after a purchase." So we came to a conclusion that we should spend 0.65/hour of On-demand and 1.390/hour of commitment? Please suggest and share how we can calculate all this information if I need to do that with a different set of data. | Service| Intance family| instance type| No. of instances| Region | On-demand Spend ($) | On-demand Usage | On-demand rate | SP rate | SP Spend | % discount | | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | | EC2 | t2| micro | 1 | US East (N. Virginia) | 8.352| 720| 0.0116 | 0.0078 | 5.616 | 33% | | EC2 | t2| nano | 1 | EU (Ireland) | 4.536 | 720 | 0.0063 | 0.0048 | 3.456 | 24% | | EC2 | t2 | micro | 6 | EU (Ireland) | 53.837658 | 4,272.83 | 0.0126000000 | 0.0095 | 40.591885 | 25% | | EC2 | t2 | large | 1 | EU (Ireland) | 72.576 | 720 | 0.1008 | 0.0761 | 54.792 | 25% | | EC2 | c6a | large | 1 | EU (Ireland) | 38.4495552| 468.44 | 0.08208 | 0.05854 | 27.4224776 | 29% | | EC2 | c4 | 4xlarge | 1 | EU (Ireland) | 421.73| 466 | 0.905 | 0.667 | 310.822 | 26% | | Fargate | | GB-Hours | | EU (Ireland) | 224.9872755| 50615.81 | 0.0044450000 | 0.0034671 | 175.4900749 | 22% | | Fargate | | vCPU-Hours | | EU (Ireland) | 916.9323152 | 22651.49 | 0.0404800000 | 0.0315744 | 715.2072059 | 22% | | Lambda | | EU-Lambda-GB-Second (Second) | | EU (Ireland) | 28.250059 | 1695000.15 | 0.0000166667 | 0.0000138 | 23.39100207 | 17% | | Lambda | | EU-Request (Requests) | | EU (Ireland) | 0.94 | 4688540 | .20 | .20 | 0.94 | 0% | | **Total **| | | | | 1770.590863 | | 1.363321667 | 1.0587953 | 1357.728645 | |
1 answer · 0 votes · 56 views · asked 22 days ago

How do I update a CloudWatch Synthetics Canary layer version number using the AWS CLI?

Hello, I created a CloudWatch Synthetics Canary via a console blueprint I want to update the active layer version ("39" – bolded below) using the AWS CLI I see it in the Code["SourceLocationArn"] attribute in my describe-canary response The [update-canary operation](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/synthetics/update-canary.html) doesn't have a SourceLocationArn option for the --code value. How do I updated the layer version number using the AWS CLI? Thank you ``` { "Id": "aae1dc97-773b-47bb-ae72-d0bb54eec60c", "Name": "daily-wisdom-texts", "Code": { ** "SourceLocationArn": "arn:aws:lambda:us-east-1:875425895862:layer:cwsyn-daily-wisdom-texts-aae1dc97-773b-47bb-ae72-d0bb54eec60c:39", ** "Handler": "pageLoadBlueprint.handler" }, "ExecutionRoleArn": "arn:aws:iam::875425895862:role/service-role/CloudWatchSyntheticsRole-daily-wisdom-texts-cf9-de04b4eb2bb3", "Schedule": { "Expression": "rate(1 hour)", "DurationInSeconds": 0 }, "RunConfig": { "TimeoutInSeconds": 60, "MemoryInMB": 1000, "ActiveTracing": false }, "SuccessRetentionPeriodInDays": 31, "FailureRetentionPeriodInDays": 31, "Status": { "State": "RUNNING", "StateReasonCode": "UPDATE_COMPLETE" }, "Timeline": { "Created": 1641343713.153, "LastModified": 1652481443.745, "LastStarted": 1652481444.83, "LastStopped": 1641675597.0 }, "ArtifactS3Location": "cw-syn-results-875425895862-us-east-1/canary/us-east-1/daily-wisdom-texts-cf9-de04b4eb2bb3", "EngineArn": "arn:aws:lambda:us-east-1:875425895862:function:cwsyn-daily-wisdom-texts-aae1dc97-773b-47bb-ae72-d0bb54eec60c:57", "RuntimeVersion": "syn-python-selenium-1.0", "Tags": { "blueprint": "heartbeat" } } ```
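As far as I can tell, `SourceLocationArn` is an output-only field; the supported path is to hand `update-canary` a new code package via the `--code` S3/zip fields, and Synthetics publishes the new layer version (and updates that ARN) itself. A boto3 sketch with a placeholder bucket and key:

```
import boto3

synthetics = boto3.client("synthetics", region_name="us-east-1")

# Placeholder bucket/key pointing at the new canary package.
synthetics.update_canary(
    Name="daily-wisdom-texts",
    Code={
        "S3Bucket": "my-canary-code-bucket",
        "S3Key": "pageLoadBlueprint.zip",
        "Handler": "pageLoadBlueprint.handler",
    },
)
```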
1 answer · 0 votes · 12 views · asked 22 days ago

AWS-SDK-JavaScript v3 API call to DynamoDB returns undefined and skips execution of a console.log command

The goal of this code snippet is retreiving all connectionsids of a chat room to reply to a chat sendmessage command in the API Gateway WebSocket. I have used PutCommand and GetCommand a lot, but today I'm using the QueryCommand for the first time. The code Part 1, the DynamoDB call: ``` export async function ddbGetAllRoomConnections(room) { const params = { "TableName": "MessageTable", "KeyConditionExpression": "#DDB_room = :pkey", "ExpressionAttributeValues": { ":pkey": "" }, "ExpressionAttributeNames": { "#DDB_room": "room" }, "ScanIndexForward": true, "Limit": 100 }; console.log("ddbGetAllRoomConnections-1:",params); const data = await ddbClient.send( new QueryCommand(params) ); console.log("ddbGetAllRoomConnections-2:",data); return data; } ``` The calling part: ``` const normalConnections = ddbGetAllRoomConnections(connData.lastroom); if (typeof normalConnections.Items === 'undefined' || normalConnections.Items.length <= 0) { throw new Error("Other Connections not found"); } ``` The following logfile entries are occuring in sequence: ``` logfile puzlle message1: ddbGetAllRoomConnections-1: { TableName: 'MessageTable', KeyConditionExpression: '#DDB_room = :pkey', ExpressionAttributeValues: { ':pkey': '' }, ExpressionAttributeNames: { '#DDB_room': 'room' }, ScanIndexForward: true, Limit: 100 } logfile puzlle message2: ERROR Error: Other Connections not found at Runtime.handler (file:///var/task/chat-messages.js:49:21) at processTicksAndRejections (node:internal/process/task_queues:96:5) { statusCode: 500 } logfile puzlle message3: END RequestId: ``` Waht irritates me is, the following sequence of occurences in the logfile: 1. ddbGetAllRoomConnections-1: is coming correctly before the ddbClient.send command 2. after the ddbClient.send command there is no ddbGetAllRoomConnections-2 log entry 3. The next logentry is after the call of ddbGetAllRoomConnections showing the value undefined. I tried also PartiQL per ExecuteCommand, then debugging with Dynobase I retrieved the code for the params section in the current setting.
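One thing that stands out: `ddbGetAllRoomConnections` is async but the call site doesn't `await` it, so `normalConnections` is a pending Promise and `.Items` is always undefined, which also explains why the "-2" log never appears before the error ends the invocation. It is also worth checking that `room` is actually wired into `:pkey`, which is currently an empty string. A minimal fix to the calling code, kept in the question's own JavaScript:

```
const normalConnections = await ddbGetAllRoomConnections(connData.lastroom);
if (!normalConnections.Items || normalConnections.Items.length <= 0) {
    throw new Error("Other Connections not found");
}
```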
1 answer · 0 votes · 23 views · asked 22 days ago

Lambda Handler No Space On Device Error

Have a lambda function that is throwing an error of "No space left on device". The lambda function creates a custom resource handler defined within the lambda python code: response = cfn.register_type( Type='RESOURCE', TypeName='AWSQS:MYCUSTOM::Manager', SchemaHandlerPackage="s3://xxx/yyy/awsqs-mycustom-manager.zip", LoggingConfig={"LogRoleArn": "xxx", "LogGroupName": "awsqs-mycustom-manager-logs"}, ExecutionRoleArn="xxx" The lambda function when created has the following limits set: 4GB of Memory and 4GB of Ephemeral space. However, I was still receiving a no space on device even thought the '/tmp/' is specified and this is plenty of space. Doing additional digging I added a "df" output inside of the code/zip file. When the output prints is shows that only 512MB of space is available in temp? Filesystem 1K-blocks Used Available Use% Mounted on /mnt/root-rw/opt/amazon/asc/worker/tasks/rtfs/python3.7-amzn-201803 27190048 22513108 3293604 88% / /dev/vdb 1490800 14096 1460320 1% /dev **/dev/vdd 538424 872 525716 1% /tmp** /dev/root 10190100 552472 9621244 6% /var/rapid /dev/vdc 37120 37120 0 100% /var/task Its like a new instance was created internally and did not adopt the size from the parent. Forgive me if technically my language is incorrect as this is the first time busting this out and seeing this type of error. Just has me confused as too what is going on under the covers, and I can find no documentation on how to increase the ephemeral storage within the handler even though the originating lamda function in which this is defined has already had the limits increased.
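I can't say why `/tmp` reports 512 MB there, but it is worth confirming that the `df` output really comes from the function whose ephemeral storage was raised; if it runs inside the `SchemaHandlerPackage`, it may be executing in a different environment than the registering function (an assumption on my part). A sketch to check what a given function is actually configured with (the function name is a placeholder):

```
import boto3

lam = boto3.client("lambda")

# Placeholder function name: the function that actually printed the df output.
cfg = lam.get_function_configuration(FunctionName="my-registering-function")
print(cfg["MemorySize"], "MB memory")
print(cfg.get("EphemeralStorage", {}).get("Size"), "MB ephemeral storage (/tmp)")
```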
1 answer · 0 votes · 41 views · asked 23 days ago

How to use if-then-else and loop construct in input transformer in EventBridge?

Is there a way to define **if-then-else** and **looping** constructs using JSONPath, while defining the configuring the "input path" for EventBridge Input Transformer? E.g. For the following input ``` { "category": { "name": "Nike Shoes", "level": 2, "translation": null, "ancestors": { "edges": [ { "node": { "name": "Shoes", "slug": "shoes", "level": 0 } }, { "node": { "name": "Running Shoes", "slug": "running-shoes", "level": 1 } } ] } } } ``` I need the output to be ``` { "categories.lvl0": "Shoes", "categories.lvl1": "Shoes > Running shoes", "categories.lvl2": "Shoes > Running shoes > Nike shoes", } ``` The following is the python logic for the output I wish to achieve ```python if node["category"]["level"] != 0: category_hierarchy = list(map(lambda x: x["node"]["name"], node["category"]["ancestors"]["edges"])) category_hierarchy.append(new_item["categoryName"]) for i in range(len(category_hierarchy)): new_item[f'categories.lvl{i}'] = " > ".join(category_hierarchy[0:i+1]) ``` if the level of the main category ("Nike shoes" here) is not equal to 0, then I want to loop through its ancestors and define variables of the form `categories.lvl(n)` with the logic defined in column 2 below, to get the values defined in column 3 | Variable | Logic | Value required | | --------- | ------ | ------------------ | | category.lvl0 | ancestor category with level 0 | Shoes | | category.lvl1 | ancestor category with level 0 > ancestor category with level 1 | Shoes> Running shoes | | category.lvl2 | ancestor category with level 0 > ancestor category with level 1 > main category (with level 0) | Shoes> Running shoes > Nike shoes| I could frame the following JSONPath construct for now, which in plain English, represents: "if the category level is 0, then proceed to output names of the ancestor categories" JSONPath - `$..edges[?(@.level!=0)]..name` Output ``` [ "Shoes", "Running shoes" ] ``` However, I am not sure how to proceed further.
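As far as I know, the input transformer only does static JSONPath substitution into a template; there is no if-then-else or loop support, so logic like this usually ends up in a small Lambda (or Step Functions) target instead. A sketch wrapping the question's own Python logic in a handler (field names follow the sample event):

```python
def lambda_handler(event, context):
    category = event["category"]
    out = {}
    if category["level"] != 0:
        names = [edge["node"]["name"] for edge in category["ancestors"]["edges"]]
        names.append(category["name"])
        for i in range(len(names)):
            out[f"categories.lvl{i}"] = " > ".join(names[: i + 1])
    return out
```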
0 answers · 0 votes · 33 views · asked a month ago