All Questions

Manage Greengrass-V2 Components in central account

I'm currently trying to create a component in a tenant account using an artifact packaged in a central-account S3 bucket. The tenant account and central account are in the same AWS Organization. I've tried the following settings to let the tenant accounts access the S3 bucket:

1. On the central account S3 bucket (I wasn't sure which principal service/user was trying to test this access, so I just "shotgunned" it):

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": [
                    "greengrass.amazonaws.com",
                    "iot.amazonaws.com",
                    "credentials.iot.amazonaws.com"
                ]
            },
            "Action": [
                "s3:GetObject",
                "s3:GetObjectVersion"
            ],
            "Resource": "arn:aws:s3:::MY-CENTRAL-ACCOUNT-BUCKET/*"
        },
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": [
                "s3:GetObject",
                "s3:GetObjectVersion",
                "s3:GetObjectTorrent",
                "s3:GetObjectVersionAcl",
                "s3:GetObjectAcl"
            ],
            "Resource": "arn:aws:s3:::MY-CENTRAL-ACCOUNT-BUCKET/*",
            "Condition": {
                "StringEquals": {
                    "aws:PrincipalOrgID": "o-abc123def456"
                }
            }
        },
        ...
    ]
}
```

2. On the `GreengrassV2TokenExchangeRole` in the tenant account, I've added the `AmazonS3FullAccess` AWS managed policy (just to see if I could eliminate this role as the blocker).

I've verified that, as a user in the tenant account, I have access to the object in S3 and can run `aws s3 cp` (so the bucket policy doesn't seem to be blocking things). Whenever I try creating the component in the tenant account, with either the AWS IoT Greengrass console or the AWS CLI, I'm met with:

```
Invalid Input: Encountered following errors in Artifacts: {s3://MY-CENTRAL-ACCOUNT-BUCKET/com.example.my-component-name/1.0.0-dev.0/application.zip = Specified artifact resource cannot be accessed}
```

What am I missing? Is there a different service-linked role I should be allowing in the S3 bucket resource policy? It just seems like an access test during component creation and not an actual attempt to access the resource. I'm fairly certain that if I assumed the Greengrass TES role, I'd be able to download the artifact too (although I haven't explicitly done that yet).
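A quick way to check that last point is to assume the role and try the same `GetObject` the device would perform. This is only a boto3 sketch under the assumption that the role's trust policy lets your own user assume `GreengrassV2TokenExchangeRole` for testing; the role ARN, bucket, and key below are placeholders:

```
import boto3

# Hypothetical names -- substitute your tenant account role and artifact location.
TES_ROLE_ARN = "arn:aws:iam::111122223333:role/GreengrassV2TokenExchangeRole"
BUCKET = "MY-CENTRAL-ACCOUNT-BUCKET"
KEY = "com.example.my-component-name/1.0.0-dev.0/application.zip"

# Assume the token exchange role (requires the role's trust policy to allow your principal).
sts = boto3.client("sts")
creds = sts.assume_role(RoleArn=TES_ROLE_ARN, RoleSessionName="artifact-access-test")["Credentials"]

# Try the same call the device makes when it downloads the artifact.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
response = s3.get_object(Bucket=BUCKET, Key=KEY)
print("Fetched", response["ContentLength"], "bytes")
```

If this call succeeds, the device-side role is not what is being rejected at component-creation time.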
0 answers · 0 votes · 5 views · asked 3 hours ago

Data Catalog schema table getting modified when I run my Glue ETL job

I created a Data Catalog with a table that I manually defined. I run my ETL job and all works well. I then added partitions to both the table in the Data Catalog and the ETL job. It creates the partitions and I see the folders being created in S3 as well, but my table data types change. I originally had:

| column | data type |
| --- | --- |
| vid | string |
| altid | string |
| vtype | string |
| time | timestamp |
| timegmt | timestamp |
| value | float |
| filename | string |
| year | int |
| month | int |
| day | int |

But now, after the ETL job with partitions, my table ends up like this:

| column | data type |
| --- | --- |
| vid | string |
| altid | string |
| vtype | string |
| time | bigint |
| timegmt | bigint |
| value | float |
| filename | string |
| year | bigint |
| month | bigint |
| day | bigint |

Before this change of data types, I could run queries in Athena, including one like this:

```
SELECT * FROM "gp550-load-database"."gp550-load-table-beta"
WHERE vid IN ('F_NORTH', 'F_EAST', 'F_WEST', 'F_SOUTH', 'F_SEAST')
AND vtype='LOAD'
AND time BETWEEN TIMESTAMP '2021-05-13 06:00:00' and TIMESTAMP '2022-05-13 06:00:00'
```

But now, with the data types changed, I get an error when trying to run a query like the one above:

```
"SYNTAX_ERROR: line 1:154: Cannot check if bigint is BETWEEN timestamp and timestamp

This query ran against the "gp550-load-database" database, unless qualified by the query. Please post the error message on our forum or contact customer support with Query Id: 2a5287bc-7ac2-43a8-b617-bf01c63b00d5"
```

If I then go into the table and change the data type back to "timestamp", running the query gives a different error:

```
"HIVE_PARTITION_SCHEMA_MISMATCH: There is a mismatch between the table and partition schemas. The types are incompatible and cannot be coerced. The column 'time' in table 'gp550-load-database.gp550-load-table-beta' is declared as type 'timestamp', but partition 'year=2022/month=2/day=2' declared column 'time' as type 'bigint'.

This query ran against the "gp550-load-database" database, unless qualified by the query. Please post the error message on our forum or contact customer support with Query Id: f788ea2b-e274-43fe-a3d9-22d80a2bbbab"
```

Does anyone know what is happening?
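One way to keep the written partition types aligned with the catalog is to cast the columns explicitly in the job before writing. This is only a PySpark sketch, assuming the job works on a DataFrame named `df`, that `time`/`timegmt` arrive as epoch-style integers, and a hypothetical target path:

```
from pyspark.sql import functions as F

# Cast the columns back to the catalog's declared types before writing,
# so the partition schema Athena sees matches the table schema.
df_typed = (
    df
    .withColumn("time", F.col("time").cast("timestamp"))
    .withColumn("timegmt", F.col("timegmt").cast("timestamp"))
    .withColumn("year", F.col("year").cast("int"))
    .withColumn("month", F.col("month").cast("int"))
    .withColumn("day", F.col("day").cast("int"))
)

(
    df_typed.write
    .mode("append")
    .partitionBy("year", "month", "day")
    .parquet("s3://my-catalog-bucket/gp550-load-table-beta/")  # hypothetical target path
)
```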
0 answers · 0 votes · 5 views · asked 6 hours ago

Token Exchange Service (TES) failing to fetch credentials

**Gentlefolks, why is the token exchange service (TES) failing to fetch credentials?** I have Greengrass installed on Ubuntu 20.x.x running on a virtual machine. At the end of the numbered items is an error log (truncated) obtained from `/greengrass/v2/logs/greengrass.log`. Thank you.

What I've done, or what I think you should know:

1. There is a `GreenGrassServiceRole` which contains `AWSGreengrassResourceAccessRolePolicy` and every other policy containing the word "greengrass" in its name. It also has the trust relationship below. This was created when I installed Greengrass, but I added additional policies.

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "greengrass.amazonaws.com"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "aws:SourceAccount": "<MY_ACCOUNT_NUMBER>"
        },
        "ArnLike": {
          "aws:SourceArn": "<ARN_THAT_SHOWS_MY_REGION_AND_ACC_NUMBER>:*"
        }
      }
    },
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "credentials.iot.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

2. The `aws.greengrass.Nucleus` component is deployed with the following configuration update. The alias also exists.

```
{
  "reset": [],
  "merge": {
    "ComponentConfiguration": {
      "DefaultConfiguration": {
        "iotRoleAlias": "GreengrassV2TokenExchangeRoleAlias",
        "awsRegion": "us-east-1",
        "iotCredEndpoint": "https://sts.us-east-1.amazonaws.com",
        "iotDataEndpoint": "<ENDPOINT_OBTAINED_FROM_IOT_CORE_SETTINGS>"
      }
    }
  }
}
```

3. `aws.greengrass.TokenExchangeService` is deployed.

4. There's a custom component that uses the Greengrass SDK to publish to IoT Core. It has the following configuration update.

```
{
  "reset": [],
  "merge": {
    "ComponentDependencies": {
      "aws.greengrass.TokenExchangeService": {
        "VersionRequirement": "^2.0.0",
        "DependencyType": "HARD"
      }
    }
  }
}
```

5. There is an IoT policy from a previous exercise attached to the core device's (Ubuntu on a virtual machine) certificate. It allows **all** actions. There's also another policy, `GreengrassTESCertificatePolicyGreengrassV2TokenExchangeRoleAlias`, associated with the thing's certificate. That policy allows `iot:AssumeRoleWithCertificate`.

**ERROR LOG BELOW**

```
2022-05-21T21:07:39.592Z [ERROR] (pool-2-thread-28) com.aws.greengrass.tes.CredentialRequestHandler: Error in retrieving AwsCredentials from TES. {iotCredentialsPath=/role-aliases/GreengrassV2TokenExchangeRoleAlias/credentials, credentialData=Failed to get connection}
2022-05-21T21:08:38.071Z [WARN] (pool-2-thread-28) com.aws.greengrass.tes.CredentialRequestHandler: Encountered error while fetching credentials.
{iotCredentialsPath=/role-aliases/GreengrassV2TokenExchangeRoleAlias/credentials} com.aws.greengrass.deployment.exceptions.AWSIotException: Unable to get response at com.aws.greengrass.iot.IotCloudHelper.getHttpResponse(IotCloudHelper.java:95) at com.aws.greengrass.iot.IotCloudHelper.lambda$sendHttpRequest$1(IotCloudHelper.java:80) at com.aws.greengrass.util.BaseRetryableAccessor.retry(BaseRetryableAccessor.java:32) at com.aws.greengrass.iot.IotCloudHelper.sendHttpRequest(IotCloudHelper.java:81) at com.aws.greengrass.tes.CredentialRequestHandler.getCredentialsBypassCache(CredentialRequestHandler.java:207) at com.aws.greengrass.tes.CredentialRequestHandler.getCredentials(CredentialRequestHandler.java:328) at com.aws.greengrass.tes.CredentialRequestHandler.getAwsCredentials(CredentialRequestHandler.java:337) at com.aws.greengrass.tes.LazyCredentialProvider.resolveCredentials(LazyCredentialProvider.java:24) at software.amazon.awssdk.awscore.internal.AwsExecutionContextBuilder.resolveCredentials(AwsExecutionContextBuilder.java:165) at software.amazon.awssdk.awscore.internal.AwsExecutionContextBuilder.invokeInterceptorsAndCreateExecutionContext(AwsExecutionContextBuilder.java:102) at software.amazon.awssdk.awscore.client.handler.AwsSyncClientHandler.invokeInterceptorsAndCreateExecutionContext(AwsSyncClientHandler.java:69) at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.lambda$execute$1(BaseSyncClientHandler.java:78) at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.measureApiCallSuccess(BaseSyncClientHandler.java:175) at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.execute(BaseSyncClientHandler.java:76) at software.amazon.awssdk.core.client.handler.SdkSyncClientHandler.execute(SdkSyncClientHandler.java:45) at software.amazon.awssdk.awscore.client.handler.AwsSyncClientHandler.execute(AwsSyncClientHandler.java:56) at software.amazon.awssdk.services.s3.DefaultS3Client.getBucketLocation(DefaultS3Client.java:3382) at com.aws.greengrass.componentmanager.builtins.S3Downloader.lambda$getRegionClientForBucket$2(S3Downloader.java:134) at com.aws.greengrass.util.RetryUtils.runWithRetry(RetryUtils.java:50) at com.aws.greengrass.componentmanager.builtins.S3Downloader.getRegionClientForBucket(S3Downloader.java:133) at com.aws.greengrass.componentmanager.builtins.S3Downloader.getDownloadSize(S3Downloader.java:115) at com.aws.greengrass.componentmanager.ComponentManager.prepareArtifacts(ComponentManager.java:420) at com.aws.greengrass.componentmanager.ComponentManager.preparePackage(ComponentManager.java:377) at com.aws.greengrass.componentmanager.ComponentManager.lambda$preparePackages$1(ComponentManager.java:338) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:829) Caused by: java.net.UnknownHostException: https at java.base/java.net.InetAddress$CachedAddresses.get(InetAddress.java:797) at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1509) at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1368) at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1302) at org.apache.http.impl.conn.SystemDefaultDnsResolver.resolve(SystemDefaultDnsResolver.java:45) at 
org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:112) at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376) at jdk.internal.reflect.GeneratedMethodAccessor51.invoke(Unknown Source) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at software.amazon.awssdk.http.apache.internal.conn.ClientConnectionManagerFactory$Handler.invoke(ClientConnectionManagerFactory.java:80) at com.sun.proxy.$Proxy15.connect(Unknown Source) at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393) at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56) at software.amazon.awssdk.http.apache.internal.impl.ApacheSdkHttpClient.execute(ApacheSdkHttpClient.java:72) at software.amazon.awssdk.http.apache.ApacheHttpClient.execute(ApacheHttpClient.java:253) at software.amazon.awssdk.http.apache.ApacheHttpClient.access$500(ApacheHttpClient.java:106) at software.amazon.awssdk.http.apache.ApacheHttpClient$1.call(ApacheHttpClient.java:232) at com.aws.greengrass.iot.IotCloudHelper.getHttpResponse(IotCloudHelper.java:88) ... 27 more 2022-05-21T21:08:38.073Z [ERROR] (pool-2-thread-28) com.aws.greengrass.tes.CredentialRequestHandler: Error in retrieving AwsCredentials from TES. {iotCredentialsPath=/role-aliases/GreengrassV2TokenExchangeRoleAlias/credentials, credentialData=Failed to get connection} ```
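One observation on item 2 above: `iotCredEndpoint` is set to an STS URL, and the `UnknownHostException: https` in the trace suggests the nucleus is treating the `https://` prefix as a hostname. The nucleus configuration normally takes the bare, account-specific IoT credential-provider endpoint (no scheme). A boto3 sketch of how one might look up the endpoint values to put in the nucleus configuration:

```
import boto3

# Look up the account-specific endpoints for the nucleus configuration.
iot = boto3.client("iot", region_name="us-east-1")

# Candidate value for iotCredEndpoint: a bare hostname such as
# xxxxxxxx.credentials.iot.us-east-1.amazonaws.com
cred_endpoint = iot.describe_endpoint(endpointType="iot:CredentialProvider")["endpointAddress"]

# Candidate value for iotDataEndpoint
data_endpoint = iot.describe_endpoint(endpointType="iot:Data-ATS")["endpointAddress"]

print("iotCredEndpoint:", cred_endpoint)
print("iotDataEndpoint:", data_endpoint)
```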
1 answer · 0 votes · 10 views · asked 6 hours ago

Elemental MediaConvert job template for Video on Demand

I launched the fully managed video-on-demand template from https://aws.amazon.com/solutions/implementations/video-on-demand-on-aws/?did=sl_card&trk=sl_card. I have a bunch of questions on how to tailor this service to my use case, and I will ask each one separately.

Firstly, is it possible to use my own GUID as an identifier for the MediaConvert jobs and outputs? The default GUID tagged onto the videos in this workflow is independent of my application server, so it's difficult for the server to track who owns which video in the destination S3 bucket.

Secondly, I would like to compress the video input for cases where the resolution is higher than 1080p. For my service I don't want to process any videos higher than 1080p. Is there a way I can achieve this without adding a Lambda during the ingestion stage to compress it? I know it can be compressed on the client; I am hoping this can be achieved within this workflow, perhaps using MediaConvert?

Thirdly, based on some of the materials I came across about this service, aside from the HLS files MediaConvert generates, it's supposed to generate an MP4 version of my video for cases where a client wants to download the full video as opposed to streaming it. That is not the default behaviour; how do I achieve this?

Lastly, how do I add watermarks to my videos in this workflow?

Forgive me if some of these questions feel like things I could have easily researched and found solutions for. I did do some research, but I failed to reach a clear understanding of any of them.
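On the first question only: if you end up submitting jobs yourself (or customizing the solution's job-submit Lambda), MediaConvert lets you attach your own key/value pairs via `UserMetadata`, which come back in the job description and its CloudWatch events. This is a rough boto3 sketch, not the solution's own code; the job template name, role ARN, bucket, and metadata key are hypothetical, and the rest of the job settings are assumed to come from the template:

```
import boto3

# MediaConvert requires an account-specific endpoint before the job APIs can be called.
bootstrap = boto3.client("mediaconvert", region_name="us-east-1")
endpoint = bootstrap.describe_endpoints()["Endpoints"][0]["Url"]
mediaconvert = boto3.client("mediaconvert", region_name="us-east-1", endpoint_url=endpoint)

response = mediaconvert.create_job(
    JobTemplate="my-vod-job-template",                              # hypothetical template name
    Role="arn:aws:iam::111122223333:role/MediaConvertRole",         # hypothetical role ARN
    Settings={
        "Inputs": [
            {
                "FileInput": "s3://my-source-bucket/uploads/video.mp4",  # hypothetical input
                "AudioSelectors": {"Audio Selector 1": {"DefaultSelection": "DEFAULT"}},
                "VideoSelector": {},
            }
        ]
    },
    UserMetadata={"appVideoId": "my-own-guid-1234"},  # your own identifier travels with the job
)
print(response["Job"]["Id"])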
0 answers · 0 votes · 2 views · asked 6 hours ago

How to use SageMaker with input mode FastFile when files have Chinese characters in their names?

This post is both a bug report and a question. We are trying to use SageMaker to train a model and everything is quite standard. Since we have a lot of images, we would suffer a very long image-download time if we didn't change the input_mode to FastFile. I then struggled to load images successfully inside the container.

In my dataset there are a lot of samples whose names contain Chinese characters. When I started debugging (because I could not properly load files), I found that when SageMaker mounts the data from S3, it doesn't handle the encoding correctly. Here is an image name and the corresponding image path inside the training container:

`七年级上_第10章分式_七年级上_第10章分式_1077759_title_0-0_4_mathjax`

`/opt/ml/input/data/validation/\u4E03\u5E74\u7EA7\u4E0A_\u7B2C10\u7AE0\u5206\u5F0F_\u4E03\u5E74\u7EA7\u4E0A_\u7B2C10\u7AE0\u5206\u5F0F_1077759_title_0-0_4_mathjax.png`

This is not neat, but I can still construct the right path in the container. The problem is that I'm not able to read the file even though the path exists. What I mean is that `os.path.exists('/opt/ml/input/data/validation/\u4E03\u5E74\u7EA7\u4E0A_\u7B2C10\u7AE0\u5206\u5F0F_\u4E03\u5E74\u7EA7\u4E0A_\u7B2C10\u7AE0\u5206\u5F0F_1077759_title_0-0_4_mathjax.png')` returns True, but `cv2.imread('/opt/ml/input/data/validation/\u4E03\u5E74\u7EA7\u4E0A_\u7B2C10\u7AE0\u5206\u5F0F_\u4E03\u5E74\u7EA7\u4E0A_\u7B2C10\u7AE0\u5206\u5F0F_1077759_title_0-0_4_mathjax.png')` returns None.

Then I tried to open the file directly, and fortunately that gives an error. The code is `with open('/opt/ml/input/data/validation/\u4E03\u5E74\u7EA7\u4E0A_\u7B2C10\u7AE0\u5206\u5F0F_\u4E03\u5E74\u7EA7\u4E0A_\u7B2C10\u7AE0\u5206\u5F0F_1077759_title_0-0_4_mathjax.png', 'rb') as f: a = f.read()` and it gives me the error `OSError: [Errno 107] Transport endpoint is not connected`.

I tried to load a file in the same folder whose name doesn't contain any Chinese characters, and everything works in that case, so I'm sure the Chinese characters in the filenames are causing the problem. I wonder if there is a quick workaround so I don't need to rename maybe 80% of the data in S3.
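One possible workaround, sketched below with boto3, is to bypass the FastFile mount for the problematic names and read the object bytes straight from S3 inside the training script, decoding them with `cv2.imdecode`. The bucket and key here are placeholders for wherever the channel's data actually lives:

```
import boto3
import cv2
import numpy as np

# Hypothetical bucket/key -- adjust to match the channel's S3 source.
BUCKET = "my-training-bucket"
KEY = "validation/七年级上_第10章分式_七年级上_第10章分式_1077759_title_0-0_4_mathjax.png"

s3 = boto3.client("s3")

def read_image_from_s3(bucket: str, key: str) -> np.ndarray:
    """Fetch the object bytes directly and decode them, bypassing the mounted path."""
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    image = cv2.imdecode(np.frombuffer(body, dtype=np.uint8), cv2.IMREAD_COLOR)
    if image is None:
        raise ValueError(f"Could not decode s3://{bucket}/{key}")
    return image

img = read_image_from_s3(BUCKET, KEY)
print(img.shape)
```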
0 answers · 0 votes · 9 views · asked a day ago

Unable to scan Wi-Fi networks with Amazon FreeRTOS in iOS app after connecting to one

Hi, I'm working with the Amazon FreeRTOS library for iOS. The app I'm writing only needs to connect the device to a Wi-Fi network. I'm able to successfully establish a Bluetooth connection to my device, and I'm able to retrieve a list of nearby Wi-Fi networks. I'm also able to connect to my Wi-Fi network by providing its password with the SaveNetworkRequest. I can tell that it worked because I can see the device in my router's DNS table.

The problem is that once the device is connected to Wi-Fi, I can no longer get a list of Wi-Fi networks in order to either see which network is connected or potentially let the user connect to a different network. When I issue the command to list networks, I get no response back from the device, not even an error.

Thinking the problem might be in my code, I completely wiped and reset my device (it's an ESP32 dev board, so I'm able to do this using esptool) and repeated the procedure using the demo app provided by Amazon (AmazonFreeRTSDemo). The result is the same: once I connect, I can no longer see a list of networks.

Is there some command I need to send in order to either disconnect my device from Wi-Fi or otherwise put it in a state where it can scan Wi-Fi networks again? I can see that the demo app starts the scan process by writing a 1 to the NetworkConfigControl address, and I've done the same, but it doesn't appear to help.

Thanks, Frank
0 answers · 0 votes · 11 views · asked a day ago

Can't get Partitions to work with my Glue Data Catalog

I have S3 files that are uploaded to a single bucket. There are no folders or anything like that; it's just one file per hour uploaded to this bucket. I run a Glue ETL job on these files, do some transformations, and insert the data into a Glue Data Catalog stored in a different bucket. I can then query that Glue Data Catalog with Athena, and that works.

What I would like to do is store the files in the S3 folder of the Data Catalog as YEAR/MONTH/DAY, using partitions. Even though the SOURCE data is just files uploaded every hour with no partitions, I want to store them in the Data Catalog WITH partitions. So I extracted YEAR, MONTH, and DAY from the files during the Glue ETL job, created corresponding columns in my Data Catalog table, and marked them as partitions:

Partition 1: YEAR
Partition 2: MONTH
Partition 3: DAY

The proper values are in these columns, and I have verified that. After creating the partitions I ran MSCK REPAIR TABLE on the table, and it came back with "Query Ok." I then ran my Glue ETL job. When I look in the S3 bucket I do not see folders created; I just see regular r-part files. When I click on the table schema it shows the columns YEAR, MONTH, DAY marked as partitions, but when I click on View Partitions it just shows:

year month day
No partitions found

What do I need to do? These are just CSV files. I can't control the process that uploads the raw data to S3; it is just going to store hourly files in a bucket. I can control the ETL job and the Data Catalog. When I try to query after creating the partitions and running MSCK REPAIR TABLE, no data is returned, yet I can go into the Data Catalog bucket, pull up one of the r-part files, and the data is there.
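For the year=/month=/day= folders to appear, the job itself has to write partitioned output; marking catalog columns as partition keys doesn't change how the data is written. Below is only a sketch of a partitioned S3 sink in a Glue job; the target path is hypothetical and the sample DataFrame stands in for the frame your existing transformations produce:

```
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from pyspark.context import SparkContext

glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session

# Stand-in for the job's transformed data; in the real job this is the frame
# produced by the existing transformations, with year/month/day already populated.
df = spark.createDataFrame(
    [("F_NORTH", "LOAD", 12.5, 2022, 2, 2)],
    ["vid", "vtype", "value", "year", "month", "day"],
)
dyf = DynamicFrame.fromDF(df, glue_context, "dyf")

# Writing with partitionKeys is what actually creates year=/month=/day= prefixes in S3.
glue_context.write_dynamic_frame.from_options(
    frame=dyf,
    connection_type="s3",
    connection_options={
        "path": "s3://my-catalog-bucket/my-table/",  # hypothetical target path
        "partitionKeys": ["year", "month", "day"],
    },
    format="csv",
)
```

After the job writes partitioned objects, the new partitions still need to be registered in the catalog (for example by re-running MSCK REPAIR TABLE or a crawler) before Athena will return data for them.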
1 answer · 0 votes · 13 views · asked 2 days ago

Pyspark job fails on EMR on EKS virtual cluster: java.lang.ClassCastException

Hi, we are in the process of migrating our PySpark jobs from EMR classic (EC2-based) to an EMR on EKS virtual cluster. We have come across a strange failure in one job where we read some Avro data from S3 and save it straight back in Parquet format. Example code:

```
df = spark.read.format("avro").load(input_path)

df \
    .withColumnRenamed("my_col", "my_new_col") \
    .repartition(60) \
    .write \
    .mode("append") \
    .partitionBy("my_new_col", "date") \
    .format("parquet") \
    .option("compression", "gzip") \
    .save(output_path)
```

This fails with the following message at the `.save()` call (we can tell from the Python traceback, not included here for brevity):

> Job aborted due to stage failure: Task 1 in stage 0.0 failed 4 times, most recent failure: Lost task 1.3 in stage 0.0 (TID 17) (10.0.3.174 executor 4): java.lang.ClassCastException: cannot assign instance of java.lang.invoke.SerializedLambda to field org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.dataReader$1 of type scala.Function1 in instance of org.apache.spark.sql.execution.datasources.FileFormat$$anon$1

We are running this with `--packages org.apache.spark:spark-avro_2.12:3.1.1` in sparkSubmitParameters. The exact same code ran fine on a normal EMR cluster. Comparing the environments, both have Spark 3.1.1 and Scala 2.12.10; only the Java version is different: 1.8.0_332 (EMR classic) vs 1.8.0_302 (EMR on EKS).

We should also mention that we were able to run another job successfully on EMR on EKS; that job doesn't have this Avro-to-Parquet step (its input is already in Parquet format). So we suspect it has something to do with the extra org.apache.spark:spark-avro_2.12:3.1.1 package we are importing.

We searched the web for the java.lang.ClassCastException and found a couple of issues [here](https://issues.apache.org/jira/browse/SPARK-29497) and [here](https://issues.apache.org/jira/browse/SPARK-25047), but they are not particularly helpful to us since our code is in Python.

Any hints on what might be the cause? Thanks and regards, Nikos
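For anyone trying to reproduce this, the submission looks roughly like the boto3 sketch below; it is not the exact setup from the question, and the virtual cluster ID, role ARN, release label, and S3 paths are placeholders. The suspect part is the `--packages` coordinate in `sparkSubmitParameters`:

```
import boto3

emr = boto3.client("emr-containers", region_name="us-east-1")

response = emr.start_job_run(
    name="avro-to-parquet",
    virtualClusterId="abcdefgh1234567890",                              # placeholder virtual cluster ID
    executionRoleArn="arn:aws:iam::111122223333:role/EmrEksJobRole",    # placeholder role
    releaseLabel="emr-6.3.0-latest",                                    # ships Spark 3.1.1
    jobDriver={
        "sparkSubmitJobDriver": {
            "entryPoint": "s3://my-scripts/avro_to_parquet.py",         # placeholder script location
            "sparkSubmitParameters": "--packages org.apache.spark:spark-avro_2.12:3.1.1",
        }
    },
)
print(response["id"])
```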
1 answer · 0 votes · 11 views · asked 2 days ago

CDK Route 53 zone lookup brings back wrong zone ID

We are attempting to update our IaC code base to CDK v2. Prior to that, we're deploying entire stacks of our system into another test environment. One part of a stack creates a TLS certificate for use with our load balancer.

```
var hostedZone = HostedZone.FromLookup(this, $"{config.ProductName}-dns-zone", new HostedZoneProviderProps
{
    DomainName = config.RootDomainName
});

DnsValidatedCertificate certificate = new DnsValidatedCertificate(this, $"{config.ProductName}-webELBCertificate-{config.Environment}", new DnsValidatedCertificateProps
{
    HostedZone = hostedZone,
    DomainName = config.AppDomainName,
    // Used to implement ValidationMethod = ValidationMethod.DNS
    Validation = CertificateValidation.FromDns(hostedZone)
});
```

For some reason, the synthesised template defines the hosted zone ID for that AWS::CloudFormation::CustomResource as *something other than the actual zone ID* in that account. That causes the certificate request validation process to fail, and thus the whole `cdk deploy`, since it cannot find the real zone to place the validation records in.

If we look at the individual pending certificate requests on the Certificate Manager page, they can be approved by manually pressing the [[Create records in Route 53]] button, which finds the correct zone to do so.

Not sure where exactly CDK is finding this mysterious zone ID that does not belong to us?

```
"AppwebELBCertificatetestCertificateRequestorResource68D095F7": {
  "Type": "AWS::CloudFormation::CustomResource",
  "Properties": {
    "ServiceToken": {
      "Fn::GetAtt": [
        "AppwebELBCertificatetestCertificateRequestorFunctionCFE32764",
        "Arn"
      ]
    },
    "DomainName": "root.domain",
    "HostedZoneId": "NON-EXISTENT ZONE ID"
  },
  "UpdateReplacePolicy": "Delete",
  "DeletionPolicy": "Delete",
  "Metadata": {
    "aws:cdk:path": "App-webELBStack-test/App-webELBCertificate-test/CertificateRequestorResource/Default"
  }
}
```
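Two common sources of a surprising zone ID with `FromLookup` are a stale cached lookup in `cdk.context.json` and a stack synthesized without an explicit environment, in which case the lookup cannot run against your account and CDK falls back to dummy context values. A Python CDK v2 sketch of pinning the environment (the same idea applies to the C# app; the domain, account, and region values are placeholders):

```
import aws_cdk as cdk
from aws_cdk import aws_route53 as route53
from constructs import Construct


class WebElbStack(cdk.Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # With env set on the stack, this lookup queries the real account/region
        # at synth time and caches the result in cdk.context.json.
        zone = route53.HostedZone.from_lookup(
            self, "dns-zone", domain_name="example.com"  # hypothetical root domain
        )
        cdk.CfnOutput(self, "ZoneId", value=zone.hosted_zone_id)


app = cdk.App()
WebElbStack(
    app,
    "web-elb-stack-test",
    env=cdk.Environment(account="111122223333", region="us-east-1"),  # placeholders
)
app.synth()
```

Clearing the cached lookups (`cdk context --clear`, or deleting the relevant entry in `cdk.context.json`) forces the next synth to query Route 53 again.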
1 answer · 0 votes · 8 views · asked 2 days ago

Unknown reason for API Gateway WebSocket LimitExceededException

We have several API Gateway WebSocket APIs, all regional. As their usage has gone up, the most used one has started getting LimitExceededException when we send data from Lambda, through the socket, to the connected browsers. We are using the JavaScript SDK's [postToConnection](https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/ApiGatewayManagementApi.html#postToConnection-property) function. The usual behavior is that we will not get this error at all, then we will get several hundred spread out over 2-4 minutes.

The only documentation we've been able to find that may be related to this limit is the [account-level quota](https://docs.aws.amazon.com/apigateway/latest/developerguide/limits.html#apigateway-account-level-limits-table) of 10,000 requests per second (and we're not sure if that's the actual limit we should be looking at). If that is the limit, the problem is that we are nowhere near it. For a single deployed API we're hitting a maximum of 3000 messages sent through the socket **per minute**, with an overall account total of about 5000 per minute, so nowhere near 10,000 per second.

The only thing we think may be causing it is that we have a "large" number of messages going through the socket relative to the number of connected clients. For the API that's maxing out at about 3000 messages per minute, we usually have 2-8 connected clients. Our only guess is that there may be a lower limit on the number of messages per second we can send to a specific socket connection; however, we cannot find any docs on this.

Thanks for any help anyone can provide.
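Not an explanation of which quota is being hit, but while that is tracked down, wrapping the call in backoff-and-retry keeps the bursts from surfacing as hard failures. A boto3 sketch of the same call the JavaScript SDK makes; the endpoint URL and connection IDs are placeholders:

```
import time
import boto3

# Placeholder endpoint for the deployed WebSocket API:
# https://{api-id}.execute-api.{region}.amazonaws.com/{stage}
client = boto3.client(
    "apigatewaymanagementapi",
    endpoint_url="https://abc123.execute-api.us-east-1.amazonaws.com/production",
)

def send_with_backoff(connection_id: str, data: bytes, attempts: int = 5) -> None:
    """Retry post_to_connection with exponential backoff when throttled."""
    for attempt in range(attempts):
        try:
            client.post_to_connection(ConnectionId=connection_id, Data=data)
            return
        except client.exceptions.GoneException:
            # The client has disconnected; nothing more to send.
            return
        except client.exceptions.LimitExceededException:
            if attempt == attempts - 1:
                raise
            time.sleep(2 ** attempt * 0.1)  # 0.1s, 0.2s, 0.4s, ...
```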
1 answer · 0 votes · 29 views · asked 2 days ago

WorkMail suddenly quit working for incoming mail

My Amazon WorkMail suddenly quit working for incoming email, but I can still use it for outgoing. I've changed nothing in my configuration. Ideas? Here is the returned mailer message: Delivery has failed to these recipients or groups: scotto@chateausylvania.com Your message couldn't be delivered. Try to send it again later. If the problem continues, please contact your email admin. Diagnostic information for administrators: Generating server: inbound-smtp.us-east-1.amazonaws.com scotto@chateausylvania.com Remote Server returned '554 4.3.0 < #4.3.0 smtp; 451 4.3.0 This message could not be delivered due to a recipient error. Please try again later.>' Original message headers: ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=bzjaz1W5uxMoLF20bGq9quqYq3SUCdYY8zHQvIZkxkKBZReq/GveJE2eHCLx3/EPuliLfs5JdpiZkxTe/ur1qoRa7RLSsG7iXxubjBq17iCbyL6tFzqLBCyJd9jceYjYZX5BGh9+f+A/I7hwlGXstl4ULASappMmQr0JyswAa/bN6MfdzkeP0FYBgrJ8EFV6kcv3kQDgHZKSFBZ4iViTfpEhuMhJdrehQDKfMmLW0I0wDcfAS9cIhVIYeM1G/5e8yBB90r33GtS/8Zy4y/27q7oCEsaQzIvtEObLJuGBuRJMT/i5hSizQhmMxlepA1fh4adgGQHoJYJSOf/kRs8sww== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=s35c/OROCcjbeBjcoYUwp/dnu7aWJa2K/ryNVo0lP4U=; b=KqkJSovayntDz6z7RPZ58aKFWBlY99JTfmz3zLKbb1wrJuaovJtBEAtQvKSFW4JgBTQQn3aJRMgSHVpn9pvw8QG36bz2lPfbhoKKK6nYOHUNyQbbyMm+WLSz5CraKvp5evw1FMXWre0KauOLUGUKH37bKa48l3HHtVEIBWAsgDlVNeG+V51XIgIGiN9a0yt9C/CEbFhaCZplHUTTyZMLqg4/bch/mtOoc78h3pv0kIykcTVIBlS9Iqt+q+LbjAJ0x/pVyUqATZG6ZZ53MYsv2e4BmLYTuXdvstcKuCzGXIh8XwTgd4kcgmNpVp/fkPRbrMF/9Adu0KUxQINiZgvlrw== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass smtp.mailfrom=snhu.edu; dmarc=pass action=none header.from=snhu.edu; dkim=pass header.d=snhu.edu; arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=snhu.onmicrosoft.com; s=selector2-snhu-onmicrosoft-com; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=s35c/OROCcjbeBjcoYUwp/dnu7aWJa2K/ryNVo0lP4U=; b=Tvg81lMP2ozEoNwxrvbfUUqTWhcRUWnlFvCf1SfglUfYp9P+zGnSFkKe91d9zODak7pqC0hUYEnMyvJfyr+fnk+GxqX+9qvNS3Y5L+ct/dRa7fyQKqKDvotpklmLXKO6sRlKNNV0TjtjRv8B05srfHZriNwyQhQpo+BP9O1C/vw= Received: from DS7PR05MB7221.namprd05.prod.outlook.com (2603:10b6:5:2d2::15) by SN6PR05MB4592.namprd05.prod.outlook.com (2603:10b6:805:2d::10) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5273.5; Thu, 19 May 2022 17:10:43 +0000 Received: from DS7PR05MB7221.namprd05.prod.outlook.com ([fe80::ecd9:ac0c:f727:5e56]) by DS7PR05MB7221.namprd05.prod.outlook.com ([fe80::ecd9:ac0c:f727:5e56%6]) with mapi id 15.20.5293.007; Thu, 19 May 2022 17:10:43 +0000 From: "Overmyer, Scott" <s.overmyer@snhu.edu> To: Scott Overmyer <scotto@chateausylvania.com> Subject: test Thread-Topic: test Thread-Index: Adhro15u0LBfykd7SgaL9p5aouINYA== Date: Thu, 19 May 2022 17:10:43 +0000 Message-ID: <DS7PR05MB7221CE8A476BEF42974937E7F0D09@DS7PR05MB7221.namprd05.prod.outlook.com> Accept-Language: en-US Content-Language: en-US X-MS-Has-Attach: yes X-MS-TNEF-Correlator: authentication-results: dkim=none (message not signed) header.d=none;dmarc=none action=none header.from=snhu.edu; x-ms-publictraffictype: Email x-ms-office365-filtering-correlation-id: 50542666-a20f-45b1-df02-08da39ba821e x-ms-traffictypediagnostic: SN6PR05MB4592:EE_ 
x-microsoft-antispam-prvs: <SN6PR05MB45924EE5A772D0AD7C8E3243F0D09@SN6PR05MB4592.namprd05.prod.outlook.com> x-ms-exchange-senderadcheck: 1 x-ms-exchange-antispam-relay: 0 x-microsoft-antispam: BCL:0; x-microsoft-antispam-message-info: fBHCNffMfOg74jdoRWCXZbnZyXxk6PLx94+EfCtuUsXEDSNtvA2dseHjnE60faJ69gpvjf4uyY6JO/vIJp7R0awpKj1njRLWRj/DwYhEBsKJGg8dNvh/pxSqwjuy+m55aFF1jZCBBxYLRVFMZJqaxX5OJpC1gyZx32EXMYqHWKEunuovRlndw556D7Yj/pdcF+xupXQ1ys/i52f66ftkLcAJwjCnKbssgq3l4RP5pXP9J4/7KgewoNSAzWoUzL4G82bWSQyE0NTg6Atdlbyl/tUezLRTS3R/hJR+Nl68SPfPglD5b0DTr15jOQLME2Xg4PYmCveud60lLWeyoH3MUNjcjrtFsBpA5GwfMfg3pnpNDknv/Yy77Yx+79pVjlL21D9263bgWRy4HnU72kc5kHIZJvgMgKVRMhLFQq3j7pzDg6eQpn6Jp8TtkIEJGtcbE8XdWU/1lTDaUSPqdxs+M8u/MdvYVzmVk7phgIXiedd4GCW1/jNV3ymqb7y+w4HdJccm++M6VGDyQXazU2Wm8uuzp0cD98gZIOYwEiT9SkiJ6aUJ5Frqxvuiwj10leRbxA/nuvrJK2ucTq5hYU2qyAL4fXjPFpg66f1OxZ++KA7gO17HGZMt0ruK1PDrl78YvT821UGXP8mdx+W1qlSeEQoLe/LJIVd4diodmps/LemX28TkCpqL7Z3tTLKrvRWq0qmlJK3Go5pMC3JBxIC9tH9RsxifDPzZO1B8EWLM3323p3WwVmJ/E4ndpeLUCst2Vh2vV9JknVJOewLJugVkCnYMnl1Xs21yR2jsTk06oCo0IJ4m6lC0j81C4+7CHx8x x-forefront-antispam-report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR05MB7221.namprd05.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230001)(4636009)(366004)(166002)(76116006)(558084003)(52536014)(508600001)(38070700005)(38100700002)(66946007)(3480700007)(5660300002)(55016003)(9686003)(66446008)(99936003)(186003)(40140700001)(6916009)(26005)(122000001)(64756008)(86362001)(2906002)(66476007)(66556008)(8936002)(786003)(33656002)(316002)(8676002)(71200400001)(75432002)(6506007)(7696005)(7116003)(220243001);DIR:OUT;SFP:1102; x-ms-exchange-antispam-messagedata-chunkcount: 1 x-ms-exchange-antispam-messagedata-0: =?us-ascii?Q?V3aqiDRPfbsGmlisQSai32OKp+rxjEr8esL1n4noEP5MrQqli9C+eIDVdVtl?= =?us-ascii?Q?1zx2VITVOht5Wj4qx2eBBICHjscujvB+FhPfwHTdZ+GK28zOWNMS1r0o0hVQ?= =?us-ascii?Q?pUfGBrmuYPiMVAGTvVoioHlVBxljvxo3ojb/F2tTDfNth2egLnzMk8eUXff9?= =?us-ascii?Q?EoAnyq0m5WK3AcxI8jyjmqWiRiNyLYOfkxznvJrXEQLbGSwAZ8+Gx9SRG6Zu?= =?us-ascii?Q?uvogp3x6rngtbbeEFKu5+LTNtX9PaIllhvxdBlDXZDIiVvv4R8YG67W/17MM?= =?us-ascii?Q?xw+KnsXIwuHFSuhmazqDcZwiH+KMQDvlmBDoKFqXtzrCCImcF37I/EFrgrOr?= =?us-ascii?Q?tdNFZpN7rI1ZsMwXnIs70B29S0PTAQ7Bh/oW2idswVL6rttm/ZHF88zNhCDf?= =?us-ascii?Q?HbVHly4OskRUbCCNoqvPlkPqHOAFgkaCZ6vnNc7W2J6ulsbgZg3/Im03TX+/?= =?us-ascii?Q?5u/AiEpLo7RQ+Bkm5sZ7YqVs1DvvD1agH8t+qiiz6di6Iseqne+uKFPOG3+0?= =?us-ascii?Q?ZWF0npZothAn70v0iBFENDuFvcBMw/fQsOsPX0zeOiSXRI9St/3SRW6cKB+g?= =?us-ascii?Q?JBp1EQIw64rR0HJlRAcMJPOnp3CwtMN3gj7jeqTVFii/fop0VElmnmAYAaNN?= =?us-ascii?Q?tJ2/FS/I1fx/yKEpA6X9qhkrJzh0pmGvw9bLnZXA3nTg2qvGxsUqYANTM5Rh?= =?us-ascii?Q?WUR56O4ypRkeQke1v+8z67PHG1EtAMpNoWVccmc7Q4f9/Z+YN1QBb6X4gk55?= =?us-ascii?Q?Eug92owBSPT3ZJ6tccxZC1bLCIQp98us01NYjvD/uNoYMjewOvZpi21usHcA?= =?us-ascii?Q?4YPxJcB/k2YGkYgskBvtLMZdvugnXYAvFr9jd8ixCBajdrKQTCFvsrru89L/?= =?us-ascii?Q?ezIrrUgSbwSquWYw3rJIQ/Fo6h/Vs2zFVd9ESiYZuglQRen5hch2NOycWZuT?= =?us-ascii?Q?LO/aex+4bug4sQ1cBPX7nZpjINLWr8e5Y06dRYhUUfz0IRklCTDFZ9/nYcc/?= =?us-ascii?Q?r4e0SjaRetd2Z6ex8defjWVOrguxQxcwL7Xw1OsIbdUc2eKzN1gWswdF75qp?= =?us-ascii?Q?e0EwmecXLXxhwHfam/GIVNemSoR1bdRH6GpHoTSVkk1xCEH9Gni+no4gV74d?= =?us-ascii?Q?Xnjr1wacPmWPrRoPfG624JHr6tHNzhtpQiGxqbPMsHpGDk2JxSnq67f6ejSd?= =?us-ascii?Q?TX/BRmOmQiTX+9X0OpZNe9BUD/pSy5qC35MXOAeYbZLPfUS7QngFrouhhNge?= =?us-ascii?Q?f4p19jP7VDedE+TuKIZJ6dk+doEdPtuhLVv7utdXOYKPm/b8yKDzZEgL6bGB?= =?us-ascii?Q?bWcINrxuG5+ICRISU4HqJCeuiVnJdiuAq6yN5zr25Azq5kZy0NrE1n3yhYnj?= =?us-ascii?Q?/F7x4+B5OTvor9X2URQ1O23e0VRPr7Z8lXkvvtf/6AYba2gHn55lwP+ueK1z?= 
=?us-ascii?Q?6DRHZ5WbdOmeDexB7KQOeOuVsAvbBbjSWHI5LWtrUKDLaHcTDVO9CoCnxkVT?= =?us-ascii?Q?pQiaoRSWhMUh3Pb3qHv7NcyVwVMyouPAbvKVAHUfLbaRJvGi3RRG6gfzJ5S8?= =?us-ascii?Q?RQNc7m8Pl04dW9q2LnPUvaUMPFkCog2tIO2/6sWLlHHLgGBfxYZFAVg+sRim?= =?us-ascii?Q?OWfJCqKURvnsFYu7JhY126JUt+GQ+bllzzHN7/Q3MKRJVHuWjtpan+OnUCZK?= =?us-ascii?Q?iXOnNSaKEMO82R1OBfGy5cUfDz9AStaASaYjqdAfvyDKJBXdjsSA9rcvq9ur?= =?us-ascii?Q?7vYEGMSPKA=3D=3D?= Content-Type: text/plain MIME-Version: 1.0 X-OriginatorOrg: snhu.edu X-MS-Exchange-CrossTenant-AuthAs: Internal X-MS-Exchange-CrossTenant-AuthSource: DS7PR05MB7221.namprd05.prod.outlook.com X-MS-Exchange-CrossTenant-Network-Message-Id: 50542666-a20f-45b1-df02-08da39ba821e X-MS-Exchange-CrossTenant-originalarrivaltime: 19 May 2022 17:10:43.5639 (UTC) X-MS-Exchange-CrossTenant-fromentityheader: Hosted X-MS-Exchange-CrossTenant-id: 2baef15b-b8de-423f-9d8a-46f3686d8848 X-MS-Exchange-CrossTenant-mailboxtype: HOSTED X-MS-Exchange-CrossTenant-userprincipalname: IhGbTbzXOf0CGwcGuV42T2fNAJH6jT2SUqYcepRCBIu5u1+M0zXC1RtOaCwLkdv2CzxWp7RHHGd+MunOgDX3hw== X-MS-Exchange-Transport-CrossTenantHeadersStamped: SN6PR05MB4592
1 answer · 0 votes · 15 views · asked 2 days ago

Using MSK as trigger to a Lambda with SASL/SCRAM Authentication

Hi, I have set up an MSK cluster with SASL/SCRAM authentication. I have stored the username and password in a secret using AWS Secrets Manager. Now I am trying to set a topic in the MSK cluster as an event source for a Lambda function. To do so, I am following this documentation: https://aws.amazon.com/blogs/compute/using-amazon-msk-as-an-event-source-for-aws-lambda/

However, the above documentation is for the unauthenticated protocol, so I tried to add the authentication and the secret. I also added a policy to the execution role of the Lambda function that lets it read the secret value:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "secretsmanager:*"
            ],
            "Resource": [
                "arn:aws:secretsmanager:****:*******:secret:*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "secretsmanager:ListSecrets",
            "Resource": "*"
        }
    ]
}
```

When I try to add the trigger, I get the error:

```
An error occurred when creating the trigger: Cannot access secret manager value arn:aws:secretsmanager:*****:*****:secret:*******. Please ensure the role can perform the 'secretsmanager:GetSecretValue' action on your broker in IAM. (Service: AWSLambda; Status Code: 400; Error Code: InvalidParameterValueException; Request ID: ****; Proxy: null)
```

I am not able to understand this error, since the policy includes all "secretsmanager" actions on all the resources in my account. Can someone help?
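For reference, this is roughly the equivalent trigger creation in boto3, where the secret is passed as a SourceAccessConfiguration; the cluster ARN, function name, topic, and secret ARN below are placeholders. Note also that if the secret is encrypted with a customer-managed KMS key, the role creating the mapping needs `kms:Decrypt` on that key as well, which is a common cause of this error even when the Secrets Manager policy is wide open:

```
import boto3

lambda_client = boto3.client("lambda")

response = lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:kafka:us-east-1:111122223333:cluster/my-msk/abcd-1234",  # placeholder cluster ARN
    FunctionName="my-consumer-function",                                             # placeholder function
    Topics=["my-topic"],                                                             # placeholder topic
    StartingPosition="LATEST",
    SourceAccessConfigurations=[
        {
            "Type": "SASL_SCRAM_512_AUTH",
            "URI": "arn:aws:secretsmanager:us-east-1:111122223333:secret:my-msk-credentials",  # placeholder secret ARN
        }
    ],
)
print(response["UUID"])
```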
2 answers · 0 votes · 13 views · asked 3 days ago