All Questions

Launch Announcement - New ALB enhancements provide options to specify how to process Host header and X-Forwarded-For header

We are happy to announce that we just launched two enhancements that define how the Application Load Balancer (ALB) processes the *Host* header and the *X-Forwarded-For* header. These options provide additional flexibility in handling HTTP/HTTPS requests and allow customers to migrate their workloads to ALB.

*Background:* AWS customers had asked for flexibility in specifying how ALB handles the Host and X-Forwarded-For headers in HTTP/HTTPS requests. The enhancements are as follows:

*Host Header Enhancement:*
* Currently, ALB modifies the Host header in the incoming HTTP/HTTPS request and appends the listener port before sending it to targets. For example, the Host: www.amazon.com header in the HTTP request is modified to Host: www.amazon.com:8443 before ALB sends it to targets. This remains the default behavior for backward compatibility.
* With this enhancement, when enabled using a new attribute, ALB sends the Host header to the target without any modification. For example, the Host: www.amazon.com header in the HTTP request is sent to the target as is.

*X-Forwarded-For Header Enhancement:*
* Currently, ALB appends the IP address of the previous hop to the X-Forwarded-For header before forwarding it to targets. This remains the default behavior for backward compatibility.
* With this enhancement, customers can now specify whether ALB should preserve or remove the X-Forwarded-For header before sending it to the targets.

*Launch Details:*
* Neither enhancement changes the default behavior, and existing ALBs are not affected.
* The enhancements are available using the API and the AWS Console.
* The enhancements are available in all commercial, GovCloud, and China regions. They will be deployed in ADC regions at a later date based on demand.

*Launch Materials:*
* Documentation for the Host header enhancement - https://docs.aws.amazon.com/elasticloadbalancing/latest/application/application-load-balancers.html#host-header-preservation
* Documentation for the X-Forwarded-For header enhancement - https://docs.aws.amazon.com/elasticloadbalancing/latest/application/x-forwarded-headers.html#x-forwarded-for

Please give these a try and also let the customers know. Thank you.
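For reference, a minimal boto3 sketch of how the new options might be enabled on an existing load balancer. The attribute keys `routing.http.preserve_host_header.enabled` and `routing.http.xff_header_processing.mode` are my assumption based on the linked documentation, and the load balancer ARN is a placeholder:

```python
# Minimal sketch (attribute keys assumed from the linked docs): turn on Host header
# preservation and X-Forwarded-For preservation via ModifyLoadBalancerAttributes.
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/1234567890abcdef",  # placeholder
    Attributes=[
        # Send the Host header to targets unmodified (default is "false").
        {"Key": "routing.http.preserve_host_header.enabled", "Value": "true"},
        # How ALB handles X-Forwarded-For: append (default), preserve, or remove.
        {"Key": "routing.http.xff_header_processing.mode", "Value": "preserve"},
    ],
)
```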
0 answers | 1 vote | 30 views | asked 4 hours ago

StartCallAnalyticsJob: User is not authorized to access this resource

Hi everybody, I want to ask about Amazon Transcribe Call Analytics. The API works fine with standard AWS Transcribe, but I also need sentiment analysis, so I am trying to use Transcribe Call Analytics. Here is my code:

```python
from __future__ import print_function
import time
import boto3

transcribe = boto3.client('transcribe', 'us-east-1')
job_name = "my-first-call-analytics-job"
job_uri = "PATH_S3_TO_WAV_WHO_HAD_WORD_FOR_AWS_TRANSCRIBE"
output_location = "PATH_TO_CREATED_FOLDER"
data_access_role = "arn:aws:s3:::MY_BUCKET_NAME_WHERE_WAV_FILES"

transcribe.start_call_analytics_job(
    CallAnalyticsJobName=job_name,
    Media={'MediaFileUri': job_uri},
    DataAccessRoleArn=data_access_role,
    OutputLocation=output_location,
    ChannelDefinitions=[
        {'ChannelId': 0, 'ParticipantRole': 'AGENT'},
        {'ChannelId': 1, 'ParticipantRole': 'CUSTOMER'}
    ]
)

while True:
    status = transcribe.get_call_analytics_job(CallAnalyticsJobName=job_name)
    if status['CallAnalyticsJob']['CallAnalyticsJobStatus'] in ['COMPLETED', 'FAILED']:
        break
    print("Not ready yet...")
    time.sleep(5)

print(status)
```

I have run `aws configure` and I am using an IAM user that has AdministratorAccess.

> **botocore.exceptions.ClientError: An error occurred (AccessDeniedException) when calling the StartCallAnalyticsJob operation: User: MY_ARN_USER is not authorized to access this resource**

Any help please? Thank you very much!
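Since the error names a specific principal, one way to confirm which identity boto3 is actually signing requests with is `sts.get_caller_identity()`. Note also that `DataAccessRoleArn` expects an IAM role ARN rather than an S3 bucket ARN. A minimal diagnostic sketch, assuming the same credentials and region as the code above:

```python
# Minimal diagnostic sketch: print the principal boto3 is using, so the ARN in the
# AccessDeniedException can be matched against a known IAM user or role.
import boto3

sts = boto3.client("sts", "us-east-1")
identity = sts.get_caller_identity()
print(identity["Arn"])      # e.g. arn:aws:iam::123456789012:user/my-user
print(identity["Account"])
```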
0 answers | 0 votes | 4 views | asked 13 hours ago

Amazon SES - Do not want mail forwarded to Amazon as it may cause lost mail

Hi,

We have a (maybe) unique situation. Our company set up Amazon SES to send customer notifications, etc. This appears to be quite reliable! However, we have a potential issue that we hope we can resolve. We configured an MX record as required by Amazon. It is pretty simple to explain. We initially set up the MX records as required:

mail.example.com (priority 5)

And also:

feedback-smtp.us-east-2.amazonses.com (priority 10)

This exposes a potential issue: if we are receiving an email, and the server is rebooting for an update, or there is a temporary connection issue at the hosting provider, the server sending us the mail will fall back to feedback-smtp.us-east-2.amazonses.com. But because there are a number of mailboxes in the company, there is no way we can receive mail at feedback-smtp.us-east-2.amazonses.com. We need it to not accept any connection, so the SMTP server that sent the mail will re-queue it and try to send it to us again.

I deleted the MX record to prevent this, then I got the message below from Amazon:

> "IMPORTANT: If Amazon SES cannot detect the required MX record in 3 days, you will no longer be able to use "example.com" as a MAIL FROM domain. Consequently, any verified identities that are configured to use this MAIL FROM domain will not be able to send emails unless they are configured to fall back to the Amazon SES default MAIL FROM domain."

Is there any way to prevent feedback-smtp.us-east-2.amazonses.com from accepting any connection for incoming mail for our company, so it will not generate a permanent error? Yesterday there was an issue at our host with receiving connections from different regions (it appears). This caused Gmail to treat feedback-smtp.us-east-2.amazonses.com as our main server even though the mail was sent from mail.example.com, and it was bounced saying "The IP address sending this message does not have a 550-5.7.25 PTR record setup".

In summary, is there a workaround to prevent Amazon from receiving mail should our regular server be rebooting or have a temporary connection issue?

Hope this makes sense. Thanks, Steve
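The quoted warning mentions falling back to the Amazon SES default MAIL FROM domain. A minimal boto3 sketch of inspecting and setting that fallback behavior, assuming the SES v2 API calls below are the relevant ones and using example.com as a placeholder identity:

```python
# Minimal sketch (assumed relevant SES v2 calls): inspect the custom MAIL FROM
# configuration and have SES fall back to its default MAIL FROM domain when the
# MX record for the custom MAIL FROM domain cannot be found.
import boto3

ses = boto3.client("sesv2", region_name="us-east-2")

# Show the current MAIL FROM settings for the identity.
attrs = ses.get_email_identity(EmailIdentity="example.com")  # placeholder
print(attrs.get("MailFromAttributes"))

# Use the default amazonses.com MAIL FROM domain instead of failing sends
# when the custom MAIL FROM MX record is missing.
ses.put_email_identity_mail_from_attributes(
    EmailIdentity="example.com",            # placeholder
    MailFromDomain="mail.example.com",      # placeholder
    BehaviorOnMxFailure="USE_DEFAULT_VALUE",
)
```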
0 answers | 0 votes | 6 views | asked 15 hours ago

Not able to abort Redshift connection - having a statement in waiting state

At a certain point in time, all Java threads that abort Redshift DB connections become blocked in the service. Thread dump:

```
thread-2" #377 prio=5 os_prio=0 cpu=23073.41ms elapsed=1738215.53s tid=0x00007fd1c413a000 nid=0x5a1f waiting for monitor entry [0x00007fd193dfe000]
   java.lang.Thread.State: BLOCKED (on object monitor)
    at com.amazon.jdbc.common.SStatement.close(com.foo.drivers.redshift@1.2.43.1067/Unknown Source)
    - waiting to lock <0x00000006086ac800> (a com.amazon.redshift.core.jdbc42.PGJDBC42Statement)
    at com.amazon.jdbc.common.SConnection.closeChildStatements(com.foo.drivers.redshift@1.2.43.1067/Unknown Source)
    at com.amazon.jdbc.common.SConnection.closeChildObjects(com.foo.drivers.redshift@1.2.43.1067/Unknown Source)
    at com.amazon.jdbc.common.SConnection.abortInternal(com.foo.drivers.redshift@1.2.43.1067/Unknown Source)
    - locked <0x0000000607941af8> (a com.amazon.redshift.core.jdbc42.S42NotifiedConnection)
    at com.amazon.jdbc.jdbc41.S41Connection.access$000(com.foo.drivers.redshift@1.2.43.1067/Unknown Source)
    at com.amazon.jdbc.jdbc41.S41Connection$1.run(com.foo.drivers.redshift@1.2.43.1067/Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.9.1/ThreadPoolExecutor.java:1128)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.9.1/ThreadPoolExecutor.java:628)
    at java.lang.Thread.run(java.base@11.0.9.1/Thread.java:829)
```

These threads are blocked on the threads that are still running a statement on the same connections:

```
thread-366" #23081 daemon prio=5 os_prio=0 cpu=972668.98ms elapsed=1553882.44s tid=0x00007fd1642b3000 nid=0x73ff waiting on condition [0x00007fd1920ac000]
   java.lang.Thread.State: TIMED_WAITING (parking)
    at jdk.internal.misc.Unsafe.park(java.base@11.0.9.1/Native Method)
    - parking to wait for <0x00000006086ae350> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
    at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.9.1/LockSupport.java:234)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(java.base@11.0.9.1/AbstractQueuedSynchronizer.java:2123)
    at java.util.concurrent.ArrayBlockingQueue.poll(java.base@11.0.9.1/ArrayBlockingQueue.java:432)
    at com.amazon.jdbc.communications.InboundMessagesPipeline.validateCurrentContainer(com.foo.drivers.redshift@1.2.43.1067/Unknown Source)
    at com.amazon.jdbc.communications.InboundMessagesPipeline.getNextMessageOfClass(com.foo.drivers.redshift@1.2.43.1067/Unknown Source)
    at com.amazon.redshift.client.PGMessagingContext.doMoveToNextClass(com.foo.drivers.redshift@1.2.43.1067/Unknown Source)
    at com.amazon.redshift.client.PGMessagingContext.getReadyForQuery(com.foo.drivers.redshift@1.2.43.1067/Unknown Source)
    at com.amazon.redshift.client.PGMessagingContext.closeOperation(com.foo.drivers.redshift@1.2.43.1067/Unknown Source)
    at com.amazon.redshift.dataengine.PGAbstractQueryExecutor.close(com.foo.drivers.redshift@1.2.43.1067/Unknown Source)
    at com.amazon.jdbc.common.SStatement.replaceQueryExecutor(com.foo.drivers.redshift@1.2.43.1067/Unknown Source)
    at com.amazon.jdbc.common.SStatement.executeNoParams(com.foo.drivers.redshift@1.2.43.1067/Unknown Source)
    at com.amazon.jdbc.common.SStatement.execute(com.foo.drivers.redshift@1.2.43.1067/Unknown Source)
    - locked <0x00000006086ac800> (a com.amazon.redshift.core.jdbc42.PGJDBC42Statement)
```

The statement executed in these threads is: `statement.execute("SHOW SEARCH_PATH");`

Once the Java service is restarted, it works fine, but after a few days the issue comes up again.

Q1a. Why is the connection-closing thread blocked even if its child statement is in a queued state?
Q1b. Is there a way to force-close the connection?
Q2. Why are the child statements in the waiting state?
0 answers | 0 votes | 6 views | asked 15 hours ago

Kinesis Analytics for SQL Application Issue

Hello, I am having trouble properly handling a query with a tumbling window. My application sends 15 sensor data messages per second to a Kinesis Data Stream, which is used as the input stream for a Kinesis Data Analytics application. I am trying to run an aggregation query using a GROUP BY clause to process rows in a tumbling window with a 60-second interval. The output stream then sends data to a Lambda function. I expect the messages to arrive at the Lambda every 60 seconds, but instead they arrive much faster, almost every second, and the aggregations don't work as expected.

Here is the CloudFormation template that I am using:

```
ApplicationCode:
  CREATE OR REPLACE STREAM "SENSORCALC_STREAM" (
    "name" VARCHAR(16),
    "facilityId" INTEGER,
    "processId" BIGINT,
    "sensorId" INTEGER NOT NULL,
    "min_value" REAL,
    "max_value" REAL,
    "stddev_value" REAL);
  CREATE OR REPLACE PUMP "SENSORCALC_STREAM_PUMP" AS
    INSERT INTO "SENSORCALC_STREAM"
    SELECT STREAM "name", "facilityId", "processId", "sensorId",
      MIN("sensorData") AS "min_value",
      MAX("sensorData") AS "max_value",
      STDDEV_SAMP("sensorData") AS "stddev_value"
    FROM "SOURCE_SQL_STREAM_001"
    GROUP BY "facilityId", "processId", "sensorId", "name",
      STEP("SOURCE_SQL_STREAM_001".ROWTIME BY INTERVAL '60' SECOND);

KinesisAnalyticsSensorApplicationOutput:
  Type: "AWS::KinesisAnalytics::ApplicationOutput"
  DependsOn: KinesisAnalyticsSensorApplication
  Properties:
    ApplicationName: !Ref KinesisAnalyticsSensorApplication
    Output:
      Name: "SENSORCALC_STREAM"
      LambdaOutput:
        ResourceARN: !GetAtt SensorStatsFunction.Arn
        RoleARN: !GetAtt KinesisAnalyticsSensorRole.Arn
      DestinationSchema:
        RecordFormatType: "JSON"
```

I would really appreciate your help in pointing out what I am missing.

Thank you,
Serge
0 answers | 0 votes | 7 views | asked 16 hours ago

Glue Hudi: get the freshly added or updated records

Hello, I'm using the Hudi connector in Glue. First, I bulk-inserted the initial dataset into a Hudi table. I'm adding daily incremental records and I can query them using Athena. What I'm trying to do is get the newly added, updated, or deleted records into a separate Parquet file. I've tried different approaches and configurations using both copy-on-write and merge-on-read tables, but I still can't get the updates into a separate file.

I used these configurations in different combinations:

```
'className': 'org.apache.hudi',
'hoodie.datasource.hive_sync.use_jdbc': 'false',
'hoodie.datasource.write.precombine.field': 'ts',
'hoodie.datasource.write.recordkey.field': 'uuid',
'hoodie.payload.event.time.field': 'ts',
'hoodie.table.name': 'table_name',
'hoodie.datasource.hive_sync.database': 'hudi_db',
'hoodie.datasource.hive_sync.table': 'table_name',
'hoodie.datasource.hive_sync.enable': 'false',
# 'hoodie.datasource.write.partitionpath.field': 'date:SIMPLE',
'hoodie.datasource.write.hive_style_partitioning': 'true',
'hoodie.meta.sync.client.tool.class': 'org.apache.hudi.aws.sync.AwsGlueCatalogSyncTool',
'hoodie.datasource.write.table.type': 'COPY_ON_WRITE',
'path': 's3://path/to/output/',
# 'hoodie.datasource.write.operation': 'bulk_insert',
'hoodie.datasource.write.operation': 'upsert',
# 'hoodie.datasource.hive_sync.partition_extractor_class': 'org.apache.hudi.hive.NonPartitionedExtractor',
# 'hoodie.datasource.hive_sync.partition_extractor_class': 'org.apache.hudi.hive.MultiPartKeysValueExtractor',
'hoodie.datasource.write.keygenerator.class': 'org.apache.hudi.keygen.NonpartitionedKeyGenerator',
# 'hoodie.compaction.payload.class': 'org.apache.hudi.common.model.OverwriteWithLatestAvroPayload',
# 'hoodie.cleaner.policy': 'KEEP_LATEST_COMMITS',
'hoodie.cleaner.delete.bootstrap.base.file': 'true',
"hoodie.index.type": "GLOBAL_BLOOM",
'hoodie.file.index.enable': 'true',
'hoodie.bloom.index.update.partition.path': 'true',
'hoodie.bulkinsert.shuffle.parallelism': 1,
# 'hoodie.datasource.write.keygenerator.class': 'org.apache.hudi.keygen.CustomKeyGenerator'
```

Thank you.
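If the goal is to pull only the records written since a given commit, a Hudi incremental query may be the relevant feature. A minimal PySpark sketch, assuming the standard Hudi read options `hoodie.datasource.query.type` and `hoodie.datasource.read.begin.instanttime`, with placeholder paths and commit time:

```python
# Minimal sketch: read only the records committed after a given instant from the
# Hudi table, then write them out as a separate Parquet dataset.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("hudi-incremental-sketch").getOrCreate()

begin_instant = "20220627000000"  # placeholder: commit time to read changes since

incremental_df = (
    spark.read.format("hudi")
    .option("hoodie.datasource.query.type", "incremental")
    .option("hoodie.datasource.read.begin.instanttime", begin_instant)
    .load("s3://path/to/output/")  # Hudi table path from the question (placeholder)
)

incremental_df.write.mode("overwrite").parquet("s3://path/to/incremental/")  # placeholder
```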
1 answer | 0 votes | 10 views | asked 16 hours ago

Old data not inserted into Timestream: RejectedRecordsException

Hello! We have an IoT Core rule that fires when an MQTT message is published to a certain topic. The message structure is:

```
{
  "triggers": ["door"],
  "datetime": "2022-06-01T00:00:00Z",
  "errCodes": [100],
  "strength": 107,
  "net": "GMS",
  "eco": 0,
  "light": 0,
  "def": 0,
  "fan": 0,
  "defrost": 1,
  "Mdef": 0,
  "comp": 0,
  "door": 0,
  "Tcond": 38.1,
  "Tevap": 1,
  "Tint": 3.8,
  "topic": "abc/ar/data/NVC1/test-vscode-3"
}
```

We have a requirement that data buffered in remote devices can be sent to IoT Core, so we need to send a "datetime" field (the second field above) in the payload. The IoT Core rule invokes an AWS Lambda function, which generates a multi-measure record that is finally sent to Timestream:

```
{
  "Dimensions": [
    { "Name": "hw_model", "Value": "NVC1" },
    { "Name": "serial_device", "Value": "test-vscode-3" }
  ],
  "MeasureName": "multimeasuredata",
  "MeasureValueType": "MULTI",
  "MeasureValues": [
    { "Name": "Tint", "Value": "3.8", "Type": "DOUBLE" },
    { "Name": "Tevap", "Value": "1", "Type": "DOUBLE" }
  ],
  "Time": "1654041600000"
}
```

The Timestream table retention periods are:

* Memory store retention: 45 days
* Magnetic store retention: 180 days
* Magnetic store writes: ENABLED

The exception thrown is:

```
{
  "errorType": "RejectedRecordsException",
  "errorMessage": "One or more records have been rejected. See RejectedRecords for details.",
  "name": "RejectedRecordsException",
  "$fault": "client",
  "$metadata": {
    "httpStatusCode": 419,
    "requestId": "VKL72WIIMCBGQNWMMSQLK7CAAQ",
    "attempts": 1,
    "totalRetryDelay": 0
  },
  "RejectedRecords": [
    {
      "Reason": "The record timestamp is outside the time range [2022-06-17T15:21:13.756Z, 2022-06-27T22:51:04.174Z) of the data ingestion window.",
      "RecordIndex": 0
    }
  ],
  "__type": "com.amazonaws.timestream.v20181101#RejectedRecordsException",
  "message": "One or more records have been rejected. See RejectedRecords for details.",
  "stack": [
    "RejectedRecordsException: One or more records have been rejected. See RejectedRecords for details.",
    "    at deserializeAws_json1_0RejectedRecordsExceptionResponse (/var/task/node_modules/@aws-sdk/client-timestream-write/dist-cjs/protocols/Aws_json1_0.js:947:23)",
    "    at deserializeAws_json1_0WriteRecordsCommandError (/var/task/node_modules/@aws-sdk/client-timestream-write/dist-cjs/protocols/Aws_json1_0.js:888:25)",
    "    at processTicksAndRejections (node:internal/process/task_queues:96:5)",
    "    at async /var/task/node_modules/@aws-sdk/middleware-serde/dist-cjs/deserializerMiddleware.js:7:24",
    "    at async /var/task/node_modules/@aws-sdk/middleware-signing/dist-cjs/middleware.js:11:20",
    "    at async StandardRetryStrategy.retry (/var/task/node_modules/@aws-sdk/middleware-retry/dist-cjs/StandardRetryStrategy.js:51:46)",
    "    at async /var/task/node_modules/@aws-sdk/middleware-logger/dist-cjs/loggerMiddleware.js:6:22",
    "    at async Runtime.exports.handler (/var/task/lambda.js:58:20)"
  ]
}
```

The record timestamp (2022-06-01) is only 27 days old, so we are not falling outside the memory store retention period (45 days) in this example, yet the range returned in the exception is [2022-06-17T15:21:13.756Z, 2022-06-27T22:51:04.174Z), and I don't know why.

Do you have any idea why this is the ingestion range, and hence why the record cannot be inserted? Thanks!
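For completeness, a minimal boto3 sketch of one way to compare a record's timestamp against the table's configured memory store retention, using the Timestream Write `DescribeTable` call; the database and table names and the region are placeholders:

```python
# Minimal sketch: fetch the table's memory store retention and check whether a
# record timestamp (epoch milliseconds) still falls inside that window.
import datetime
import boto3

tsw = boto3.client("timestream-write", region_name="us-east-1")  # placeholder region

table = tsw.describe_table(DatabaseName="my_database", TableName="my_table")  # placeholders
retention_hours = int(table["Table"]["RetentionProperties"]["MemoryStoreRetentionPeriodInHours"])

record_time_ms = 1654041600000  # "Time" value from the rejected record (2022-06-01T00:00:00Z)
record_time = datetime.datetime.utcfromtimestamp(record_time_ms / 1000)

window_start = datetime.datetime.utcnow() - datetime.timedelta(hours=retention_hours)
print(f"Memory store window starts at {window_start:%Y-%m-%dT%H:%M:%SZ}")
print("Record inside memory store window:", record_time >= window_start)
```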
1 answer | 0 votes | 27 views | asked a day ago

Session Manager unable to connect to instance in public subnet

I can't seem to get an instance in a public subnet to connect via Session Manager. The subnet that the instance ends up deploying to has `0.0.0.0/0` routed to an internet gateway. The security group has no inbound rules and an outbound rule allowing `0.0.0.0/0`. The instance profile has the `AmazonSSMManagedInstanceCore` managed policy, the instance is on a public subnet with an internet gateway and a security group that allows all outbound requests, and it's running Amazon Linux 2, so the SSM agent should be installed. I even added a userData command to install the latest agent again, but that didn't change anything.

From the console, I see the following error message:

> **We weren't able to connect to your instance. Common reasons for this include:**
> * SSM Agent isn't installed on the instance. You can install the agent on both [Windows instances](https://docs.aws.amazon.com/en_us/console/systems-manager/agent-windows) and [Linux instances](https://docs.aws.amazon.com/en_us/console/systems-manager/agent-linux).
> * The required [IAM instance profile](https://docs.aws.amazon.com/en_us/console/systems-manager/qs-instance-profile) isn't attached to the instance. You can attach a profile using [AWS Systems Manager Quick Setup](https://docs.aws.amazon.com/en_us/console/systems-manager/qs-quick-setup).
> * Session Manager setup is incomplete. For more information, see [Session Manager Prerequisites](https://docs.aws.amazon.com/en_us/console/systems-manager/session-manager-prerequisites).

Here's a sample of CDK code that replicates the problem:

```typescript
const region = 'us-east-2'

const myInstanceRole = new Role(this, 'MyRole', {
  assumedBy: new ServicePrincipal('ec2.amazonaws.com'),
})
myInstanceRole.addManagedPolicy(
  ManagedPolicy.fromAwsManagedPolicyName('AmazonSSMManagedInstanceCore')
)

const myUserData = UserData.forLinux()
myUserData.addCommands(
  `sudo yum install -y https://s3.${region}.amazonaws.com/amazon-ssm-${region}/latest/linux_amd64/amazon-ssm-agent.rpm`,
  'sudo systemctl restart amazon-ssm-agent',
)

const myInstance = new Instance(this, 'MyInstance', {
  instanceType: InstanceType.of(InstanceClass.C6I, InstanceSize.LARGE),
  machineImage: MachineImage.latestAmazonLinux({
    generation: AmazonLinuxGeneration.AMAZON_LINUX_2,
    cpuType: AmazonLinuxCpuType.X86_64,
  }),
  vpc: Vpc.fromLookup(this, 'ControlTowerVPC', {
    vpcName: 'aws-controltower-VPC',
  }),
  vpcSubnets: {
    subnetType: SubnetType.PUBLIC,
  },
  blockDevices: [
    {
      deviceName: '/dev/xvda',
      volume: BlockDeviceVolume.ebs(30, {
        volumeType: EbsDeviceVolumeType.GP2,
        encrypted: true,
      }),
    },
  ],
  userData: myUserData,
  role: myInstanceRole,
  detailedMonitoring: true,
})
```
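A minimal diagnostic sketch that checks whether the instance has registered with Systems Manager at all (the instance ID is a placeholder); if the instance never appears in this list, the agent is likely unable to reach the SSM endpoints or the instance profile is not taking effect:

```python
# Minimal diagnostic sketch: list managed instances known to Systems Manager and
# check whether the problem instance (placeholder ID) has registered and is pinging.
import boto3

ssm = boto3.client("ssm", region_name="us-east-2")

resp = ssm.describe_instance_information(
    Filters=[{"Key": "InstanceIds", "Values": ["i-0123456789abcdef0"]}]  # placeholder
)
for info in resp["InstanceInformationList"]:
    print(info["InstanceId"], info["PingStatus"], info.get("AgentVersion"))
```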
1 answer | 0 votes | 28 views | asked a day ago

Aurora upgrade 2 to 3 / MySQL 5.7 to 8.0: potential bug in pre-check validation (deprecated words)

We have noticed that the pre-checks for the upgrade from MySQL 5.7 to MySQL 8 have issues with character combinations that "resemble" deprecated keywords. For example, the deprecated "GROUP BY ... DESC" is one of those constructs (https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.MySQL.html#USER_UpgradeDBInstance.MySQL.57to80Prechecks):

"There must be no queries and stored program definitions from MySQL 8.0.12 or lower that use ASC or DESC qualifiers for GROUP BY clauses."

While our stored procedures use GROUP BYs, there is no associated "DESC" keyword with them. However, the character sequence does appear in the stored procedures in various forms:

* There is a call to another stored procedure called "update_fc**desc**ription();". It has the characters "desc" within the name.
* There are columns in the queries (table columns) with names like "blah**Desc**riptionblah".
* There is a block comment containing the word "**Desc**ription:" that documents the stored procedure.

However, there are no "DESC" keywords associated with the "GROUP BY". For testing:

* I deleted the word from the comment, and that issue no longer appeared as an error.
* I renamed the call to the other stored procedure from update_fc**desc**ription(); to update_fc**dxexscxrixp**tion();, and that issue no longer appeared as an error.
* The columns containing the characters "desc" I couldn't work around without a lot of changes to the stored procedure.

There is a Stack Overflow question outlining this behavior too: https://stackoverflow.com/questions/71412470/aws-mysql-aurora-major-version-2-3-upgrade-pre-checks-obsolete-procedure

And a re:Post question as well: https://repost.aws/questions/QUWJzlcpitRoGM0woZVOylBQ/aurora-2-to-3-mysql-5-7-to-8-0-upgrade-pre-check-incorrect-validation-on-store-procedure

This is clearly a bug in the pre-check process and is limiting our upgrade from MySQL 5.7 to 8. Any updates on this being fixed/addressed? Thank you.
0 answers | 0 votes | 28 views | asked a day ago

How to properly and completely terminate a multipart upload?

In our Java app we have what is basically boilerplate S3 V2 code for creating a multipart upload of a file to S3. We absolutely need the ability to cancel the upload and recover all resources used by the upload process, INCLUDING the CPU and network bandwidth.

Initially we tried simply cancelling the completionFuture on the FileUpload, but that doesn't work. I can watch the network traffic continue to send data to S3 until the entire file is uploaded. Cancelling the completionFuture seems to stop S3 from reconstructing the file, but that's not sufficient. In most cases we need to cancel the upload because we need the network bandwidth for other things, like streaming video.

I found the function shutdownNow() in the TransferManager class, and that seemed promising, but it looks like it's not available in the V2 SDK (I found it in the V1 sources). I've seen a function getSubTransfers() in the V1 MultipleFileUpload class that returns a list of Uploads, and the Upload class has an abort() function, but again, we need to use V2 for other reasons.

I've also found and implemented code that calls listMultipartUploads, looks for the upload we want to cancel, creates an abortMultipartUploadRequest, and issues it, and the threads keep on rolling, and rolling, and rolling...

Is there a "correct" way of terminating a multipart upload, including the threads processing the upload?
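For reference, the S3 calls described in the second-to-last paragraph (ListMultipartUploads followed by AbortMultipartUpload) sketched in boto3 for brevity, since the other examples in this listing use Python; the bucket name and key prefix are placeholders. As the question notes, aborting releases the stored parts on the S3 side but does not by itself stop an SDK's in-flight transfer threads:

```python
# Minimal sketch: abort every in-progress multipart upload under a key prefix.
# This frees the parts stored in S3; stopping client-side upload threads is a
# separate concern (the subject of the question above).
import boto3

s3 = boto3.client("s3")
bucket = "my-bucket"    # placeholder
key_prefix = "videos/"  # placeholder

resp = s3.list_multipart_uploads(Bucket=bucket, Prefix=key_prefix)
for upload in resp.get("Uploads", []):
    s3.abort_multipart_upload(
        Bucket=bucket,
        Key=upload["Key"],
        UploadId=upload["UploadId"],
    )
    print("Aborted", upload["Key"], upload["UploadId"])
```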
0 answers | 0 votes | 8 views | asked a day ago