Questions tagged with Amazon Simple Queue Service

I've added an HTTP API route integration that sends a message to an SQS queue. I would like to map the response to something other than XML in the API response. If the only option is to map to a response header, that may work, but the only way to select a value from the SendMessage response is to use `$response.body.<json_path>`, which will not work with XML. Is there any way to have this integration (SQS-SendMessage) not return XML? If not, is there any way to map an XML value to a response header or body (without using a Lambda between the endpoint and the queue)?
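For context, the response shape I'd like to expose is what the SDKs already produce after parsing the Query API's XML; a minimal boto3 sketch (the queue URL is a placeholder):

```python
import boto3

# boto3 parses the SendMessage XML response into a plain dict, which is
# the JSON-like shape I'd like the HTTP API response mapping to see.
sqs = boto3.client("sqs")
response = sqs.send_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/my-queue",  # placeholder
    MessageBody="hello",
)
# e.g. {'MessageId': '...', 'MD5OfMessageBody': '...', ...}
print(response["MessageId"])
```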
1
answers
0
votes
20
views
asked 2 days ago
We are migrating a project from Spring Boot 2.7.3 to 3.0.0 and use the `amazon-sqs-java-messaging-lib` dependency. We create the bean as follows:

```java
@Bean
public ConnectionFactory sqsConnectionFactory() {
    // Wraps the buffered async SQS client in a JMS ConnectionFactory
    return new SQSConnectionFactory(
            new ProviderConfiguration(),
            new AmazonSQSBufferedAsyncClient(amazonSQSAsync, queueBufferConfig));
}
```

This returns a `javax.jms.ConnectionFactory`, but Spring Boot 3.0 no longer supports `javax.jms.ConnectionFactory`; it expects `jakarta.jms.ConnectionFactory` from the Jakarta EE 9 API. It looks like `amazon-sqs-java-messaging-lib` doesn't provide Jakarta support. Is there any other way to create the connection?
0
answers
0
votes
19
views
asked 3 days ago
Hi there, under the "database per service" design pattern, should every integration between microservices go through a messaging system? We have an application where users can upload videos. The API is exposed via GraphQL, and we use federation to route video uploads to a cluster of servers responsible for creating the video record in the database (RDS). Once the video is uploaded to S3, a service triggered by an S3 event starts a MediaConvert job to create an HLS profile. Once that job completes, we need to mark the video as available to viewers (by updating the table). What is the best practice here? Should the convert service connect to the database and update the record directly? Call a service API to update the record? Or send an SQS message that is handled by the cluster that is connected to the database?
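To make the SQS option concrete, here is a hedged sketch of the handler we're considering: a Lambda reacting to the EventBridge "MediaConvert Job State Change" event and forwarding a message to SQS (the queue URL and message shape are placeholders):

```python
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/video-status"  # placeholder

def lambda_handler(event, context):
    """Triggered by the EventBridge 'MediaConvert Job State Change' event."""
    detail = event["detail"]
    if detail["status"] == "COMPLETE":
        # Hand the update off to the cluster that owns the database,
        # instead of updating RDS directly from the convert service.
        sqs.send_message(
            QueueUrl=QUEUE_URL,
            MessageBody=json.dumps({"jobId": detail["jobId"], "status": "AVAILABLE"}),
        )
```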
1
answers
0
votes
27
views
Luciano
asked 4 days ago
Hi Team, is it possible to send messages from SQS to SES, and is there any configuration for this in the console? I am currently working in the Ireland region and have created a topic; while subscribing, the only option I found is Amazon SQS, but I want to send an email. Please let me know if there is a way to send messages from SQS to SES in the console.
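To clarify what I'm trying to achieve, a hedged sketch of the kind of glue I imagine would be needed, an SQS-triggered Lambda that calls SES (the addresses are placeholders, and the sender must be an SES-verified identity):

```python
import boto3

ses = boto3.client("ses")

def lambda_handler(event, context):
    """Invoked by an SQS event source mapping; emails each message body."""
    for record in event["Records"]:
        ses.send_email(
            Source="sender@example.com",  # placeholder; must be SES-verified
            Destination={"ToAddresses": ["recipient@example.com"]},  # placeholder
            Message={
                "Subject": {"Data": "Message from SQS"},
                "Body": {"Text": {"Data": record["body"]}},
            },
        )
```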
1
answers
0
votes
31
views
asked 13 days ago
We are looking at ways of unifying our DLQ handling, but we are facing an issue with identifying the actual source of a failed message. Our current approach is to have a unique DLQ for each Lambda or similar source (SNS, Scheduler, etc.), and then a Lambda with the DLQ as its event source that further processes the failed messages (automatic retries, storing in DynamoDB for manual handling, etc.). But this becomes cumbersome, since we will quickly incur quite high fees for the polling (we have a lot of places where we need DLQs). Our dream scenario would be to set a tag identifying the source when we add the DLQ, but this isn't available. An alternative would be to use the SenderId in the SQS message to identify the source, but that doesn't really seem possible either. Does anyone have any suggestions?
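One idea we're exploring, shown as a hedged sketch: a single consumer Lambda subscribed to many DLQs, using the `eventSourceARN` field that the SQS event source mapping puts on each record to tell the DLQs apart (the routing logic is hypothetical):

```python
def lambda_handler(event, context):
    """One consumer for many DLQs; each record carries the ARN of the
    queue it was polled from, which identifies the failing source."""
    for record in event["Records"]:
        queue_arn = record["eventSourceARN"]  # e.g. arn:aws:sqs:eu-west-1:123456789012:orders-dlq
        queue_name = queue_arn.split(":")[-1]
        # Hypothetical routing: map the queue name back to the owning service.
        print(f"Failed message from {queue_name}: {record['body']}")
```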
1
answers
0
votes
26
views
asked 16 days ago
Our S3 requests sometimes fail. This could be for a few reasons: the S3 service itself has an internal error, or the rate of data access is too high. So we are planning to implement a DLQ mechanism for S3. Is it possible to use the AWS SQS DLQ mechanism for this? What is the best option to implement a DLQ for S3?
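For illustration, a hedged sketch of the pattern we have in mind: catch the failing S3 call and park the request on an SQS queue for later retry (the bucket, key, and queue URL are placeholders):

```python
import json
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
sqs = boto3.client("sqs")
DLQ_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/s3-retry-dlq"  # placeholder

def put_with_dlq(bucket, key, body):
    try:
        s3.put_object(Bucket=bucket, Key=key, Body=body)
    except ClientError as err:
        code = err.response["Error"]["Code"]
        if code in ("InternalError", "SlowDown"):  # transient S3 failures
            # Park the failed request so a consumer can retry it later.
            sqs.send_message(
                QueueUrl=DLQ_URL,
                MessageBody=json.dumps({"bucket": bucket, "key": key}),
            )
        else:
            raise
```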
2
answers
0
votes
51
views
asked 20 days ago
Hi, I have been trying Security Lake for a few days. After dealing with lots of errors, I was finally able to activate Security Lake in my account. I then wanted to ingest that data into Splunk, so I followed the official document to connect AWS to Splunk: https://github.com/splunk/splunk-add-on-for-amazon-security-lake/blob/main/Splunk%20Add-on%20for%20Amazon%20Security%20Lake.pdf It seems the AWS account is connected, but there is a permission issue with SQS: when I try to configure the input, I get an "Access denied to ListQueues" error. I checked the permissions, and they have already been granted to the role. Security Lake is completely new in AWS and there are not many resources available to look at, so any help is appreciated. I am attaching a screenshot of the error in Splunk. ![Screenshot of the error in Splunk](/media/postImages/original/IMfr1KzRrIQ-WyfobbUXsxQg)
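To narrow down whether the role itself can call the API, a quick hedged check with boto3, run under the same role credentials the add-on uses (the profile name is a placeholder):

```python
import boto3

# Assumes credentials for the same IAM role the Splunk add-on uses,
# e.g. via a named profile. The profile name here is a placeholder.
session = boto3.Session(profile_name="security-lake-role")
sqs = session.client("sqs", region_name="us-east-1")

# If this raises AccessDenied, the sqs:ListQueues permission (or an
# explicit deny / SCP / permission boundary) is the problem, not Splunk.
for url in sqs.list_queues().get("QueueUrls", []):
    print(url)
```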
0
answers
0
votes
17
views
asked 24 days ago
Hello forum, I am facing a weird, intermittent issue. I have an SQS FIFO queue subscribed to an SNS FIFO topic, with a filter policy so that the queue only receives messages with specific attributes. This setup works fine, but over the last few weeks the SQS FIFO queue has suddenly stopped receiving any messages. This has happened 5 or 6 times in the last three weeks. Here are my observations from the tests I performed to confirm that SNS is fine (publishing to the SNS FIFO topic succeeds) and that the issue is with this particular SQS FIFO queue:

1. I created another SQS FIFO queue with a different name but the same configuration (subscribed to the same SNS FIFO topic with the same filter policy). The new queue immediately started receiving messages, while the old queue did not.
2. If I subscribe the existing SQS FIFO queue to a different SNS FIFO topic and publish to that topic, the existing queue receives nothing, while another SQS FIFO queue subscribed to the same topic receives the published messages.
3. The old SQS FIFO queue starts receiving messages again by itself after a couple of hours.
4. If I delete the queue and create a new one with the same name and configuration, the new queue starts receiving messages immediately.
5. All SQS FIFO queues are affected for one specific SNS FIFO topic, which I publish to from AWS Lambda.
6. Other SNS FIFO topic to SQS FIFO queue integrations are not affected at all and work fine all the time.

I have already tried [this](https://repost.aws/questions/QURhOWYtseQ6Ovy6GGpSTnTw/sqs-not-receiving-sns-message) but it didn't help. Has anyone faced a similar issue, and can you help me with this? (A sketch of my publish call is below.)
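For reference, a hedged sketch of the Lambda publish call, with an explicit `MessageDeduplicationId` to rule out FIFO deduplication silently dropping messages (the topic ARN, group ID, and attribute name are placeholders):

```python
import json
import uuid
import boto3

sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:eu-west-1:123456789012:events.fifo"  # placeholder

def publish(payload, event_type):
    sns.publish(
        TopicArn=TOPIC_ARN,
        Message=json.dumps(payload),
        MessageGroupId="default",  # placeholder grouping
        # Unique per publish, so FIFO deduplication cannot drop repeats.
        MessageDeduplicationId=str(uuid.uuid4()),
        # The attribute the SQS subscription's filter policy matches on.
        MessageAttributes={
            "eventType": {"DataType": "String", "StringValue": event_type}
        },
    )
```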
1
answers
0
votes
36
views
asked a month ago
Does the Amazon EventBridge Pipes integration with SQS use long polling? And what is the pricing like for this integration?
1
answers
0
votes
30
views
profile picture
m0ltar
asked a month ago
What is a scalable solution for running a Lambda at specific times in the future? We are building a SaaS platform in which our users can request a task to happen on a recurring schedule (a simple parameter of `{"frequency": "NNN minutes"}`). They can also edit or delete this schedule, affecting future events. There will be thousands of users requesting millions of tasks over time (with frequencies from 1 minute to 1 year). The tasks will not be created in the order they should execute, and I need high integrity on completion. I've ruled out plain SQS. I've looked at CloudWatch Events, but have concerns about scaling. I've also considered putting the tasks into a DB table and then polling (sketched below). Is there something else I should look at?
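To make the polling option concrete, a hedged sketch of the table approach under consideration, assuming a hypothetical DynamoDB table `scheduled_tasks` with a numeric `due_at` attribute and a poller that runs every minute; a full scan like this is exactly the part I'm worried won't scale:

```python
import time
import boto3
from boto3.dynamodb.conditions import Attr

dynamodb = boto3.resource("dynamodb")
lambda_client = boto3.client("lambda")
table = dynamodb.Table("scheduled_tasks")  # hypothetical table

def poll_once():
    """Scan for tasks that are due, invoke the worker, and reschedule."""
    now = int(time.time())
    due = table.scan(FilterExpression=Attr("due_at").lte(now))["Items"]
    for task in due:
        lambda_client.invoke(
            FunctionName="task-worker",  # hypothetical worker function
            InvocationType="Event",      # asynchronous invoke
            Payload=task["payload"].encode(),
        )
        # Push due_at forward by the task's recurring frequency.
        table.update_item(
            Key={"task_id": task["task_id"]},
            UpdateExpression="SET due_at = :next",
            ExpressionAttributeValues={":next": now + int(task["frequency_min"]) * 60},
        )
```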
2
answers
0
votes
32
views
asked a month ago
[ENV]
- Trigger Lambda: ap-northeast-2
- SQS queue: ap-northeast-1
- Crawler EC2: ap-northeast-1

[Pipeline]
Trigger Lambda -> send_message -> SQS queue -> Crawler EC2

[Permissions]
- Trigger Lambda: queue.grant_send_messages(trigger_lambda)
- Crawler EC2: queue.grant_consume_messages(crawler_cluster_worker)

[Owner]
All resources belong to the root account.

[Python code in Trigger Lambda]
```python
sqs_client.send_message(
    QueueUrl=CRAWLER_SQS_MESSAGE_QUEUE_URL,
    MessageBody=json.dumps(sqs_message)
)
```

[Trigger Lambda's error detail]
```
[ERROR] ClientError: An error occurred (AccessDenied) when calling the SendMessage operation: Access to the resource https://sqs.ap-northeast-1.amazonaws.com/ is denied.
Traceback (most recent call last):
  File "/var/task/main.py", line 136, in lambda_handler
    raise e
  File "/var/task/main.py", line 116, in lambda_handler
    sqs_client.send_message(
  File "/var/task/botocore/client.py", line 530, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/var/task/botocore/client.py", line 960, in _make_api_call
    raise error_class(parsed_response, operation_name)
```

[Question]
The trigger Lambda's send_message call usually succeeds, but it occasionally fails with the error above. There are no issues with permissions or regions, so we judge it to be a temporary error. Can you tell me what could cause this intermittent error?
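Since the failure looks transient, one mitigation we're considering, shown as a hedged sketch: retry the send with simple backoff inside the handler (the retry counts are arbitrary):

```python
import time
from botocore.exceptions import ClientError

def send_with_retry(sqs_client, queue_url, body, attempts=3):
    """Retry SendMessage a few times on transient ClientError failures."""
    for attempt in range(attempts):
        try:
            return sqs_client.send_message(QueueUrl=queue_url, MessageBody=body)
        except ClientError:
            if attempt == attempts - 1:
                raise
            time.sleep(2 ** attempt)  # 1s, 2s, ... simple exponential backoff
```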
1
answers
0
votes
68
views
asked a month ago
![Screenshot](/media/postImages/original/IMTkHHDgJlQF2qOyL2ZM07yw) Hi all. I am using a CloudFormation template to create some resources in my AWS account. All the AWS resources in the template are created successfully, but the SQS queue always fails to create. I have tried running the same template in different AWS accounts, and it fails in all of them. It's weird behavior, as just this morning I successfully created a stack in one of the accounts, but it has been failing since then. Can someone help me troubleshoot this issue? ![Screenshot](/media/postImages/original/IM-ZsFdJLmRc2VO-1UtkO-nQ)
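While troubleshooting, a hedged sketch of how to pull the exact failure reason for the queue resource out of the stack events (the stack name is a placeholder):

```python
import boto3

cfn = boto3.client("cloudformation")

# Print the status reason for every failed resource in the stack.
events = cfn.describe_stack_events(StackName="my-stack")["StackEvents"]  # placeholder name
for e in events:
    if e["ResourceStatus"].endswith("FAILED"):
        print(e["LogicalResourceId"], e.get("ResourceStatusReason"))
```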
1
answers
0
votes
35
views
asked 2 months ago