Hi,
Have you checked your Lambda function's memory usage to confirm that this is the problem? Unfortunately, if the memory (10,240 MB) or timeout (15 minutes) limits do not allow you to carry out the operation and you cannot optimize the query, you should look for another service that better fits your use case.
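One way to confirm the peak memory usage is to query the function's CloudWatch logs with Logs Insights. A minimal sketch in Python with boto3, assuming the default /aws/lambda/&lt;function-name&gt; log group; the function name is a placeholder:

```python
import time
import boto3

FUNCTION_NAME = "my-report-function"  # placeholder; use your function's name

logs = boto3.client("logs")

# Query the last 24 hours of Lambda REPORT lines for peak memory and duration.
query_id = logs.start_query(
    logGroupName=f"/aws/lambda/{FUNCTION_NAME}",
    startTime=int(time.time()) - 24 * 3600,
    endTime=int(time.time()),
    queryString=(
        'filter @type = "REPORT" '
        "| stats max(@maxMemoryUsed / 1000 / 1000) as maxMemoryUsedMB, "
        "max(@duration) as maxDurationMs"
    ),
)["queryId"]

# Poll until the query finishes, then print the results.
while True:
    result = logs.get_query_results(queryId=query_id)
    if result["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)

for row in result.get("results", []):
    print({field["field"]: field["value"] for field in row})
```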
If you have maxed out Lambda's limits, then Lambda is not the right service for what you are trying to achieve and you will need to look at alternatives. There is not enough information in your question about the use case to suggest any specific alternatives:
- What will happen to the results of the queries afterwards?
- Why is it necessary to run the queries in a Lambda function instead of creating a view in the database and querying that view?
- How frequently are these Lambda functions invoked, and what triggers their execution?
- How big is the data returned (1 million rows can be small or extremely large depending on the type/number of fields)?
After fetching the data I need to write it to an xlsx file. When I run this locally it consumes more than 16 GB of memory, whereas Lambda in my case can only be configured with 10 GB, so it seems clear that Lambda does not have enough memory to fetch and write the data, and I am really confused. I should also mention that we have more than 21 tables and more than 600 columns; even joining only 4 of the tables, which together have approximately 300-350 columns, does not work, so I think the memory is not sufficient. Our job is to generate a report. I hope you understand my scenario and can help us accordingly. Thank you in advance.
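For reference, that kind of memory blow-up usually comes from holding the full result set and the whole workbook in memory at once. A minimal sketch of a lower-memory approach, assuming a PostgreSQL source read through a psycopg2 server-side cursor and xlsxwriter's constant_memory mode; the connection details and the report_view name are placeholders:

```python
import psycopg2
import xlsxwriter

# Placeholder connection settings; replace with your own database details.
conn = psycopg2.connect(
    host="db-host", dbname="reports", user="report_user", password="secret"
)

# A named (server-side) cursor fetches rows in batches instead of loading
# the full result set into memory at once.
cur = conn.cursor(name="report_cursor")
cur.itersize = 10_000
cur.execute("SELECT * FROM report_view")  # report_view is a hypothetical joined view

# constant_memory mode flushes each finished row straight to disk.
workbook = xlsxwriter.Workbook("/tmp/report.xlsx", {"constant_memory": True})
worksheet = workbook.add_worksheet("Report")

row_idx = 0
for row in cur:
    worksheet.write_row(row_idx, 0, row)
    row_idx += 1

workbook.close()
cur.close()
conn.close()
```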
Since you've already developed the application/process for it, one of many possible solutions is to deploy it on ECS and run it as a scheduled task, which allows you to get around the memory and timeout limitations of Lambda functions.
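A minimal sketch of scheduling such a task with boto3, assuming the report job is already packaged as a Fargate task definition and that the cluster, subnet, and the IAM role EventBridge assumes to launch the task already exist; all names and ARNs below are placeholders:

```python
import boto3

events = boto3.client("events")

# Placeholder ARNs; replace with your own cluster, task definition, and role.
CLUSTER_ARN = "arn:aws:ecs:eu-west-1:123456789012:cluster/report-cluster"
TASK_DEF_ARN = "arn:aws:ecs:eu-west-1:123456789012:task-definition/report-task:1"
EVENTS_ROLE_ARN = "arn:aws:iam::123456789012:role/ecsEventsRole"

# Rule that fires once a day; any EventBridge schedule expression works here.
events.put_rule(
    Name="daily-report",
    ScheduleExpression="rate(1 day)",
    State="ENABLED",
)

# Point the rule at a Fargate task in the existing cluster.
events.put_targets(
    Rule="daily-report",
    Targets=[
        {
            "Id": "report-task",
            "Arn": CLUSTER_ARN,
            "RoleArn": EVENTS_ROLE_ARN,
            "EcsParameters": {
                "TaskDefinitionArn": TASK_DEF_ARN,
                "TaskCount": 1,
                "LaunchType": "FARGATE",
                "NetworkConfiguration": {
                    "awsvpcConfiguration": {
                        "Subnets": ["subnet-0123456789abcdef0"],
                        "AssignPublicIp": "ENABLED",
                    }
                },
            },
        }
    ],
)
```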