How should I configure my EMR cluster to handle large data?

I have an EMR cluster and I have used the Treasure Data connector to read tables into a DataFrame using PySpark. The tables I'm reading have roughly 100 million to 500 million rows. The read itself is fast, completing in a few seconds, but a count or show operation takes about an hour, which is far too long. I have configured my cluster as shown in the attached screenshots.


Can you help me debug the issue and configure the cluster correctly?

asked a month ago · 310 views

1 Answer
Hello,

Thank you for writing on re:Post.

I understand that you want to improve the performance of your EMR cluster when processing large datasets.

First, I would suggest tuning your Spark memory parameters using the AWS best-practices guide: [+] https://aws.github.io/aws-emr-best-practices/docs/bestpractices/Applications/Spark/troubleshooting_and_tuning/ It will help you size the driver and executor memory for the instance type being used; the default executor memory of 8 GB looks low for a workload of this size.
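As a sketch, these parameters can be set through an EMR configuration classification at cluster creation. The values below are illustrative placeholders only; the right numbers depend on your instance type and should come from the sizing guidance in the best-practices guide above:

```json
[
  {
    "Classification": "spark-defaults",
    "Properties": {
      "spark.executor.memory": "18g",
      "spark.executor.cores": "4",
      "spark.executor.memoryOverhead": "2g",
      "spark.driver.memory": "18g"
    }
  }
]
```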

Second, I recommend reading about maximizeResourceAllocation, which lets the executors use the maximum resources available on each node in the cluster: [+] https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-spark-configure.html#emr-spark-maximizeresourceallocation
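Per the linked documentation, this is enabled with the `spark` classification when you create the cluster (note that it overrides manually set executor memory/core values, so use it instead of, not alongside, the tuning above):

```json
[
  {
    "Classification": "spark",
    "Properties": {
      "maximizeResourceAllocation": "true"
    }
  }
]
```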

Next, check the CloudWatch metrics for the cluster while the count or show is running to see whether resources are falling short and causing the delay, especially metrics such as ContainerPending and YARNMemoryAvailablePercentage. If they indicate high load, you may need to raise the maximum cluster size in the Managed Scaling settings. Also note that the Managed Scaling limits are expressed in capacity units, which are not the same as the number of nodes: with instance fleets, each instance type carries a weight that is assigned during cluster creation. Please check those weights and set the scaling limits accordingly.
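For illustration, the Managed Scaling limits can be adjusted from the CLI. The cluster ID and the unit values below are placeholders; with instance fleets the limits are in InstanceFleetUnits, so translate your desired node counts through the weights you assigned at cluster creation:

```
# Placeholder cluster ID and capacity values -- adjust to your fleet weights.
aws emr put-managed-scaling-policy \
  --cluster-id j-XXXXXXXXXXXXX \
  --managed-scaling-policy '{
    "ComputeLimits": {
      "UnitType": "InstanceFleetUnits",
      "MinimumCapacityUnits": 2,
      "MaximumCapacityUnits": 40
    }
  }'
```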

I hope I was able to address your query. Thanks and have a great day ahead!

AWS
SUPPORT ENGINEER
answered a month ago
