How to fine-tune the Llama 2 7B model from JumpStart using PDF data


I have multiple PDF files, each consisting of a number of paragraphs, and I need to fine-tune the Llama 2 7B model so I can ask questions about the content of the PDFs. Earlier, I tried Llama 2 7B Chat, where I provided the data by extracting the text from the PDFs using LangChain.

Now I would like to fine-tune the Llama 2 7B model itself. Can someone guide me on how to fine-tune the model with PDF data, specifically what the correct format is for preprocessing the data and how to pass the data in for fine-tuning?

Abdul
Asked 9 months ago · Viewed 4184 times
1 Answer

Hi, the optimal path is to use AWS Textract to convert your PDFs to text, and then train your ML model on this text (a short sketch of this step is included below).

AWS Textract service page: https://aws.amazon.com/textract/

Textract developer guide: https://docs.aws.amazon.com/textract/latest/dg/what-is.html
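For illustration, here is a minimal sketch (not from the links above) of that PDF-to-text step using boto3 and Textract's asynchronous text-detection API, which handles multi-page PDFs stored in S3. The bucket and key names are placeholders you would replace with your own.

```python
import time
import boto3

textract = boto3.client("textract")

def pdf_to_text(bucket: str, key: str) -> str:
    """Run asynchronous text detection on a PDF in S3 and return its text."""
    job = textract.start_document_text_detection(
        DocumentLocation={"S3Object": {"Bucket": bucket, "Name": key}}
    )
    job_id = job["JobId"]

    # Poll until the job finishes (consider SNS notifications in production).
    while True:
        result = textract.get_document_text_detection(JobId=job_id)
        if result["JobStatus"] in ("SUCCEEDED", "FAILED"):
            break
        time.sleep(5)
    if result["JobStatus"] == "FAILED":
        raise RuntimeError(f"Textract job {job_id} failed")

    # Collect LINE blocks across all paginated result pages.
    lines = []
    while True:
        lines += [b["Text"] for b in result["Blocks"] if b["BlockType"] == "LINE"]
        next_token = result.get("NextToken")
        if not next_token:
            break
        result = textract.get_document_text_detection(JobId=job_id, NextToken=next_token)
    return "\n".join(lines)

# Hypothetical bucket/key for illustration only.
text = pdf_to_text("my-bucket", "docs/report.pdf")
```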

For a detailed use case of Textract applied to ML, this video is very interesting: https://www.youtube.com/watch?v=WA0T8dy0aGQ

Then, to apply this to Llama 2 fine-tuning: https://www.anyscale.com/blog/fine-tuning-llama-2-a-comprehensive-case-study-for-tailoring-models-to-unique-applications
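Before fine-tuning, the extracted text has to be put into whatever format your chosen recipe expects; check its documentation, since field names differ between recipes. As a rough illustration only (assuming a domain-adaptation style setup where each training record is just a "text" field, which is one common convention), you could chunk the Textract output into paragraph-sized pieces and write one JSONL record per chunk:

```python
import json

def chunk_paragraphs(text: str, max_chars: int = 2000):
    """Split extracted text on blank lines and merge pieces up to max_chars."""
    chunk, size = [], 0
    for para in text.split("\n\n"):
        para = para.strip()
        if not para:
            continue
        if size + len(para) > max_chars and chunk:
            yield "\n\n".join(chunk)
            chunk, size = [], 0
        chunk.append(para)
        size += len(para)
    if chunk:
        yield "\n\n".join(chunk)

# `text` is the string produced by the Textract step above
# (or load it from a file you saved earlier).
text = open("extracted_text.txt").read()

with open("train.jsonl", "w") as f:
    for chunk in chunk_paragraphs(text):
        f.write(json.dumps({"text": chunk}) + "\n")
```

Upload the resulting train.jsonl (or train.txt, depending on the recipe) to S3 so the training job can read it.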

Finally, to do that fine-tuning on SageMaker: https://www.linkedin.com/pulse/enhancing-language-models-qlora-efficient-fine-tuning-vraj-routu

You have a SageMaker notebook for it: https://github.com/philschmid/huggingface-llama-2-samples/blob/master/training/sagemaker-notebook.ipynb
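Since you mention JumpStart specifically: as a minimal sketch (the model_id and hyperparameter names are assumptions to verify against the current SageMaker JumpStart documentation), launching a fine-tuning job on the prepared S3 data could look like this:

```python
from sagemaker.jumpstart.estimator import JumpStartEstimator

estimator = JumpStartEstimator(
    model_id="meta-textgeneration-llama-2-7b",   # verify the current JumpStart model_id
    environment={"accept_eula": "true"},          # Llama 2 requires accepting the EULA
)

# Hyperparameter names depend on the JumpStart model version; these are examples.
estimator.set_hyperparameters(instruction_tuned="False", epoch="3")

# Point the training channel at the S3 prefix holding your prepared data.
estimator.fit({"training": "s3://my-bucket/llama2-finetune/"})

# Deploy the fine-tuned model to an endpoint for question answering.
predictor = estimator.deploy()
```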

Best,

Didier

AWS Expert
Answered 9 months ago
