How to fine-tune the Llama 2 7B model from JumpStart using PDF data


I have several PDFs consisting of paragraphs of text, and I need to fine-tune the Llama 2 7B model so that I can ask questions about their content. Earlier, I tried Llama 2 7B Chat, providing the data by extracting text from the PDFs with LangChain.

Now I would like to fine-tune the Llama 2 7B model itself. Can someone guide me on how to fine-tune it with PDF data: what is the correct format for preprocessing the data, and how do I pass the data to the fine-tuning job?

1 Answer

Hi, the optimal path is to use Amazon Textract to convert your PDFs back to text and then fine-tune your model on that text.

Amazon Textract service page: https://aws.amazon.com/textract/

Textract developer guide: https://docs.aws.amazon.com/textract/latest/dg/what-is.html

For a detailed use case of Textract applied to ML, this video is very helpful: https://www.youtube.com/watch?v=WA0T8dy0aGQ
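
As a rough sketch of that first step, assuming your PDF is already in an S3 bucket (the bucket name, key, and region below are placeholders), the asynchronous Textract text-detection API can be called from boto3 roughly like this:

```python
import time
import boto3

# Sketch: extract plain text from a multi-page PDF stored in S3 using
# Amazon Textract's asynchronous text-detection API.
# Bucket, key, and region below are placeholders -- replace with your own.
textract = boto3.client("textract", region_name="us-east-1")

job = textract.start_document_text_detection(
    DocumentLocation={"S3Object": {"Bucket": "my-bucket", "Name": "docs/my-file.pdf"}}
)
job_id = job["JobId"]

# Poll until the job finishes (a production pipeline would use SNS notifications instead).
while True:
    status = textract.get_document_text_detection(JobId=job_id)["JobStatus"]
    if status == "SUCCEEDED":
        break
    if status == "FAILED":
        raise RuntimeError("Textract text-detection job failed")
    time.sleep(5)

# Walk the paginated results and keep the text of every LINE block.
lines, next_token = [], None
while True:
    kwargs = {"JobId": job_id}
    if next_token:
        kwargs["NextToken"] = next_token
    page = textract.get_document_text_detection(**kwargs)
    lines += [b["Text"] for b in page["Blocks"] if b["BlockType"] == "LINE"]
    next_token = page.get("NextToken")
    if not next_token:
        break

with open("extracted.txt", "w") as f:
    f.write("\n".join(lines))
```

The synchronous DetectDocumentText API also exists, but the asynchronous job shown here is generally the safer choice for multi-page PDFs stored in S3.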

Then, to apply this to Llama 2 fine-tuning: https://www.anyscale.com/blog/fine-tuning-llama-2-a-comprehensive-case-study-for-tailoring-models-to-unique-applications
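
On the preprocessing format: for question-answering over document content, domain-adaptation style fine-tuning on raw text is a common approach. Below is a minimal sketch, assuming the fine-tuning recipe accepts JSON Lines with a "text" field (the chunk size and file names are arbitrary choices for illustration; check the notebook linked below for the exact format your recipe expects):

```python
import json

# Sketch: turn the extracted text into JSON Lines with a "text" field.
def chunk_text(text, max_words=512):
    """Split the raw text into roughly fixed-size word chunks."""
    words = text.split()
    for i in range(0, len(words), max_words):
        yield " ".join(words[i : i + max_words])

# "extracted.txt" is the file written in the Textract sketch above.
with open("extracted.txt") as src, open("train.jsonl", "w") as dst:
    for chunk in chunk_text(src.read()):
        dst.write(json.dumps({"text": chunk}) + "\n")
```

Upload train.jsonl to an S3 prefix; that prefix then becomes the training channel for the fine-tuning job.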

Finally, to do that fine-tuning on SageMaker: https://www.linkedin.com/pulse/enhancing-language-models-qlora-efficient-fine-tuning-vraj-routu

There is also a SageMaker notebook for it: https://github.com/philschmid/huggingface-llama-2-samples/blob/master/training/sagemaker-notebook.ipynb
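
Since you mention JumpStart, launching the fine-tuning job itself can also be done with the SageMaker Python SDK's JumpStartEstimator. This is only a sketch: the model_id, hyperparameter names, and S3 URI below are assumptions to confirm against the JumpStart model card and the notebook above.

```python
from sagemaker.jumpstart.estimator import JumpStartEstimator

# Sketch: launch a Llama 2 7B fine-tuning job through JumpStart.
# The model_id, hyperparameter names, and S3 URI are assumptions -- confirm
# them against the JumpStart model card and the example notebook above.
estimator = JumpStartEstimator(
    model_id="meta-textgeneration-llama-2-7b",
    environment={"accept_eula": "true"},  # the Llama 2 EULA must be accepted
)
estimator.set_hyperparameters(
    instruction_tuned="False",  # domain adaptation on raw text, not instruction data
    epoch="3",
)
estimator.fit({"training": "s3://my-bucket/llama2-finetune/train/"})
```

After training completes, the estimator can be deployed to an endpoint and queried about the PDF content.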

Best,

Didier

AWS Expert
answered 9 months ago
