How can we get an LLM to unlearn information?


How can we get an open-source LLM to unlearn information, for example because the information is outdated or our customer has a different view of the world?

AWS
asked 10 months ago · 610 views
2 Answers

You would have to fine-tune it with new, updated information.

Here is a great blog on how to fine-tune a foundation model: https://aws.amazon.com/blogs/machine-learning/domain-adaptation-fine-tuning-of-foundation-models-in-amazon-sagemaker-jumpstart-on-financial-data/
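To make that concrete, here is a minimal sketch of causal-LM fine-tuning using the open-source Hugging Face transformers library (a simpler stand-in for the SageMaker JumpStart flow shown in the blog). The model name gpt2, the file updated_facts.txt, and the hyperparameters are illustrative placeholders, not recommendations:

```python
# Minimal sketch: continue training a small causal LM on a corpus of
# corrected/updated statements so it favors the new information.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "gpt2"  # placeholder; swap in your open-source LLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# "updated_facts.txt" is a hypothetical file, one corrected statement per line.
dataset = load_dataset("text", data_files={"train": "updated_facts.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="out",
        num_train_epochs=3,
        per_device_train_batch_size=4,
    ),
    train_dataset=tokenized,
    # mlm=False gives standard next-token (causal LM) labels
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Running a few epochs over your corrected corpus nudges the model toward the new information; it overlays new behavior rather than surgically deleting old facts.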

AWS
answered 10 months ago

Especially with very large LLMs, the built-in knowledge base is massive. It may not be up to date, and you may not like the answers it contains.

Frequently we use fine-tuning (see Bob's answer) to teach the model the shape of questions and answers. Done right, fine-tuning can help the model learn when to say "I don't know".

However, for large models it can be hard to change or add to that knowledge base. For many use cases, customers instead use Retrieval Augmented Generation (RAG): you retrieve the facts from another data source and then use the capabilities of an LLM to comprehend that context and formulate an answer based on it. The retrieval source can be kept up to date and limited to information you trust, such as your own FAQ.

Blogpost on RAG
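To illustrate the pattern, below is a minimal, self-contained sketch of RAG. The FAQ entries are made up, and the word-overlap retrieval is a toy stand-in for the embedding-based vector search (e.g., Amazon OpenSearch) a production system would use:

```python
# Minimal RAG sketch: retrieve the most relevant trusted snippet and
# place it in the prompt as context for the LLM to answer from.
faq = [
    "Our return window is 60 days as of 2024.",  # illustrative FAQ entries
    "Support is available 24/7 via chat.",
]

def retrieve(question: str, documents: list[str]) -> str:
    # Naive relevance score: number of words shared with the question.
    q_words = set(question.lower().split())
    return max(documents, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question: str) -> str:
    context = retrieve(question, faq)
    return (
        "Answer using only the context below. If the context does not "
        'contain the answer, say "I don\'t know".\n'
        f"Context: {context}\n"
        f"Question: {question}\n"
        "Answer:"
    )

# The assembled prompt is then sent to the LLM of your choice.
print(build_prompt("How long is the return window?"))
```

Because the answer is grounded in the retrieved context, updating the FAQ immediately changes what the model says, with no retraining needed.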

AWS
mkamp
answered 10 months ago
