How can we get an LLM to unlearn information?


How can we get an open-source LLM to unlearn information? For example, because the information is outdated, or because our customer has a different view of the world.

AWS
asked 9 months ago · 586 views
2 Answers

You would have to fine-tune it with new, updated information.

Here is a great blog post on how to fine-tune a foundation model: https://aws.amazon.com/blogs/machine-learning/domain-adaptation-fine-tuning-of-foundation-models-in-amazon-sagemaker-jumpstart-on-financial-data/
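Assuming a Hugging Face-based setup (the JumpStart blog above shows the managed SageMaker path), here is a minimal sketch of what such a fine-tuning run can look like. The model name, the example data, and the hyperparameters are all placeholders, not recommendations:

```python
# Minimal sketch: fine-tune an open-source causal LM on updated facts.
# "gpt2" and the example texts are placeholders; substitute your own model
# and a real dataset of corrected/updated knowledge.
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from datasets import Dataset

model_name = "gpt2"  # placeholder open-source model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical "updated knowledge" examples meant to override stale facts.
examples = [
    "Q: Who is the current CEO of ExampleCorp? A: Jane Doe.",
    "Q: What is ExampleCorp's flagship product? A: The Widget 2000.",
]
dataset = Dataset.from_dict({"text": examples})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

args = TrainingArguments(
    output_dir="finetuned-model",
    num_train_epochs=3,
    per_device_train_batch_size=2,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    # mlm=False -> standard causal (next-token) language modeling objective
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Note that fine-tuning on new facts does not guarantee the old facts are gone; it biases the model toward the new answers, which is why the second answer below discusses RAG as an alternative.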

AWS
answered 9 months ago

Especially with very large LLMs, the built-in knowledge base is massive. It may not be up to date, and you may not like the answers in there.

Frequently we use fine-tuning (see Bob's answer) to teach the model the shape of questions and answers. Done right, fine-tuning can also help the model learn when to say "I don't know".
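To illustrate what teaching the model the "shape" of answers can look like, here is a hypothetical slice of instruction-tuning data. The pairs and names are invented; the point is the last pair, where a refusal is itself the correct answer shape:

```python
# Hypothetical instruction-tuning pairs. The ordinary Q/A pairs teach the
# desired answer format; the last pair teaches the model that a refusal is
# sometimes the correct response.
training_pairs = [
    {"instruction": "What is the warranty on the Widget 2000?",
     "response": "The Widget 2000 ships with a two-year warranty."},
    {"instruction": "Who is the current CEO of ExampleCorp?",
     "response": "Jane Doe is the current CEO of ExampleCorp."},
    {"instruction": "What will ExampleCorp's stock price be next year?",
     "response": "I don't know. I cannot predict future stock prices."},
]
```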

However, for large models it may be hard to change the knowledge base or add to it. For many use cases, customers instead use Retrieval Augmented Generation (RAG). With RAG you retrieve the facts from another data source and then use the capabilities of an LLM to comprehend that source and formulate an answer based on that context. The retrieval source can be kept up to date and restricted to information that you trust, such as your own FAQ.
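As a rough illustration of the RAG pattern (not any specific AWS service API), here is a toy sketch. The FAQ entries are invented, the keyword-overlap retriever stands in for a real embedding-based vector store, and the final prompt would be sent to the LLM of your choice:

```python
# Toy RAG sketch: retrieve trusted context, then prompt the LLM to answer
# only from that context. A production system would use an embedding-based
# retriever instead of keyword overlap.

FAQ = [
    "Our support hours are 9am-5pm CET, Monday through Friday.",
    "The Widget 2000 ships with a two-year warranty.",
    "Returns are accepted within 30 days of purchase.",
]

def retrieve(question: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by word overlap with the question (toy retriever)."""
    q_words = set(question.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_prompt(question: str, context: list[str]) -> str:
    """Instruct the model to answer only from the retrieved context."""
    joined = "\n".join(f"- {c}" for c in context)
    return (f"Answer the question using only the context below. "
            f"If the answer is not in the context, say \"I don't know\".\n\n"
            f"Context:\n{joined}\n\nQuestion: {question}\nAnswer:")

question = "How long is the warranty on the Widget 2000?"
prompt = build_prompt(question, retrieve(question, FAQ))
print(prompt)  # send this prompt to the LLM of your choice
```

The key design point is the instruction in the prompt: the model is told to answer only from the retrieved context and to say "I don't know" otherwise, which keeps answers grounded in information you trust.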

Blog post on RAG

AWS
mkamp
answered 9 months ago
