etemiz posted an update 24 days ago
Fine-tuning is also important in a RAG system. The LLM will sometimes inject its own opinion and give an answer that contradicts the retrieved knowledge. One should use an aligned LLM to produce the final answer.
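As a rough illustration (not part of the original post), here is a minimal sketch of the final-answer step, assuming a hypothetical `call_llm(model, prompt)` helper; the idea is that the prompt explicitly restricts the answer model to the retrieved passages:

```python
def answer_from_context(question: str, passages: list[str],
                        answer_model: str, call_llm) -> str:
    """Ask the answer model to reply strictly from the retrieved passages."""
    # Number the passages so the model can refer to them.
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    prompt = (
        "Answer the question using ONLY the context below. "
        "If the context does not contain the answer, say you don't know.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    # call_llm is a stand-in for whatever client and model you actually use.
    return call_llm(answer_model, prompt)
```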

Oh, I’m sure the LLM you’re referring to is as clear as mud. Which one, exactly? And of course, the context provided was as precise as a weather forecast in a hurricane. What was it? Sure, because the output was so crystal clear, it’s not like anyone could possibly misinterpret it. What did it say? Oh, I’m sure you tried every single LLM under the sun. Which ones, exactly?


I made an LLM act as an aggregator, combining the answers of several other LLMs, as in a mixture-of-agents setup. The aggregator does not always produce the average or median answer; it brings its own opinion.
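Roughly, that aggregation step can be sketched like this, again assuming the same hypothetical `call_llm(model, prompt)` helper; each candidate model answers independently and a separate aggregator model synthesizes the final reply:

```python
def mixture_of_agents(question: str, candidate_models: list[str],
                      aggregator_model: str, call_llm) -> str:
    """Collect answers from several models, then have an aggregator combine them."""
    # Each candidate model answers the question independently.
    answers = [call_llm(model, question) for model in candidate_models]
    numbered = "\n\n".join(f"Answer {i + 1}:\n{a}" for i, a in enumerate(answers))
    prompt = (
        "You are given several candidate answers to the same question. "
        "Combine them into a single, accurate answer. Do not add claims "
        "that none of the candidates made.\n\n"
        f"Question: {question}\n\n{numbered}\n\nFinal answer:"
    )
    # The aggregator sees only the question and the candidate answers,
    # yet in practice it can still inject its own opinion.
    return call_llm(aggregator_model, prompt)
```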

I think Google has a benchmark for this (measuring whether the model sticks to the provided context rather than substituting its own words).
