Harrison A.

Aiding LLMs in Explainability: Vector Databases

Recently, Stanford professor Christopher Potts gave a presentation discussing LLMs. In it, he addressed their lack of explainability (something XAIF has covered in other blog posts). One approach to mitigating this weakness is a concept called Vector Databases.


As a form of retrieval augmentation, Vector DBs are a parallel solution that runs alongside the LLM and stores indexed sources the model can pull from. Rather than the model simply absorbing these sources during training, each source is linked back to its origin through a series of indices and unique IDs, allowing for citations and tracing answers back to the underlying information.
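
To make this concrete, here is a minimal sketch of an in-memory vector store where every source keeps a unique ID for back tracing. The `embed` function below is a toy bag-of-words stand-in for a real embedding model, and all names (`VectorStore`, `doc_id`, etc.) are illustrative assumptions, not part of any particular library.

```python
import numpy as np

# Toy embedding: a hashed bag-of-words vector, standing in for a real
# learned embedding model (which a production vector DB would use instead).
def embed(text: str, dim: int = 64) -> np.ndarray:
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

class VectorStore:
    """Minimal in-memory vector store: each source keeps a unique ID
    so retrieved passages can be traced back to where they came from."""

    def __init__(self):
        self.ids, self.texts, self.vectors = [], [], []

    def add(self, doc_id: str, text: str) -> None:
        # Store the source text alongside its ID and its embedding.
        self.ids.append(doc_id)
        self.texts.append(text)
        self.vectors.append(embed(text))

    def search(self, query: str, k: int = 3):
        # Embed the query and rank stored sources by cosine similarity
        # (vectors are already unit-norm, so a dot product suffices).
        q = embed(query)
        scores = [float(q @ v) for v in self.vectors]
        top = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
        return [(self.ids[i], self.texts[i], scores[i]) for i in top]
```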


As an example, say a user interacts with an LLM chatbot as they would today. In parallel with the generative response the LLM gives, a retriever model would search the indexed sources for the material behind the answer. The LLM would then cite its sources in real time, either in a separate visual or in the response itself, say with footnotes.
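
Building on the sketch above, the snippet below shows one way the retrieved sources could be attached to a response as footnote-style citations. The `generate_answer` function is a placeholder assumption for the actual LLM call, not a real API.

```python
def generate_answer(query: str, passages: list[str]) -> str:
    # Placeholder for the LLM call: a real system would prompt the model
    # with the query plus the retrieved passages as grounding context.
    return f"(model answer to {query!r}, grounded in {len(passages)} passages)"

def answer_with_citations(store: VectorStore, query: str, k: int = 3) -> str:
    hits = store.search(query, k=k)  # retrieve the most likely sources
    answer = generate_answer(query, [text for _, text, _ in hits])
    # Cite each retrieved source by its unique ID, footnote-style.
    footnotes = "\n".join(
        f"[{n}] {doc_id}: {text[:60]}..."
        for n, (doc_id, text, _) in enumerate(hits, 1)
    )
    return f"{answer}\n\nSources:\n{footnotes}"

# Example usage
store = VectorStore()
store.add("doc-001", "Vector databases index embeddings of source documents.")
store.add("doc-002", "Retrieval augmentation lets an LLM cite where its information came from.")
print(answer_with_citations(store, "How can an LLM cite its sources?"))
```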


For more information on Vector DBs, see here.
