GraphRAG: LLM-Derived Knowledge Graphs for RAG
Science & Technology
Watch my colleague Jonathan Larson present on GraphRAG!
GraphRAG is a research project from Microsoft exploring the use of knowledge graphs and large language models for enhanced retrieval augmented generation. It is an end-to-end system for richly understanding text-heavy datasets by combining text extraction, network analysis, LLM prompting, and summarization.
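The indexing flow described above (extract, build a graph, find communities, summarize) can be sketched roughly as follows. This is a toy illustration, not the actual GraphRAG code: `extract_triples` is a stand-in for an LLM extraction prompt, and connected components stand in for the Leiden community detection the paper actually uses.

```python
from collections import defaultdict

def extract_triples(chunk):
    # Stand-in for an LLM extraction prompt (hardcoded, hypothetical output).
    samples = {
        "doc1": [("GraphRAG", "combines", "knowledge graphs"),
                 ("GraphRAG", "combines", "LLMs")],
        "doc2": [("network analysis", "groups", "entities"),
                 ("summarization", "condenses", "communities")],
    }
    return samples[chunk]

# 1. Extract triples from each text chunk and build an undirected adjacency list.
graph = defaultdict(set)
for chunk in ("doc1", "doc2"):
    for subj, _rel, obj in extract_triples(chunk):
        graph[subj].add(obj)
        graph[obj].add(subj)

# 2. Find communities (here: connected components via depth-first traversal,
#    a toy substitute for the Leiden algorithm GraphRAG uses).
def components(graph):
    seen, result = set(), []
    for start in graph:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(graph[node] - comp)
        seen |= comp
        result.append(comp)
    return result

# 3. "Summarize" each community (a real system would prompt an LLM here).
for comp in components(graph):
    print(f"Community ({len(comp)} entities): {sorted(comp)}")
```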
For more details on GraphRAG check out aka.ms/graphrag
Read the blogpost: www.microsoft.com/en-us/resea...
Check out the arxiv paper: arxiv.org/abs/2404.16130
And follow me on other platforms so you’ll never miss out on my updates!
💌 Sign up for my free AI newsletter Chaos Theory: alexchao.substack.com/subscribe
🐦 Follow me on Twitter / alexchaomander
📷 And Instagram! / alexchaomander
🎥 And TikTok! / alexchaomander
👥 Connect with me on LinkedIn / alexchao56
Comments: 85
What scenarios do you see GraphRAG being useful for?
@jtjames79
22 days ago
Using GraphRAG to make GraphRAGs. Because AI should be able to go down the rabbit hole.
@alexanderroodt5052
22 days ago
Profiling people
@Sergio-rq2mm
22 days ago
Anywhere relationships are important: abstract associations between datasets, perhaps laws, policies, etc.; things that are very narrative-driven, such as stories. Non-typical datasets, basically.
@alexanderroodt5052
22 days ago
@@Sergio-rq2mm I choose to go the 1984 route
@ktbumjun
21 days ago
Bible study
This is basically causal grounding. We get semantic symbolic reasoning from an architectural perspective. Add a powerful model… something very compelling and AGI-like would be the result, I'd assume (plus MCTS sampling, lol). Causal grounding is a huge hole in current models. This is dope research. Kudos.
Looking forward to the code for this!
This was so well explained, nicely done. My first thoughts: 1. I'd be curious to see benchmarks with cheaper LLMs. From my experience, even much smaller models like llama-3-8b can come close to gpt-4 in this use case (entity and relationship extraction). A little fine-tuning could likely match or surpass gpt-4 for much cheaper. 2. I wonder how this could be augmented with data sources which already have some concept of relationships, i.e. Wikipedia, dictionaries, hypertext.
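The extraction step the comment refers to is typically one prompt per text chunk, with the model's delimited output parsed into triples. A minimal sketch with a stubbed model call — the prompt wording, delimiter, and `call_llm` function are illustrative assumptions, not the paper's actual template:

```python
PROMPT = """Extract entities and relationships from the text.
Return one per line as: entity1 | relation | entity2

Text: {text}"""

def call_llm(prompt):
    # Stub standing in for a call to gpt-4, llama-3-8b, or a fine-tuned model.
    return ("Microsoft | develops | GraphRAG\n"
            "GraphRAG | builds | knowledge graphs")

def extract(text):
    raw = call_llm(PROMPT.format(text=text))
    triples = []
    for line in raw.splitlines():
        parts = [p.strip() for p in line.split("|")]
        if len(parts) == 3:          # skip malformed lines defensively
            triples.append(tuple(parts))
    return triples

print(extract("Microsoft's GraphRAG builds knowledge graphs from text."))
```

Because the output format is so simple, swapping in a cheaper model only means changing `call_llm`; the parsing stays identical.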
@mrrohitjadhav470
17 days ago
I was having the same thoughts 🙂
@Rkcuddles
1 day ago
GPT-4 not understanding these deep relationships is by far the biggest bottleneck in my using it. This is super exciting.
I've been doing work in the area of creating knowledge graphs for codebases. The nice thing about generating them for code (as opposed to text) is that you don't have to rely on LLM calls to recognize and generate relationships, but you can utilize language servers and language parsers for that.
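For Python source, that deterministic extraction can be done with the standard-library `ast` module, no LLM call needed. A minimal sketch that turns function definitions and their call sites into graph edges (the sample source is invented):

```python
import ast

source = '''
def load(path):
    return open(path).read()

def process(path):
    data = load(path)
    return data.upper()
'''

tree = ast.parse(source)
edges = []
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        # Record an edge for every direct call by name inside this function.
        for sub in ast.walk(node):
            if isinstance(sub, ast.Call) and isinstance(sub.func, ast.Name):
                edges.append((node.name, "calls", sub.func.id))

print(edges)  # [('load', 'calls', 'open'), ('process', 'calls', 'load')]
```

A real tool would use a language server or tree-sitter to cover more languages and resolve method calls, but the principle is the same: the structure is already in the code, so the graph comes out exact rather than LLM-approximated.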
Glad I didn't skip this and watched the video. Thanks for sharing the knowledge. Seems very impressive.
This seems very powerful. Thanks for sharing it and explaining it well.
That final streamlit app was awesome!!
That last 5min of the video was epic!!!!! Dude amazing stuff!!! Also thanks for the tip on having the LLM generate the graph
I really like the addition of hierarchical agglomerative summarization, which gives holistic answers similar to the RAPTOR RAG strategy but with the better data representation of knowledge graphs. I'll need to read the paper to understand whether embeddings are used at all in this, and whether relationships are labeled or just have a strength value.
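The hierarchical part can be sketched as a bottom-up map-reduce over a community tree: summarize the leaf communities first, then summarize the summaries. This is a toy sketch, not the paper's implementation; `summarize` is a stub where a real system would prompt an LLM, and the tree structure is invented.

```python
def summarize(texts):
    # Stub: a real implementation would prompt an LLM with the joined texts.
    return "SUMMARY(" + " + ".join(texts) + ")"

# A two-level community hierarchy: leaves are lists of raw text chunks,
# internal nodes are lists of child nodes (structure is illustrative only).
tree = [
    ["chunk about entity A", "chunk about entity B"],   # community 1
    ["chunk about entity C"],                           # community 2
]

def summarize_tree(node):
    if all(isinstance(child, str) for child in node):   # leaf community
        return summarize(node)
    # Internal node: summarize each child first, then reduce the summaries.
    return summarize([summarize_tree(child) for child in node])

print(summarize_tree(tree))
```

The recursion keeps every intermediate summary within a fixed context budget, which is what lets "global" questions like top themes be answered from the root-level summaries.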
Please let me play with this! Impressive work !
While RAG is a good process for mitigating hallucinations, GraphRAG makes the retrieved context richer with its relationship-building techniques. The expense is worth it. Is the result set then re-graphed, or will the same query run twice be just as expensive?
Fabulous work! Wondering how long it takes to build the whole vector DB, and how many tokens it will take?
I really enjoyed this video! What tool did you use to visualise the podcast graph?
This could be a game-changer in both public and private-sector intelligence analysis (as I am sure you figured out.) Looking forward to additional info - but what about the private dataset's format? Is it vectorized? If so, can we assume that there are optimal and sub-optimal approaches? (IOW, is it fair to assume vectorization can significantly impact GraphRAG's performance?)
May I know the underlying technology used for hosting the graph database? Was it Cosmos DB?
@nas8318
23 days ago
Likely neo4j
@alexchaomander
22 days ago
It's graph database agnostic! You can use your choice of graph DB. The technique is general enough to support multiple backends.
@LadharAmir
17 days ago
It's not about the database, it's about the methodology. RDF or LPG (labeled property) graphs should both work.
Great work! I was thinking of using a system like this to build the memory of an AI companion as it talks to the user. In this case the knowledge graph would start empty and get built dynamically with every conversation. Do you see this as a good use case for GraphRAG?
Hi, I am working on the same problem — plain vector-search RAG isn't good enough. Can you please share the code? A tutorial would be even better!
This is outstanding stuff!
What is technology stack for that?
This is just brilliant
Does the repeated term "regular RAG" refer to setups using vector databases?
Is there no standard comparison approach? For example, one could take academic literature reviews, collect their references, throw in some more, and ask the LLM system, then compare the result with the original review. There might be summaries available in the accounting and legal worlds that could be used as well.
@alexchaomander
22 days ago
Comparison is tough! It's another area of research we're heavily invested in. But I like the ideas that you're bringing up!
@sathyanarayanbalaji2971
21 days ago
True, validation would be required to compare the results.
Is the rest of this conversation available somewhere, @alexchaomander?
How is this any different from Self-Organizing Maps for RAG?
Is there an Open source implementation of this or how could I build it into my own app?
Seems like the video was incomplete. Is there another part?
Excuse me if I'm wrong… listened to this while exercising… but the main issue explored here for each question was that questions like "what are the top themes?" cannot be answered by the LLM with vanilla RAG. Is this correct? If so, then as context sizes grow large enough, this will become less necessary, right? Furthermore, by introducing a graph that has communities premised on topics/themes or whatever you decide, doesn't that reduce the degrees of freedom of your system?
When will it be open sourced? :)
pls provide the code
@alexchaomander
22 days ago
Code will be shared soon!
@SamuelJunghenn
22 days ago
+1 🙏
@En1Gm4A
22 days ago
@@alexchaomander Great! I have signed up for your newsletter. Will you announce the code release there?
@Lutz1985
20 days ago
le dot
@bejn5619
20 days ago
+1
Great, this is something I also thought about a while back when AI had difficulty finding relevant information: basically, have filters that determine how the AI maneuvers the training data depending on what is prompted and what is relevant. I thought about this after reading a paper on the discovery of a new hybrid brain-cell type that acted as a trigger that could turn pathways on and off. So the context in the prompt is what's important, because that decides which tags in the training data should be turned on and off, which in the end gives you a unique pathway for the AI to retrieve data.
@DefenderX
19 days ago
Also, the next step would be to create overarching filters between several AI agents. After you have all this, the next step is for AI to implement statistics in its reasoning.
GraphRAG Perfect !
is there source code anywhere for this?
Hi, are you going to share the code?
Oh hey, that's the Obsidian style of note-making. It's interesting that AI can actually remember better with the help of a Zettelkasten, like humans do! Can't wait until Japanese researchers conclude their research using chemical reactions in tubes to emulate emotions, so machines can feel emotions through chemical reactions, like humans do… to me, emotion is also the best way to learn and remember things.
@RickySupriyadi
5 days ago
So what if, instead of a tube of chemical reactions, important information and frequently asked questions had an emotional-cue graph to create some kind of importance profile? That profile would then serve as a mark of where the AI is an expert (strong retrieval in a specific field, leading toward a future of MoE).
Would be a great tool for rapid and more reliable meta analysis
To understand semantic search, first you need to understand how HNSW works; then you realize it's no wonder it doesn't work. I ended up building a data structure that combines vector search and entities.
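A hybrid structure like the one described might combine a vector-similarity score with a hard entity filter. A toy sketch with pure-Python cosine similarity — the document IDs, vectors, and entity sets are all made up for illustration:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Each doc carries an embedding plus the entities it mentions.
docs = [
    {"id": "d1", "vec": [0.9, 0.1], "entities": {"GraphRAG", "Microsoft"}},
    {"id": "d2", "vec": [0.8, 0.3], "entities": {"HNSW"}},
    {"id": "d3", "vec": [0.1, 0.9], "entities": {"GraphRAG"}},
]

def hybrid_search(query_vec, required_entity, k=2):
    # Entity filter first (exact), then rank survivors by vector similarity.
    candidates = [d for d in docs if required_entity in d["entities"]]
    return sorted(candidates, key=lambda d: cosine(query_vec, d["vec"]),
                  reverse=True)[:k]

results = hybrid_search([1.0, 0.0], "GraphRAG")
print([d["id"] for d in results])  # entity filter drops d2 despite its high score
```

The point of the ordering is that approximate nearest-neighbor search alone can surface topically similar but entity-irrelevant chunks; the exact entity constraint removes that failure mode before ranking.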
But knowledge graphs are very slow to query. I wonder if we can encode those graphs in the GPT model by building graph transformers.
@damianlewis7550
23 days ago
I don’t think that’s the case. Optimized graph query engines can return results in milliseconds e.g. WikiMedia, Google etc. at a fraction of the computational cost of an LLM. The reason that GraphRAG is slow-ish is because the LLMs are slow.
@MrDonald911
23 days ago
Google, Facebook, and LinkedIn all use graph databases; they're actually much faster than relational DBs.
@nas8318
22 days ago
Slower than LLMs?
but don't you lose information in the process of making a knowledge graph, given how only a subset of the textual information is extracted and retained in the KG?
@computerrockstar2369
22 days ago
I don't think the LLM really needs the graph to make decisions. It's more valuable for human users to find related information.
@LadharAmir
17 days ago
You can use ETL to build your knowledge graph yourself from RDBMSs; then you will not lose information.
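The ETL idea above — deriving triples directly from a relational schema instead of LLM extraction — can be sketched with the standard-library sqlite3. The table and column names here are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, department TEXT)")
conn.executemany("INSERT INTO employees VALUES (?, ?)",
                 [("Ada", "Research"), ("Grace", "Engineering")])

# Each relationship-bearing column becomes a typed edge in the graph;
# nothing is paraphrased by a model, so no information is lost.
triples = [(name, "works_in", dept)
           for name, dept in conn.execute(
               "SELECT name, department FROM employees ORDER BY rowid")]
print(triples)
```

Foreign-key columns would map to edges the same way, with the relation name taken from the schema rather than guessed by an LLM.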
implementations?
Police, FBI, CIA, etc... investigations (CSI AI)
What's a RAG?
@IlyaDenisov
18 days ago
Retrieval Augmented Generation (use that as an input to your favourite search engine or AI companion)
The content is very political.