Excited to announce another major upgrade to WhyHow.AI’s Knowledge Graph SDK: vector chunks are now tied to graph nodes automatically, for a more deterministic and richer context window. Check out how we do it, why we did it, and an example benchmark showing the increased completeness of the answers.

Tired of getting only single-word triples back from your knowledge graph? WhyHow.AI’s latest upgrade adds vector chunk linking, letting you use the graph structure to determine which raw vector chunks to return to the context window, combining the best of knowledge graphs and vector search.

“While the triples in a Knowledge Graph are useful for providing specific information that semantic similarity was unable to retrieve, we also wanted leeway in the information represented and retrieved from the graph: to include the surrounding words and to retrieve the relevant raw vector chunk tied to that graph node as well. By tying vector chunks to a knowledge graph, we get the advantages of both vector and graph search.” - WhyHow.AI Design Partner

WhyHow.AI builds workflow tools for data orchestration and graph creation, and we work on top of any data extraction model you want to bring. In this case, we work on top of OpenAI, Neo4j, and Pinecone, and we will be supporting the most popular data extraction models, LLMs, and graph and vector databases. https://lnkd.in/eEJdUNPi
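
For the curious, here is a rough sketch of the pattern described above (a minimal illustration, not the WhyHow.AI SDK API): it assumes each Neo4j node carries a chunk_ids property whose values match vector IDs in a Pinecone index, and that the raw chunk text is stored in each vector’s metadata under a "text" key. The graph traversal decides which nodes matter, and the linked chunks are then fetched and passed to the context window.

```python
# Minimal sketch (assumed names, not the WhyHow.AI SDK API): use the graph to
# decide which chunks to return, then pull the raw chunk text from Pinecone.
from neo4j import GraphDatabase
from pinecone import Pinecone

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
index = Pinecone(api_key="YOUR_PINECONE_KEY").Index("chunks")  # hypothetical index name


def chunks_for_entity(entity_name: str) -> list[str]:
    """Return the raw chunk texts linked to an entity's graph neighborhood."""
    cypher = (
        "MATCH (n {name: $name})--(m) "
        "RETURN n.chunk_ids AS n_ids, m.chunk_ids AS m_ids"
    )
    chunk_ids: set[str] = set()
    with driver.session() as session:
        for record in session.run(cypher, name=entity_name):
            chunk_ids.update(record["n_ids"] or [])
            chunk_ids.update(record["m_ids"] or [])
    if not chunk_ids:
        return []
    # The original chunk text is assumed to live in each vector's metadata.
    fetched = index.fetch(ids=list(chunk_ids))
    return [v.metadata["text"] for v in fetched.vectors.values()]
```

The design point is that the graph stays deterministic about which chunks come back, while the chunks themselves preserve the surrounding wording that bare triples would drop.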
Have you guys built (or fine-tuned) an internal model yet for general knowledge base and/or query to Cypher?
Kalash Shah check this
Can your vector chunks be categorized (imposing a schema)?