7. Creating RAG apps with Semantic Kernel and Kernel Memory
In this video, we will explore how to create RAG apps using Semantic Kernel and Kernel Memory. Kernel Memory makes ingestion, partitioning documents into chunks of memories, and retrieval of information extremely easy.
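A minimal sketch of the ingest-then-ask flow described above, assuming the Microsoft.KernelMemory NuGet package, an OpenAI API key in the OPENAI_API_KEY environment variable, and a hypothetical file name and document id:

```csharp
using Microsoft.KernelMemory;

// Build a serverless (in-process) Kernel Memory instance.
// WithOpenAIDefaults wires up OpenAI for both embeddings and text generation.
var memory = new KernelMemoryBuilder()
    .WithOpenAIDefaults(Environment.GetEnvironmentVariable("OPENAI_API_KEY"))
    .Build<MemoryServerless>();

// Ingest a document: Kernel Memory partitions it into chunks,
// generates embeddings, and stores them for later retrieval.
await memory.ImportDocumentAsync("manual.pdf", documentId: "doc001");

// Ask a question: relevant chunks are retrieved and passed to the LLM,
// which generates a grounded answer.
var answer = await memory.AskAsync("What does the manual say about installation?");
Console.WriteLine(answer.Result);
```

This is a sketch under the assumptions above, not the exact code from the video; swap in your own file and credentials.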
#artificialintelligence #semantickernel #aiconsultant #chatgpt #chatgptplugins #RAGApps
Comments: 5
Thanks a ton for this easy-to-understand tutorial, Supreet! I was struggling with the env part and this was helpful. However, the code fails at ImportDocumentAsync saying "pipeline start failed". Below is the error message:

Microsoft.KernelMemory.Pipeline.BaseOrchestrator[0]
Pipeline start failed
Azure.RequestFailedException: Service request failed.
Status: 303 (See Other)

I hope I am just missing something. Is the component trying to reach Azure servers for some validation? I understand serverless to mean "everything happens locally". Please correct me if I am wrong.
Is Kernel Memory internally using an LLM, or is the LLM nowhere in the picture here? I am trying to understand: after retrieving the data from the PDF, does it automatically pass it to the LLM?
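To the question above: retrieval itself uses an embedding model, and whether a chat LLM is then invoked depends on which call you make. A hedged sketch of the two paths, assuming an OpenAI-backed serverless instance and a hypothetical PDF:

```csharp
using Microsoft.KernelMemory;

var memory = new KernelMemoryBuilder()
    .WithOpenAIDefaults(Environment.GetEnvironmentVariable("OPENAI_API_KEY"))
    .Build<MemoryServerless>();

await memory.ImportDocumentAsync("report.pdf");

// SearchAsync: pure retrieval. Returns the matching chunks (citations);
// no chat LLM is called to generate text.
var results = await memory.SearchAsync("quarterly revenue");
foreach (var citation in results.Results)
    Console.WriteLine(citation.SourceName);

// AskAsync: retrieval + generation. The retrieved chunks are passed to the
// chat LLM, which synthesizes a natural-language answer automatically.
var answer = await memory.AskAsync("What was the quarterly revenue?");
Console.WriteLine(answer.Result);
```

So with AskAsync the hand-off from retrieved PDF chunks to the LLM happens inside Kernel Memory; with SearchAsync you get the raw chunks and can call an LLM (or not) yourself.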
Thank you so much for the video. However, I have a problem with the answer being cut off mid-sentence. Is that a token issue, or is it the fact that the pipeline to OpenAI is not SignalR? Hope you can help. Thanks
@mytube538
3 months ago
Finally found the answer: you can set the maximum number of tokens for the answer when configuring Kernel Memory:

var kernelMemory = new KernelMemoryBuilder(builder.Services)
    //...
    .WithSearchClientConfig(new() { AnswerTokens = 800 });

The default value for this property is 300.