diff --git a/docs/tutorials/langchainpy.md b/docs/tutorials/langchainpy.md
index b8f96bcc4..ac7129230 100644
--- a/docs/tutorials/langchainpy.md
+++ b/docs/tutorials/langchainpy.md
@@ -42,12 +42,13 @@
 text_splitter=RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)
 all_splits = text_splitter.split_documents(data)
 ```
-It's split up, but we have to find the relevant splits and then submit those to the model. We can do this by creating embeddings and storing them in a vector database. For now, we don't have embeddings built in to Ollama, though we will be adding that soon, so for now, we can use the GPT4All library for that. We will use ChromaDB in this example for a vector database. `pip install GPT4All chromadb`
+It's split up, but we have to find the relevant splits and then submit those to the model. We can do this by creating embeddings and storing them in a vector database. We can use Ollama directly to instantiate an embedding model. We will use ChromaDB in this example as our vector database. `pip install chromadb`
 
 ```python
-from langchain.embeddings import GPT4AllEmbeddings
+from langchain.embeddings import OllamaEmbeddings
 from langchain.vectorstores import Chroma
-vectorstore = Chroma.from_documents(documents=all_splits, embedding=GPT4AllEmbeddings())
+oembed = OllamaEmbeddings(base_url="http://localhost:11434", model="llama2")
+vectorstore = Chroma.from_documents(documents=all_splits, embedding=oembed)
 ```
 
 Now let's ask a question from the document. **Who was Neleus, and who is in his family?** Neleus is a character in the Odyssey, and the answer can be found in our text.
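
Review note: for context on how the changed hunk is used downstream in the tutorial, here is a minimal sketch of querying the `oembed`-backed vectorstore built above. The `Ollama` LLM wrapper and `RetrievalQA` chain are standard `langchain` APIs contemporary with this diff; the question, model name, and chain settings are illustrative assumptions, not part of this change.

```python
# Sketch, not part of this diff: assumes `vectorstore` from the hunk above
# and the langchain APIs contemporary with this change.
from langchain.llms import Ollama
from langchain.chains import RetrievalQA

question = "Who was Neleus, and who is in his family?"

# Retrieve the splits most similar to the question via the Ollama embeddings.
docs = vectorstore.similarity_search(question)
print(len(docs))

# Hand the retrieved context to the model through a retrieval QA chain.
ollama = Ollama(base_url="http://localhost:11434", model="llama2")
qachain = RetrievalQA.from_chain_type(ollama, retriever=vectorstore.as_retriever())
print(qachain({"query": question})["result"])
```

Because `Chroma.from_documents` now embeds with `OllamaEmbeddings`, the same running Ollama server handles both the embedding and generation steps here.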