Prepare Ollama and LanceDB

This is part of the Use RAG with Continue.dev series.

Prerequisites

I kept in line with the general prerequisites for the previous steps (Python 3, uv for package management). In addition, I will assume you have Ollama installed; check here for the different installation methods.

Setup

Ollama

Once Ollama is installed, you need to download the models which will be used:

  1. deepseek-coder-v2:latest - for the actual prompting
  2. mxbai-embed-large:latest - to generate and query embeddings

Note: I have not included a reranking model, mainly because the results I got were OK without one, and because Ollama itself does not support reranking models (at the time of writing).

You do this via ollama pull <model_name>:

ollama pull deepseek-coder-v2:latest
ollama pull mxbai-embed-large:latest

Now we have the models we need, and we can install the Ollama Python client:

uv add ollama
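As a quick sanity check of the client, here is a minimal sketch. The model names are the ones pulled above; the build_rag_prompt, ask, and embed helpers are my own illustration (not part of the ollama package), and the chat/embeddings calls assume a local Ollama server is running:

```python
def build_rag_prompt(question: str, chunks: list[str]) -> str:
    """Assemble a RAG prompt: retrieved code chunks as context, then the question."""
    context = "\n\n".join(f"[chunk {i}]\n{c}" for i, c in enumerate(chunks, 1))
    return f"Answer using only the context below.\n\n{context}\n\nQuestion: {question}"

def ask(question: str, chunks: list[str]) -> str:
    """Send a RAG prompt to the chat model (needs the ollama server running)."""
    import ollama  # local import so build_rag_prompt works without the server
    response = ollama.chat(
        model="deepseek-coder-v2:latest",
        messages=[{"role": "user", "content": build_rag_prompt(question, chunks)}],
    )
    return response["message"]["content"]

def embed(text: str) -> list[float]:
    """Turn a piece of text into an embedding vector."""
    import ollama
    return ollama.embeddings(model="mxbai-embed-large:latest", prompt=text)["embedding"]
```

With the server running, ask("What does foo do?", ["def foo(): return 42"]) sends the chunk plus the question to deepseek-coder-v2, and embed returns the vector we will later store in LanceDB.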

LanceDB

LanceDB is an SQLite-like vector database, in the sense that you don't need a dedicated server: it runs embedded in your application and stores its data on disk. I'm using it because the continue.dev documentation suggests it as easy to use. The commands I used are:

uv add lancedb
uv add scikit-learn
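To make the "vector database" part concrete: each row in LanceDB holds an embedding vector, and a query returns the rows whose vectors are most similar to the query's embedding. A pure-Python sketch of that idea, using cosine similarity (one common metric; scikit-learn exposes the same computation as sklearn.metrics.pairwise.cosine_similarity), looks like this:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 = same direction, 0.0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Tiny stand-ins for real embeddings: the query is compared against every
# stored chunk, and the best-scoring chunk becomes context for the prompt.
query = [1.0, 0.0]
chunks = {"chunk A": [0.9, 0.1], "chunk B": [0.0, 1.0]}
best = max(chunks, key=lambda name: cosine_similarity(query, chunks[name]))
print(best)  # chunk A: it points almost the same way as the query
```

Real embeddings from mxbai-embed-large have hundreds of dimensions, and LanceDB handles the storage and the nearest-neighbour search for us; the principle is the same.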

Requests

If you haven't already installed it from the previous chapters, add the requests package with:

uv add requests
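In this setup, requests is handy for talking to Ollama's HTTP API directly (it listens on localhost:11434 by default). A minimal sketch, assuming Ollama's documented /api/embeddings endpoint; the helper names are my own:

```python
OLLAMA_URL = "http://localhost:11434"  # ollama's default local address

def embedding_payload(model: str, text: str) -> dict:
    """Build the JSON body expected by ollama's /api/embeddings endpoint."""
    return {"model": model, "prompt": text}

def embed_via_http(text: str, model: str = "mxbai-embed-large:latest") -> list[float]:
    """Fetch an embedding over plain HTTP (needs the ollama server running)."""
    import requests  # local import so embedding_payload works on its own
    resp = requests.post(f"{OLLAMA_URL}/api/embeddings",
                         json=embedding_payload(model, text), timeout=60)
    resp.raise_for_status()
    return resp.json()["embedding"]
```

This does the same job as the ollama package's embeddings call, but going through requests keeps the door open for any HTTP-level tweaking later.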

Now we have all the dependencies and can start indexing the code, one file at a time.

HTH,