Using Ollama to Install Deepseek 14B on a Laptop

LangChain used Ollama to install DeepSeek 14B on a laptop, running it as a local deep-research model. The commands below pull the model, set a Tavily search API key, and start the LangGraph dev server:

$ ollama pull deepseek-r1:14b
$ export TAVILY_API_KEY=
$ uvx --refresh --from "langgraph-cli[inmem]" --with-editable . --python 3.11 langgraph dev
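Once pulled, the model can also be queried directly over Ollama's local REST API (served at http://localhost:11434 by default), independent of LangGraph. A minimal sketch in Python; the prompt text is illustrative and not from the original post:

```python
import json
import urllib.request

# Ollama's default local endpoint for single-shot text generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str) -> dict:
    """JSON body for one non-streaming generation call."""
    return {
        "model": "deepseek-r1:14b",  # same tag as `ollama pull` above
        "prompt": prompt,
        "stream": False,  # return one complete JSON response
    }

def generate(prompt: str) -> str:
    """POST the request to the local Ollama server and return the reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_request(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Requires `ollama serve` running with the model pulled.
    print(generate("Summarize why one might run an LLM locally."))
```

Because `stream` is false, the server returns a single JSON object whose `response` field holds the full completion, which keeps the client code simple.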

4 thoughts on “Using Ollama to Install Deepseek 14B on a Laptop”

  1. Cool. It’s becoming increasingly obvious that it is critically important to run any AI you’re using locally, rather than rely on web services which are privacy nightmares. And Deepseek, unsurprisingly for a Chinese product, is particularly bad in this regard.

    Always remember that web services are run for the benefit of the web service provider, not the user.

    • I’m particularly interested in this part of your comment: “And Deepseek, unsurprisingly for a Chinese product, is particularly bad in this regard.” What about the DeepSeek-R1 models do you find to be “bad”? Thanks

      • Not the model, the web service, which expressly transmits all data on interactions with users to servers in communist China.

        Which, yes, I consider worse than transmitting all the data to a domestic server.
