LangChain used Ollama to install DeepSeek R1 14B on a laptop, using it to power a local deep-research agent.

$ ollama pull deepseek-r1:14b
$ export TAVILY_API_KEY=
$ uvx --refresh --from "langgraph-cli[inmem]" --with-editable . --python 3.11 langgraph dev
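Once the model is pulled, you can sanity-check it without the LangGraph UI by calling Ollama's local HTTP API directly. This is an illustrative sketch, not part of the LangChain setup above: the endpoint and JSON shape follow Ollama's documented `/api/generate` route at the default `localhost:11434`, while the helper names (`build_payload`, `ask`) are my own.

```python
# Minimal sketch of a client for a locally running Ollama server.
# Assumes the default endpoint http://localhost:11434 and the model
# name "deepseek-r1:14b" pulled by the `ollama pull` command above.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(prompt: str, model: str = "deepseek-r1:14b") -> dict:
    """Build a non-streaming request body for Ollama's /api/generate."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(prompt: str, model: str = "deepseek-r1:14b") -> str:
    """Send a prompt to the local Ollama server and return its reply."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(prompt, model)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # The non-streaming response carries the full text in "response".
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask("In one sentence, why does local inference help privacy?"))
```

Because everything stays on `localhost`, prompts never leave the machine, which is the whole point of the local setup described here.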


Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technologies and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.
But that's not actually DeepSeek you install; it's a distilled version based on another model.
Cool. It’s becoming increasingly obvious that it is critically important to run any AI you’re using locally, rather than rely on web services which are privacy nightmares. And Deepseek, unsurprisingly for a Chinese product, is particularly bad in this regard.
Always remember that web services are run for the benefit of the web service provider, not the user.
I'm particularly interested in this part of your comment: "And Deepseek, unsurprisingly for a Chinese product, is particularly bad in this regard". What about the DeepSeek R1 models do you find to be "bad"? Thanks
Not the model, the web service, which expressly transmits all data on interactions with users to servers in communist China.
Which, yes, I consider worse than transmitting all the data to a domestic server.