How to Use DeepSeek on Your Own Laptop to Keep Your Data Safe

If you’ve ever been concerned about your data when using AI, this is your lucky break!

DeepSeek, a relatively new contender in the AI race, has made waves with its latest model, R1. It rivals the performance of industry leaders like OpenAI at a fraction of the cost!

Even though the model launched on the 20th of January, the hype took a few days to build, peaking last week, as you can see from its search volume:

And the adoption stats back this up: it boasts over 2.6 million app downloads and 10 million AI assistant downloads on Google Play.

Furthermore, it’s reshaping the global tech landscape, affecting major stocks and the US-China tech race, mainly due to its cost-effectiveness and open-source model. It offers high-quality performance without breaking the bank, making it accessible for businesses of all sizes.

Why is this important?

Well, the important bit is right here: it’s open-source. While techies love this word, it means nothing more than that you can download the model yourself. It’s yours – like a book or a CD.

Yes, that’s right. You now have o1-level reasoning capabilities right here on your computer, without paying a dime!

And most importantly: Running DeepSeek R1 locally means your data stays with you. No more sending sensitive information to public LLM providers like OpenAI or Google. Your text generation happens right on your laptop, ensuring complete data privacy.

It’s not that open-source AI models are anything new. They’ve been around since the beginning of the AI race, the most popular being Llama by Meta. I’m sure you’ve heard of that one.

But these models were nowhere near the capabilities of OpenAI’s GPT-4, GPT-4o, and o1 models.

You could have run Llama on your computer back in early 2023 with zero fuss, too, and your data would have been as safe as your afterparty photos before iCloud and Google. But the reasoning capability was so limited that it didn’t make sense for most companies to bother. The go-to was ChatGPT most of the time.

So what happened? Well, nothing. Companies just didn’t use AI for their most important, mission-critical, IP-protected data. You were lucky if you were a public content producer or social media manager, because the content you put out there would become public anyway.

To address this issue, OpenAI introduced various enterprise plans with stronger data privacy agreements. The more courageous companies signed up – but not the majority of the market.

And it’s not only market reports: Concerns around data privacy for AI usage were the most frequently asked questions in our masterclasses and consulting work.

But what happened on the 20th of January is different.

There’s suddenly a model that’s 1. on par with OpenAI’s o1 and 2. completely free to run on your own computer or company tech infrastructure.

Let me repeat that: for the first time in LLM history (that’s ~2 years!), you can process your most secret, private, raison-d'être-defining, unique IP insights with a true state-of-the-art AI model – all without anything ever leaving your company walls.

Let me show you how! The installation can get a bit technical, but don't worry—I've got you covered. Here’s a quick rundown to get you started:

  1. Install Ollama: Think of it as an app store for AI models; it also runs the installed models on your computer. Download it from ollama.com and run the installer.

  2. Download and run the DeepSeek R1 model:

    1. Open a terminal (it’s that black screen with white text). On Windows, press the Windows key, type “cmd” and click “Command Prompt”. On Mac, click the Launchpad icon in the Dock, type Terminal in the search field, then click Terminal.

    2. In the terminal, type “ollama run deepseek-r1:1.5b” (without the quotes). This downloads and starts the smallest member of the R1 family, a 1.5-billion-parameter distilled model. If you have a particularly powerful computer, you can run larger versions instead, such as the 7b or 8b.

  3. Ollama has now started a local server on your computer at the web address http://127.0.0.1:11434 that waits for prompts for the model.
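If you’d like to sanity-check that local server before going any further, here’s a minimal Python sketch using only the standard library. It builds a request against Ollama’s documented /api/generate endpoint; the actual call is left commented out because it only works while Ollama is running on your machine, and the prompt text is just an example:

```python
import json
import urllib.request

# Ollama listens locally on port 11434 by default.
OLLAMA_URL = "http://127.0.0.1:11434/api/generate"

# A one-shot prompt for the model we pulled earlier.
payload = {
    "model": "deepseek-r1:1.5b",
    "prompt": "Explain in one sentence why local AI protects my data.",
    "stream": False,  # return the full answer at once, not token-by-token
}

request = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Uncomment while Ollama is running to get an answer:
# with urllib.request.urlopen(request) as response:
#     print(json.loads(response.read())["response"])
```

Nothing in that request leaves your laptop – 127.0.0.1 always points back at your own machine.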

Now we’ll install the chat application – the part that sends prompts to Ollama and manages your chats.

  1. Install Chatbox: Download the version for your operating system, then run the installer.

  2. Open Chatbox: You’ll see an interface similar to ChatGPT’s:

  3. Connect Chatbox to Ollama

    1. Go to “Settings” on the lower left.

    2. Under the “Model” tab, look for “Model Provider” at the top.

    3. Choose “OLLAMA API”

    4. Then under “Model” click the drop-down and select “deepseek-r1:1.5b”

    5. Click Save

You’re ready to go! Click “New Chat” on the lower left and enjoy your local DeepSeek installation!

Chatbox is packed with features comparable to any subscription service – and it’s free because it runs locally!

  • Chat with Documents & Images: Send files directly to Chatbox and get intelligent responses.

  • Coding Assistant: Generate and preview code in real-time (like Claude Artifacts)

  • Real-Time Web Search & Browsing: Include the latest info from the web in your responses (only for models that support it – currently DeepSeek doesn’t)

  • Visualized Insights: Generate charts for clearer insights.

  • AI-Powered Image Creation

  • Privacy by Design: Keep your data local and secure.
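And if you ever want to script conversations instead of clicking through a GUI, the same local server exposes a /api/chat endpoint that accepts a running message history – which is essentially what Chatbox does for you behind the scenes. A minimal sketch, again standard library only, with the network call commented out because it needs Ollama running; the helper function and prompt text are just illustrative:

```python
import json
import urllib.request

CHAT_URL = "http://127.0.0.1:11434/api/chat"


def build_chat_request(history, user_message):
    """Append the user's message to the history and build the HTTP request."""
    history.append({"role": "user", "content": user_message})
    payload = {"model": "deepseek-r1:1.5b", "messages": history, "stream": False}
    return urllib.request.Request(
        CHAT_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )


history = []  # the full conversation stays on your machine
request = build_chat_request(history, "What makes local AI private?")

# Uncomment while Ollama is running; append the reply to `history`
# so the model keeps the conversation context on the next turn:
# with urllib.request.urlopen(request) as response:
#     reply = json.loads(response.read())["message"]
#     history.append(reply)
#     print(reply["content"])
```

Because the history list lives in your own script, you decide exactly what the model sees and what gets stored.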

If you need help with any of the steps, just respond to this message and I’m happy to help.

And to run larger (and more capable) DeepSeek R1 models, you’ll need to set up a machine with your own cloud provider and repeat this process there. This gives you faster response times thanks to more powerful hardware, and team-wide access, since it’s in the cloud.

If you’re curious to explore more about privacy and running your own AI models, join the Make Work Obsolete community.

Our full Gen AI curriculum includes not only AI privacy topics but ranges from effective prompting techniques to AI agent automation. We host weekly Q&A calls with tech experts, a Q&A forum, and a community of like-minded professionals building their first AI automations.

Cheers,
Robert