Any hardware suggestions for DeepSeek
by m3lon - Sunday March 2, 2025 at 12:42 AM
#1
Just wondering how DeepSeek or similar LLM models may help... I've seen GitHub Copilot suggesting dozens of lines of code, but I'm worried about privacy and so on, so I'm thinking about using a self-hosted LLM...

I'm thinking about using GPT4All or AnythingLLM, but I'm not sure about the kind of hardware that would be required to get acceptable response times for, e.g., DeepSeek. I still have to start my own research into how much it may cost, but if any of you have already tried something, that would also be useful...
Reply
#2
(03-02-2025, 12:42 AM)m3lon Wrote: I'm thinking about using GPT4All or AnythingLLM, but I'm not sure about the kind of hardware that would be required to get acceptable response times for, e.g., DeepSeek.

DeepSeek, Claude, and Grok are probably the best options for coding, but self-hosting narrows it down quite a lot, since most AIs aren't really open source.

The best DeepSeek model requires a really high-end system: https://www.youtube.com/watch?v=Tq_cmN4j2yY
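
For a rough sense of scale, here's a back-of-envelope memory estimate (a minimal sketch; the ~20% overhead factor for KV cache and runtime is my assumption, not a measured number):

def est_memory_gb(params_billions, bits_per_weight, overhead=1.2):
    # Weight memory in GB: params (billions) * bytes per weight,
    # padded ~20% for KV cache and runtime overhead (assumed factor).
    return params_billions * (bits_per_weight / 8) * overhead

# Full DeepSeek-R1 (671B parameters) at 4-bit quantization:
print(f"671B @ 4-bit: ~{est_memory_gb(671, 4):.0f} GB")  # ~400 GB -> server territory
# A 7B distilled variant at 4-bit:
print(f"7B @ 4-bit: ~{est_memory_gb(7, 4):.1f} GB")      # ~4 GB -> fits a gaming GPU

That's why the full model needs server-class hardware while the distilled variants run on a single consumer card.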

Any questions, LMK; I'd be happy to try and help.
Reply
#3
(03-02-2025, 09:28 AM)Spiral Wrote: The best DeepSeek model requires a really high-end system: https://www.youtube.com/watch?v=Tq_cmN4j2yY

Oh, that was a cool watch... 2k USD is not that big... I definitely have to try the writeup...
Reply
#4
Hi m3lon.

I have personally run smaller models on an i5 with 6 cores and 16 GB of RAM, and, well, I was "lucky" that they worked at all; they were very slow.

So what I'm getting at is that you will probably need a high-end computer with 32-64 GB of RAM (or more) and a high-end processor. Alternatively, you have the option of buying a graphics card with 16 GB or more of VRAM and using that to load the model.
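
Since you mentioned GPT4All: once the hardware is there, loading a quantized model onto the GPU is only a few lines (a minimal sketch with the gpt4all Python bindings; the model filename is illustrative, grab any GGUF build that fits your RAM/VRAM):

from gpt4all import GPT4All  # pip install gpt4all

# Filename is illustrative (any GGUF build that fits your hardware works);
# device="gpu" offloads to VRAM where supported, otherwise it runs on CPU.
model = GPT4All("DeepSeek-R1-Distill-Qwen-7B-Q4_0.gguf", device="gpu")

with model.chat_session():
    reply = model.generate("Write a Python function that reverses a string.",
                           max_tokens=200)
    print(reply)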

About "prices", this is the best part: the price of the computer will not be "reduced". I mean: I don't say that you need a IBM server, but even so the price of the computer could aproacching to 3.000-4.000$ (or even surpass it if we add the graphics card).

And I insist: I mention this option with the aim of ensuring that the model you mention runs fluidly and quickly.
Reply
#5
(03-03-2025, 02:43 PM)titohippie Wrote: And I insist: I mention this option with the aim of ensuring that the model you mention runs fluidly and quickly.

Oh, that's the idea... I was initially thinking about giving a second life to some gaming graphics cards, but given your estimates I'm not sure the time spent would pay off... I guess I'll have to change my mind. At least it gives me a reference point...
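
In case it helps anyone else weighing a second-hand card, a quick check of how much VRAM you actually have (a minimal sketch using PyTorch's CUDA query; assumes an NVIDIA card with drivers installed):

import torch  # pip install torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    vram_gb = props.total_memory / 1024**3
    print(f"{props.name}: {vram_gb:.1f} GB VRAM")
    # Rough rule of thumb: a 4-bit 7B model wants ~4-5 GB free.
else:
    print("No CUDA GPU detected; inference would fall back to CPU.")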

I'll keep you posted man, thanks for the comments!
Reply
#6
https://medium.com/@akshaykumar12527/set...b6ac9c92c2
Reply

