- cross-posted to:
- singularity@lemmit.online
• NVIDIA released a demo version of a chatbot that runs locally on your PC, giving it access to your files and documents.
• The chatbot, called Chat with RTX, can answer queries and create summaries based on personal data fed into it.
• It supports various file formats and can integrate YouTube videos for contextual queries, making it useful for data research and analysis.
Shame they leave GTX owners out in the cold again.
2xxx too. It’s only available for 3xxx and up.
Just use Ollama with Ollama WebUI
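For anyone wondering what that looks like in practice, here's a minimal sketch of querying a locally running Ollama server over its HTTP API (assumes Ollama is installed, a model such as llama2 has already been pulled, and the server is listening on its default port 11434; Ollama WebUI is just a front end over the same API):

```python
import requests

# Minimal sketch: query a locally running Ollama server over its HTTP API.
# Assumes `ollama pull llama2` has been run and the server is listening
# on the default port 11434.
OLLAMA_URL = "http://localhost:11434/api/generate"

def ask(prompt: str, model: str = "llama2") -> str:
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask("Summarize what tensor cores are in two sentences."))
```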
The whole point of the project was to use the Tensor cores. There are a ton of other implementations for regular GPU acceleration.
deleted by creator
There were CUDA cores before RTX. I can run LLMs on my CPU just fine.
There are a number of local LLMs that run on any modern CPU. No GPU needed at all, let alone RTX.
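As a rough illustration of CPU-only inference, here's a sketch using the llama-cpp-python bindings; the model path is a placeholder for any quantized GGUF model downloaded separately, and no GPU is involved:

```python
from llama_cpp import Llama

# Sketch of CPU-only local inference with llama-cpp-python.
# "./llama-2-7b-chat.Q4_K_M.gguf" is a placeholder path to a quantized
# GGUF model downloaded separately.
llm = Llama(
    model_path="./llama-2-7b-chat.Q4_K_M.gguf",
    n_ctx=2048,    # context window size
    n_threads=8,   # CPU threads to use
)

result = llm(
    "Q: Does local LLM inference require an RTX GPU? A:",
    max_tokens=64,
    stop=["Q:"],
)
print(result["choices"][0]["text"].strip())
```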
This statement is so wrong. I have Ollama running the llama2 model decently on a 970 card. Is it super fast? No. Is it usable? Yes, absolutely.
Source?