- cross-posted to:
- tech@lemmit.online
I haven’t used Opera since they switched from their own engine to Chromium. They’re now owned by a Chinese company, so it probably has at least as much tracking built into it as Google Chrome these days.
I miss old Opera, before the buyout.
That’s essentially Vivaldi now.
Apart from it being Chromium-based 😕
Have a look at Otter Browser. It aims to replicate the old interface. It uses QtWebEngine, since Presto was closed source. It has been in development for 10 years now, and it is open source.
QtWebEngine is Chromium :(
It’s Chromium all the way down.
Qt WebEngine uses code from the Chromium project. However, it does not contain all of Chrome/Chromium: binary files are stripped out, and auxiliary services that talk to Google platforms are stripped out. (Source)
While that’s one of the reasons I don’t want to use Chromium, it’s not actually the main reason; if it were, I’d just use Ungoogled Chromium. I just want more web engines, and I don’t want Google to monopolise the internet.
Thanks, didn’t know about this
Vivaldi is made by many of the same people with similar features and vibe. It’s also chromium-based, though.
I used Fifth a bit, which was something aesthetically similar to old Opera, built with FLTK and a WebKit port to FLTK. But it’s abandoned now.
It’s so sad really. When I was a Windows user, it was Opera; when I moved to Linux, it was again Opera; then I also started using Conkeror (based on XULRunner).
Then Opera died. Then XULRunner died. No usable web browser anymore.
Don’t you like Firefox? It’s on both Windows and Linux.
I said “usable”; it was usable when XULRunner was a thing (and you could use Firefox instead of just XULRunner).
Why the hell do I need this in a web browser? Why isn’t it a stand alone app?
If you think of LLMs as a thing to replace search bars then this kind of makes sense.
Just more unnecessary browser bloat.
Like search bars.
The more search bars the faster your internet becomes!
This is true. I asked my LLM.
If you think of LLMs as a thing to replace search bars
I don’t.
I haven’t tried LLMs myself, but even completely made up garbage would be better than today’s search engine results.
You either get advertisements for things that have nothing to do with what you’re trying to find, or you get privacy-preserving links to sites that have nothing to do with what you’re trying to find.
There are plenty of stand-alone LLM apps.
Same reason people get their WiFi from their ISP’s modem+router combo, even though it’s stupid to do so: people often mistake initial convenience for quality.
That’s a cool feature for sure, but I don’t trust Opera.
Can’t they just stick to normal browser things like gaming integrations?
Interesting. But I’m curious about the performance.
A bigger LLM (Mixtral) already struggles to run on my mid-range gaming PC. Trying to run an LLM that isn’t terrible on a standard laptop wouldn’t be a good experience.
I have no idea how this is set up to work technically, but most of the heavy lifting is gonna be on the GPU. I’m not sure that it matters much whether the browser is what’s pushing data to the GPU or some other package.
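For illustration, here’s a minimal sketch of how local runtimes typically split that work, using llama-cpp-python as an assumed example (Opera hasn’t said what backend it uses, and the model filename is hypothetical):

```python
# Sketch with llama-cpp-python (an assumption for illustration; not how
# Opera's feature is actually implemented). n_gpu_layers controls how many
# transformer layers get offloaded to the GPU; the rest run on the CPU.
from llama_cpp import Llama

llm = Llama(
    model_path="mistral-7b-instruct.Q4_K_M.gguf",  # hypothetical local file
    n_gpu_layers=32,  # push most layers to the GPU if VRAM allows
    n_ctx=4096,       # context window size
)

result = llm("Summarize this article in one sentence:", max_tokens=64)
print(result["choices"][0]["text"])
```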
Most people probably don’t have a dedicated GPU, and an iGPU is probably not powerful enough to run an LLM at a decent speed. Also, a decent model requires something like 20 GB of RAM, which most people don’t have.
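As a rough back-of-the-envelope check of that figure (all numbers approximate; ~47B is Mixtral 8x7B’s published total parameter count):

```python
# Rough estimate: parameter count * bytes per weight, plus ~20% overhead
# for the KV cache and runtime buffers. All figures are approximate.
def model_memory_gb(params_billions: float, bytes_per_weight: float) -> float:
    return params_billions * bytes_per_weight * 1.2

print(model_memory_gb(47, 0.5))  # Mixtral 8x7B (~47B params) at 4-bit: ~28 GB
print(model_memory_gb(47, 2.0))  # the same model at fp16: ~113 GB
print(model_memory_gb(7, 0.5))   # a 7B model at 4-bit: ~4 GB
```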
It doesn’t just require 20GB of RAM, it requires that in VRAM. Which is a much higher barrier to entry.
But what if you have an AMD APU. Doesn’t that use your normal RAM as VRAM?
Not exactly. Most integrated chips have a small pool of dedicated VRAM, and then a bit more that they share with system memory, though it’s generally only a portion, not all of it. As far as I’m aware, it’s only Apple’s unified memory, and maybe some other mobile chips, that share the memory pool entirely, for better or worse.
But it is worth noting that if you don’t have enough VRAM and have to spill into system RAM, the rule of thumb is that you need twice that amount of RAM. So if you have a GPU with 4 GB of VRAM and need to offload the rest of a ~20 GB model to the system, you don’t need 16 GB of RAM, you need 32 GB.
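Taking that 2x rule of thumb at face value, the arithmetic works out like this (illustrative numbers from the comment above):

```python
# Working through the comment's example, taking the 2x rule at face value.
model_gb = 20          # approximate size of a "decent" model from above
vram_gb = 4            # the GPU's dedicated VRAM
spill_gb = model_gb - vram_gb   # 16 GB that doesn't fit on the GPU
ram_needed_gb = spill_gb * 2    # the stated 2x rule of thumb
print(ram_needed_gb)            # 32
```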
Unlikely, at least on non-Nvidia chips, and even on AMD it’s only the latest four chips that support it. Anything older isn’t going to cut it.
You also need a fairly large amount of VRAM for models like that (4 GB is the minimum for the common kinds, which is more than typical integrated systems have), or 8 GB of system memory. You can get by with system RAM, but the performance will be quite bad, since you’re either relying on the CPU or adding the latency of data moving between the two.
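One rough way to see why: token generation is largely memory-bandwidth-bound, since every weight has to be read for each token, so an upper bound on speed is bandwidth divided by model size (bandwidth figures below are typical ballparks, not measurements):

```python
# Crude upper bound: generating one token means reading every weight once,
# so tokens/sec <= memory bandwidth / model size. Ballpark figures only.
def max_tokens_per_sec(bandwidth_gb_per_s: float, model_gb: float) -> float:
    return bandwidth_gb_per_s / model_gb

print(max_tokens_per_sec(50, 4))    # dual-channel DDR4 system RAM: ~12 tok/s
print(max_tokens_per_sec(450, 4))   # mid-range GPU VRAM: ~112 tok/s
```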
What’s LLM?
https://en.wikipedia.org/wiki/Large_language_model
A lot of the “AI” stuff that’s been in the news recently, chatbots and image generation and such, is based on LLMs.