Opera supports running local LLMs without a connection

Opera has added experimental support for running large language models (LLMs) locally on the Opera One Developer browser as part of its AI Feature Drop Program.

Exclusive for now to the developer version of Opera One, Opera’s main internet browser, the update adds 150 LLMs from 50 different LLM families, including LLaMA, Gemma, and Mixtral. Previously, Opera only offered support for its own LLM, Aria, a chatbot in the same vein as Microsoft’s Copilot and OpenAI’s ChatGPT.

The key difference between Aria, Copilot (which Microsoft only aspires to partially run locally at some point in the future), and similar AI chatbots, however, is that they depend on an internet connection to a dedicated server. Opera says that with the locally run LLMs it has added to Opera One Developer, data stays on users’ PCs, and no internet connection is needed except to download the LLM initially.
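
To make that offline claim concrete, here’s a minimal sketch of the general pattern, written against the open-source llama-cpp-python bindings rather than Opera’s own runtime (which it hasn’t documented); the model filename is a placeholder:

```python
# A generic illustration of local LLM inference, not Opera's implementation.
# The model file is downloaded once; after that, everything runs on-device.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/gemma-2b.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=2048,  # context window size
)

# The prompt and the response never leave the machine.
output = llm(
    "Q: Why would anyone run an LLM locally? A:",
    max_tokens=64,
    stop=["Q:"],
)
print(output["choices"][0]["text"])
```

The only step that touches the network is fetching the model file; inference itself reads the weights from disk and runs on the local CPU or GPU.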

Opera also floated a potential use case for its new local LLM feature: “What if the browser of the future could rely on AI solutions based on your historic input while containing all of the data on your device?” While privacy enthusiasts probably like the idea of their data being kept on their PCs and nowhere else, a browser-based LLM that remembers quite that much might not be as attractive.

“This is so bleeding edge, that it might even break,” says Opera in its blog post. Though a quip, it isn’t far from the truth. “While we try to ship the most stable version possible, developer builds tend to be experimental and may be in fact a bit glitchy,” Opera VP Jan Standal told The Register.

As for when this local LLM feature will make it to regular Opera One, Standal said: “We have no timeline for when or how this feature will be introduced to the regular Opera browsers. Our users should, however, expect features launched in the AI Feature Drop Program to continue to evolve before they are introduced to our main browsers.”

Since it can be pretty hard to compete with big servers equipped with high-end GPUs from companies like Nvidia, Opera says going local will probably be “considerably slower” than using an online LLM. No kidding.

However, storage might be a bigger problem for those wanting to try lots of LLMs. Opera says each LLM requires between two and ten gigabytes of storage, and when we poked around in Opera One Developer, that held for lots of models, though some came in under the stated range at around 1.5 GB.

Plenty of LLMs offered through Opera One require well over 10 GB, though. Many were in the 10 to 20 GB region, some were roughly 40 GB, and we even found one, Megadolphin, weighing in at a hefty 67 GB. If you wanted to sample all 150 varieties of LLM included in Opera One Developer, a standard 1 TB SSD probably isn’t going to cut it.
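
The back-of-the-envelope arithmetic bears that out. Here’s a rough estimate, assuming an illustrative size distribution loosely based on the figures above (the exact breakdown of the catalog is our guess, not Opera’s):

```python
# Rough storage estimate for downloading all 150 models, using an
# assumed size distribution based on the figures in the text.
buckets = {
    5:  100,  # ~100 models averaging ~5 GB (the 2-10 GB crowd)
    15:  40,  # ~40 models in the 10-20 GB region, ~15 GB average
    40:   9,  # a handful at roughly 40 GB
    67:   1,  # Megadolphin
}

total_gb = sum(size * count for size, count in buckets.items())
print(f"Estimated total: {total_gb} GB (~{total_gb / 1000:.2f} TB)")
# Estimated total: 1527 GB (~1.53 TB) -- comfortably past a 1 TB SSD.
```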

Despite these limitations, it does mean Opera One (or at least its Developer branch) is the first browser to offer a built-in way to run LLMs locally. It’s also one of the few solutions of any kind for bringing LLMs to local PCs, alongside Nvidia’s ChatWithRTX chatbot and a handful of other apps. Though it is a bit ironic that an internet browser now ships with an impressive spread of AI chatbots that explicitly don’t need the internet to work. ®
