The future of AI hardware is no hardware at all — why I think Rabbit and Humane just want to be bought by Apple
Let me be clear — this headline is a tad hyperbolic. “Tech journalist gets spicy in the title” is not the biggest shocker, but the essence of my point is very much there. Not only that, but at the Rabbit R1 launch party, CEO Jesse Lyu kind of kick-started the process. Let me explain.
Rabbit looks set to remain a hardware company for the time being, and as for Humane… Given the reviews, I wouldn’t be surprised if there are some crunch talks happening behind the scenes about the company’s future. Having been part of a small startup that vanished in the midst of COVID, I’ve been through those anxiety-ridden meetings myself, and my heart goes out to them.
But when I look at these companies, I can’t help but feel there’s an ulterior motive. Namely that the devices are more of a proof of concept that people can buy — the physical manifestation of a pitch deck for acquisition by the likes of Apple or Google.
What’s for sale to the right buyer?
The Cupertino chums and the Mountain View crew have certainly been on a shopping spree when it comes to AI, and it wouldn’t be too far of a stretch to say that these acquisitions have been for the sole purpose of improving their own efforts in this field.
Google Gemini has been going from strength to strength, and we’re hearing a ton about how iOS 18 is going to be one of the biggest updates we’ve ever seen to the iPhone — courtesy of AI. But there’s only so much they can realistically do before stumbling upon the rather broad patent that Rabbit has around the Large Action Model (LAM).
For the uninitiated, the LAM taps into the rising trend of AI agents: it works a bit like a Large Language Model (LLM), except that instead of just generating text, it learns how to interact with websites and phone apps. The potential here is to take AI from something you ask questions of (and then act on the answers yourself) and turn it into a genuine assistant that can do things for you.
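To make that idea concrete, here’s a minimal, purely illustrative sketch in Python: instead of answering with text, an action model turns a request into a sequence of UI steps it has learned to perform. Every name here is hypothetical; this is not Rabbit’s actual API, nor how the LAM is really implemented.

```python
from dataclasses import dataclass

@dataclass
class UIAction:
    kind: str        # e.g. "tap", "type", "scroll"
    target: str      # the on-screen element the model has learned to recognise
    value: str = ""  # text to enter, if any

def plan_actions(request: str) -> list[UIAction]:
    """Toy planner: a real action model would learn these steps from demonstrations."""
    if "taxi" in request.lower():
        return [
            UIAction("tap", "ride_app_icon"),
            UIAction("type", "destination_field", "airport"),
            UIAction("tap", "confirm_button"),
        ]
    return []

for step in plan_actions("Book me a taxi to the airport"):
    print(step)
```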
Sure, Apple and Google could come close to recreating this with their in-house developers — plugging directly into their services like Google Travel and iWork to make AI actionable, or even giving their version of agents the option to access apps at an API level (working through the code rather than learning to use the surface-level app).
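For contrast, here’s what that API-level route might look like in the same toy style: the assistant skips the app’s interface entirely and calls a service’s backend directly. The endpoint and function below are made up for illustration; this isn’t Apple’s or Google’s actual code.

```python
import json
from urllib import request

def book_ride(destination: str,
              api_url: str = "https://example.com/api/rides") -> dict:
    """Hypothetical API-level booking: no screen-reading or tapping involved."""
    payload = json.dumps({"destination": destination}).encode("utf-8")
    req = request.Request(api_url, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:  # placeholder URL, not a real service
        return json.loads(resp.read())

# book_ride("airport")  # left commented out: the endpoint above is a placeholder
```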
But having something like Rabbit’s LAM baked directly into a smartphone’s OS, where it could take advantage of the beastly neural engine in the likes of the iPhone 15 Pro Max and Samsung Galaxy S24 Ultra, is a mightily tasty prospect.
Are the hardware compromises priced in?
At the moment, though, these impressive features are locked to the devices themselves: hardware with 4G LTE connectivity that puts a ceiling on just how much latency can be cut, and an extra thing you have to carry around with you.
Let me be clear: I’m all for how AI has impacted hardware design and innovation. As you’ve probably read in my Rabbit R1 hands-on impressions, I’m a huge fan of its fun-yet-utilitarian aesthetic and playful orange hue. Plus, I absolutely love my Meta Ray-Ban smart glasses (especially after how much they helped me during my trip to Costa Rica).
However, there’s no denying that some corners have been cut here, from the restrained connectivity to the chipsets inside them. Now why is that? There’s certainly an element of keeping costs down; the R1 sells for $199, after all.
If I were to speculate wildly, though, the people behind these devices are smarter than I am. Surely they know that the smartphone already in your pocket is far more capable of handling AI tasks than their own hardware, without the need to clip something to your shirt or fill another pocket.
So what if these devices are just a proof of concept that happens to be available to buy? I mean, working with consumers to make your AI models smarter sounds pretty, well, smart to me. And in turn, that will drive up both the usefulness of the LAM and the value of the company.
Like any company, Rabbit is looking forward and has ambitious plans. One of them may be a bit of a giveaway as to those intentions.
Hopping towards a big buyout
In short, the Rabbit R1 launch party keynote gave us three key takeaways: the R1 is giving us version 1.0 of the patented LAM, the company is thinking about a new wrist-based wearable AI device, and (most importantly for the point I’m making here) it wants to turn the RabbitOS that runs the R1 into an AI-native desktop OS.
This sounds pretty black and white, but it could mean a few different things once you start to question it. Is Rabbit aiming for this to be an app that you download onto your computer? Or are they looking to stand alongside Windows 11 and macOS as a fully fledged OS?
The former would be a rather small hurdle to clear, and one that would be limited in scope. The latter, however, is a helluva ambitious goal, and one that would simply be an insurmountable challenge given how deeply entrenched desktop software is. Plus, let’s not forget that both Microsoft and Apple are already knee-deep in bringing AI to their own OSes.
What I see here is a small but telling sign that the company’s direction is to hedge its bets across hardware and software, with an eye to either licensing the LAM to other companies or getting acquired. That’s not to say Lyu is eyeing up every potential suitor, desperate to sell off. He’s created something big here, which is obvious given the popularity of the R1.
And as someone who crashed and burned in a startup in the past, I don’t regret a single day of that experience: the chance to look at something and challenge yourself not only to do it differently, but to do it better. Personally, I hope Rabbit grows from its humble roots and stays independent.
But I simply can’t ignore a sign when I see it, and the more I think about it, the more it makes sense. The LAM would fit nicely into the flock of AI features reportedly coming to iOS 18, for example, and Apple has deep enough pockets to snap up something that could make Siri leapfrog its smart assistant competition.