Cache Me Out

Technology on the move.

                        <article>
Google's unveiling of a new line of AI-fueled smart glasses built on the Android XR platform was only one of dozens of announcements at Google I/O this year. Even so, one detail in particular struck me as more important than it might have seemed to a casual viewer.

While the idea of wearing AI-powered lenses that can whisper directions into your ears while projecting your to-do list onto a mountain vista is exciting, it's how you'll look while you use them that grabbed my attention. Specifically, Google's partnerships with Warby Parker and Gentle Monster to design its new smart glasses.

The spectre of Google Glass and the shadow cast by the so-called Glassholes wearing them went unmentioned, but it's not hard to see the partnerships as part of a deliberate strategy to avoid repeating the mistakes made a decade ago. Wearing Google Glass might have said, "I'm wearing the future," but it also hinted, "I might be filming you without your consent." No one will think that Google didn't consider the fashion aspect of smart glasses this time. Meta's Ray-Ban collaboration is based on a similar impulse.

If you want people to wear computers on their faces, you have to make them look good. Warby Parker and Gentle Monster are known for creating glasses that appeal to millennials and Gen Z, both in look and price.

"Warby Parker is an incredible brand, and they've been really innovative not only with the designs that they have but also with their consumer retail experience. So we're thrilled to be partnered with them," said Sameer Samat, president of Google's Android Ecosystem, in an interview with Bloomberg. "I think between Gentle Monster and Warby Parker, they're going to be great designs. First and foremost, people want to wear these and feel proud to wear them."

Smart fashion

Wearables are not mini smartphones, and treating them that way has proven to be a mistake. Just because you want to scroll through AR-enhanced dog videos doesn't mean you don't want to look good while you do it.

Plus, smart glasses may be the best way to integrate generative AI like Google Gemini into hardware. Compared to the struggles of the Humane AI Pin, the Rabbit R1, and the Plaud.ai NotePin, smart glasses feel like a much safer bet.

We already live in a world saturated with wearable tech. Smartwatches are ubiquitous, and wireless earbuds also carry microphones and biometric sensors. Glasses occupy a lot of your face's real estate, though, and people identify you by them far more than by your watch. Augmented reality devices sitting on your nose need to be appealing, no matter which side of the lenses you look at.

Combine that with what the smart glasses offer wearers, and you have a much stronger product. They don't have to do everything, just enough to justify wearing them. The better they look, the less justification you need for the tech features.

Teaming up with two companies that actually understand design shows that Google gets it. Google isn't pretending to be a fashion house; it's outsourcing style to people who know what they're doing. Google seems to have learned that if smart glasses are going to work as a product, they need to blend in with other glasses, not proclaim to the world that someone is wearing them.

How much they cost will matter, as setting smart glasses prices to match high-end smartphones will slow adoption. But if Google leverages Warby Parker and Gentle Monster's direct-to-consumer experience to keep prices reasonable, it might entice a lot more people, and possibly undercut its rivals. People are used to spending a few hundred dollars on prescription glasses; a reasonably sized extra charge for AI will feel like just another perk, like polarized prescription sunglasses.

Success here might also ripple out to smaller but fashionable eyewear brands. Your favorite boutique frame designer might eventually offer 'smart' as a category, the way they offer transition lenses today. Google is betting that people will choose to wear technology if it looks like something they would choose to wear anyway, and a bet on people wanting to look good is about as safe a bet as I can imagine.

You might also like...
  • Why 2025 will be the year of the AI smart glasses
  • Meta AI is here to take on ChatGPT and give your Ray-Ban Meta Smart Glasses a fresh AI upgrade
  • I tried Google's Android XR prototype and they can't do much but Meta should still be terrified
                                                        </article>
                        <article>
<hr>
  • Anthropic has debuted two new Claude AI models named Claude Opus 4 and Claude Sonnet 4
  • Anthropic claims Claude Opus 4 is the best coding AI in the world
  • Claude Sonnet 4 is a smaller, streamlined model with major upgrades over Sonnet 3.7

Anthropic has unveiled Claude 4, the latest generation of its AI models. The company boasts that the new Claude Opus 4 and Claude Sonnet 4 models are at the top of the game for AI assistants, with unmatched coding skills and the ability to work independently for long stretches of time.

Claude Sonnet 4 is the smaller model, but it's still a major upgrade in power over the earlier Sonnet 3.7. Anthropic claims Sonnet 4 is much better at following instructions and coding, and it's even been adopted by GitHub to power a new Copilot coding agent. It's likely to be much more widely used simply because it is the default model on the free tier of the Claude chatbot.

Claude Opus 4 is Anthropic's flagship model and supposedly the best coding AI around. It can also handle sustained, multi-hour tasks, breaking them into thousands of steps. Opus 4 also includes the "extended thinking" feature Anthropic tested on earlier models, which lets the model pause in the middle of responding to a prompt, use search engines and other tools to gather more data, and then resume right where it left off.

That means a lot more than just longer answers. Developers can hook Opus 4 up to all kinds of third-party tools. Opus 4 can even play video games pretty well, with Anthropic showing off how the AI performs in a game of Pokémon Red when given file access and permission to build its own navigation guide.

Claude 4 Pokémon

(Image credit: Anthropic)

Claude 4 power

Both Claude 4 models boast enhanced features centered around tool use and memory. Opus 4 and Sonnet 4 can use tools in parallel and switch between reasoning and searching, and their memory system can save and extract key facts over time when given access to external files. You won't have to re-explain what you want on every third prompt.

To show you what the AI is doing without overwhelming you with every detail, the Claude 4 models also offer what Anthropic calls "thinking summaries." Instead of a wall of text detailing each of the potentially thousands of steps taken to complete a prompt, Claude employs a smaller, secondary AI model to condense the train of thought into something digestible.

A side benefit of the way the new models work is that they're less likely to cheat to save time and processing power. Anthropic says it has reduced shortcut-seeking behavior in tasks that tempt AIs to fake their way to a solution (or just make something up).
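
For a sense of what hooking a Claude model up to a third-party tool looks like in practice, here's a minimal sketch using Anthropic's existing Messages API and Python SDK. The get_weather tool is a made-up example, and the Opus 4 model ID below is an assumption worth checking against Anthropic's docs.

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-opus-4-20250514",  # assumed Opus 4 model ID; confirm the current name
    max_tokens=1024,
    tools=[
        {
            "name": "get_weather",  # hypothetical tool for illustration only
            "description": "Look up the current weather for a city.",
            "input_schema": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }
    ],
    messages=[{"role": "user", "content": "Do I need an umbrella in Oslo today?"}],
)

# If Claude decides the tool is needed, the reply contains a tool_use block with
# the arguments it wants; your code runs the tool and sends the result back.
for block in response.content:
    if block.type == "tool_use":
        print("Claude wants to call", block.name, "with", block.input)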

The bigger picture? Anthropic is clearly gunning for the lead in AI utility, particularly in coding and agentic, independent tasks. ChatGPT and Google Gemini have bigger user bases, but Anthropic has the means to entice at least some AI chatbot users over to Claude. With Sonnet 4 available to free users and Opus 4 bundled into the Claude Pro, Max, Team, and Enterprise plans, Anthropic is trying to appeal to both budget-conscious and premium AI fans.

You might also like
  • How Claude 3.7's new 'extended' thinking compares to ChatGPT o1's reasoning
  • I tried Claude's new Research feature, and it's just as good as ChatGPT and Google Gemini's Deep Research features
  • Claude goes to college and wants to be your study buddy
                                                        </article>
                        <article>
"AI-lationships" is the gag-inducing term Joi AI cooked up to support its recent eye-opening survey on human-to-AI relationships. In it, eight out of 10 Gen Z respondents said they would consider marrying an AI partner.

Before we delve too much into this mind-bending stat, let's look at the source. Joi AI, formerly EVA AI, is a premium online AI companion service that offers a wide range of AI companion personalities, complete with AI-generated imagery that can be, depending on settings and what you pay, NSFW.

It's kind of a cheesy service that caters mostly, I think, to lonely men. Now, don't get me wrong; I know there's a growing epidemic of loneliness. A recent Harvard study found that 21% of US adults report some level of loneliness (and some studies suggest the number is far higher).

Disconnection

Remote work, screen time, and other things that take us away from direct human connection are probably not helping this trend, but AI has increasingly stepped into the connection void with a growing army of voice chatbots that can carry on surprisingly realistic and even empathetic-sounding conversations.

And this is by design. Earlier this month, Meta CEO Mark Zuckerberg, whose company is building powerful AI models, suggested we should all have AI friends.

Marriage, then, is perhaps the next logical extension.

The concept of deep, personal relationships between humans and artificial intelligence traces back to well before we had Gemini Live, ChatGPT, Copilot, and others ready and willing to converse with us at length. The 2013 movie Her was built around the idea of a deeply personal (and concerning) relationship between Joaquin Phoenix's character and Scarlett Johansson's disembodied AI voice long before we could talk to a single AI in real life.

I've had my share of AI conversations, and I find them entertaining and, often, illuminating. I don't see them as personal, though. Perhaps that's because I'm not lonely. The more desperate you are for human connection, the more AI companionship might seem like a reasonable substitute.

But marriage?

Meet-cute in the cloud

At least Joi AI adds static imagery to the playful banter you'll find with its AI partners, but that's the exception, not the rule. Most generative AI chatbots are just voices and undulating screens. You need images and, ultimately, touch to make a genuine connection... don't you?

As I write this, I'm reminded that I met my wife through a phone call and that I was enchanted, initially, by nothing but her voice and wit. But to build our relationship and eventual union, we did date in person. Being with her sealed the deal and made me want to marry her.

I don't understand why Joi AI's respondents, even Generation Z, who are much more deeply immersed in technology, social media, and AI than any generation before them, would accept an AI as a life mate. In the survey, though, they do sound primed for AI connection, with 83% saying they "could build a deep emotional bond with an AI partner."

One expert I spoke to via email, Dr. Sue Varma, a board-certified psychiatrist and author of Practical Optimism, put it in perspective for me. "At our core, we all want the same things: to be seen, to be heard, and to feel valued – not judged or criticized. For Gen Z, that longing is especially strong, and the loneliness they're experiencing is very real. What they want, what we all want, is meaningful, mutual human connection."

Unconvinced that Joi AI's data points to a real trend (I did ask the company for survey details and have yet to receive a response), I ran a couple of anecdotal polls on X (formerly Twitter) and Threads. Across both, fewer than 10% said yes, they would consider marrying an AI, roughly a third said no on Threads, and the vast majority wondered if I was okay.

As preposterous as I find the whole idea of AI relationships and eventual marriage, I also understand that we're at the start of a revolution. AI's ability to mimic human language and even emotions is growing exponentially, and there's already growing concern about human-to-AI relationships.

"Technology—and AI in particular—isn't going away. It's going to keep evolving, and yes, it may offer relationships that seem easy, even comforting. Think of the always-affirming AI: the hype person, the yes-person, the one that never challenges us and always tells us what we want to hear. It's seductive. But it's not real," said Dr. Varma, adding, "What we really need to be doing is using AI to support our humanity, not replace it."

The latest Gemini and ChatGPT models deliver incredibly human- and expressive-sounding conversations. Some believe AIs have already beaten the Turing test (passed, essentially, when a computer's responses are indistinguishable from a human's, at least as perceived by another human).

We will, in this decade, see humanoid robots equipped with these AIs, and that's when things will get really weird. How long before some dude is marrying his AI bot in Vegas?

Joi AI's self-serving survey is ridiculous on the face of it, even if it is also a harbinger of AI relationships to come – and I hope Gen Z swipes left on the whole idea.

You might also like
  • SocialAI makes you the most important – and only – person on social media
  • Meta wants to fill your social media feeds with bots – here's why I think it's wrong
  • I tried the new Meta AI app and it's like ChatGPT for people who like to overshare
                                                        </article>
                        <article>
                            <hr>
  • Google has launched Gemini home screen widgets for Android and iOS devices
  • The widgets let users access Gemini AI features with a single tap
  • The widgets are customizable and allow users to prioritize their most-used Gemini actions

If you like using Google Gemini on your smartphone but find it tedious to tap multiple times to get to the feature you want, Google has you covered. The tech giant has begun rolling out Gemini home screen widgets for both Android and iOS. That means a single tap can launch right into a conversation with Gemini, open the microphone for a voice conversation, share a file with the AI, or even snap a photo with the camera that will go right to Gemini.

The rollout is happening gradually but widely. If you’re running Android 10 or higher, you can already add the Gemini widget by long-pressing on your home screen, tapping “Widgets,” finding Gemini in the list, and dragging it wherever you want it to live. For iOS 17 and up, it’s a similar story: hold your home screen until the icons jiggle, tap the plus button, search for Gemini, and add your widget of choice. You can also customize it by long-pressing it again and swapping out shortcuts or rearranging which actions appear first, such as the microphone for voice chats or the camera button for visual searches.

This update isn't necessarily groundbreaking, but it speaks to the way a lot of people use Gemini: for short activities or quick tasks, without wanting to immerse themselves in the app any more than necessary.

If you use Gemini every day to ask questions, create funny images, plan trips, or brainstorm emails, this could make accessing the AI a little more convenient. The fact that it is also closer to how Siri and the rapidly dissolving Google Assistant function is probably not a coincidence.

These aren't Gemini's first mobile widgets, either. Google released similar Gemini widgets for the iPhone lock screen a couple of months ago. Though functionally close, they are technically a different form of widget. Google is rolling out the home screen Gemini widget gradually over the next week, so you may only have the lock screen variant available right now.

Widget Gemini

The widgets also offer a glimpse into Google's strategy for infusing Gemini into our daily lives. They want people to think about AI as not just something you call on occasionally, but as a day-to-day tool that is instantly and easily accessible. Instead of lurking in the background, Gemini becomes a part of the interface.

Starting with mobile devices is a smart move for making Gemini feel more like a core service. A lot of people first try out new tech features and products on mobile devices, not on their laptops or desktop computers. If they like it on a mobile device, maybe that will translate to desktop usage. And if you're going to use AI on your phone, it should be quick and casual, like checking the weather or the time.

Gemini’s widgets are fairly basic at the moment, but they set the foundation for more complex widgets to come. Imagine a future widget that surfaces ongoing conversations so you can finish an interrupted project, or one that shows real-time updates from custom topics you follow, or even offers proactive suggestions based on your habits.

All in all, these new widgets are less about bells and whistles and more about removing friction. They aim to give Gemini a faster, more native-feeling way into your daily habits, whether on Android or iPhone. The widget may be the wedge Gemini needs to fulfill every little request an AI assistant can handle.

                                                        </article>

Source: You can put Google Gemini right on your smartphone home screen – here’s how

                        <article>
                            <hr>
  • ChatGPT-5 is delayed by a few months
  • The time will allow OpenAI to better integrate the new model
  • New o3 and o4-mini models to come in a couple of weeks

OpenAI has changed its plans and is set to put ChatGPT-5 on hold while releasing new o3 and o4-mini models in the next couple of weeks instead.

The news broke today in a tweet by OpenAI CEO Sam Altman, in which he revealed why the plans were changing:

“There are a bunch of reasons for this”, wrote Altman, “but the most exciting one is that we are going to be able to make GPT-5 much better than we originally thought.

"We also found it harder than we thought it was going to be to smoothly integrate everything, and we want to make sure we have enough capacity to support what we expect to be unprecedented demand.”

The mention of 'capacity to support unprecedented demand' is clearly a reference to the recent outages that ChatGPT has been experiencing as millions of new users signed up to try out the new image generation abilities of ChatGPT-4o.

The next evolution of AI

ChatGPT-5 is the next big evolution of the LLM behind the popular ChatGPT chatbot, and it's expected to be a major development in the future of AI.

Its simpler name was also supposed to signal a shift away from OpenAI's somewhat confusing product-naming conventions, which will now soon feature both an o4 and a 4o model in the line-up simultaneously.

Rather than you having to decide whether a task calls for a smaller, lighter model, such as 4o-mini, or a deeper reasoning model, like o4, ChatGPT-5 will decide which type of model to use based on your query.
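
To make that idea concrete, here's a toy sketch of what query-based model routing could look like if you wired it up yourself today with the OpenAI Python SDK. The keyword heuristic and the specific models used here (gpt-4o-mini and o3-mini) are illustrative stand-ins, not how GPT-5 will actually route requests.

# A toy router (not OpenAI's actual mechanism): send "hard" prompts to a
# reasoning model, everything else to a lighter, cheaper one.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

REASONING_HINTS = ("prove", "step by step", "debug", "optimize", "why does")

def ask(prompt: str) -> str:
    # Crude stand-in for the automatic model selection GPT-5 is expected to do.
    needs_reasoning = any(hint in prompt.lower() for hint in REASONING_HINTS)
    model = "o3-mini" if needs_reasoning else "gpt-4o-mini"
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(ask("Why does my binary search loop forever? Walk through it step by step."))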

So far, OpenAI has confirmed that even users on the free tier will have some access to ChatGPT-5 when it comes out, but users on the Pro and Plus tiers will get more. The only word on a release date we’ve been given before was “soon”.

Now it looks like we’ll have to wait a little bit longer for that integration of everything into one model, with Altman stating that ChatGPT-5 would now appear “in a few months”.

OpenAI o3 Mini

The ChatGPT-o3 model was first previewed as part of 12 Days of OpenAI in December 2024. (Image credit: Future)

ChatGPT-o3 improvements

Commenting on the new o3 model, Altman also said, “We were able to really improve on what we previewed for o3 in many ways; I think people will be happy…”

Replying to a user on X who asked if there would also be an o3 Pro model, Altman gave a one-word reply – “coming!” – which would seem to confirm that a pro version of o3 is also in the works.

As to when we will see the o3 and o4-mini models, Altman stated, “in a couple of weeks, and then do GPT-5 in a few months”.

                                                        </article>

Source: ChatGPT-5 is on hold as OpenAI changes plans and releases new o3 and o4-mini models