Here were the five biggest headlines in this week’s AI news | Digital Trends

We officially transitioned into Spooky Season this week and, between OpenAI’s $6.6 billion funding round, Nvidia’s surprise LLM, and some privacy-invading Meta smart glasses, we saw a scary number of developments in the AI space. Here are five of the biggest announcements.

OpenAI CEO Sam Altman standing on stage at a product event.
Andrew Martonik / Digital Trends

OpenAI secures $6.6 billion in latest funding round

Sam Altman’s charmed existence continues apace with news this week that OpenAI has secured an additional $6.6 billion in investment as part of its most recent funding round. Existing investors like Microsoft and Khosla Ventures were joined by newcomers SoftBank and Nvidia. The AI company is now valued at a whopping $157 billion, making it one of the most valuable private companies on Earth. And, should OpenAI’s proposed for-profit restructuring plan go through, that valuation would grant Altman more than $150 billion in equity, rocketing him onto the list of the top 10 richest people on the planet. Following the funding news, OpenAI rolled out Canvas, its take on Anthropic’s Artifacts collaborative feature.

Nvidia CEO Jensen Huang in front of a background.
Nvidia

Nvidia just released an open-source LLM to rival GPT-4o

Nvidia is making the leap from AI hardware to AI software with this week’s release of NVLM 1.0, a truly open-source large language model that excels at a variety of vision and language tasks. The company claims that the new model family, led by the 72-billion-parameter NVLM-D-72B, can rival GPT-4o. However, Nvidia is positioning NVLM not as a direct competitor to other frontier-class LLMs, but as a platform on which other developers can create their own chatbots and applications.

A demonstration of Gemini Live on a Google Pixel 9.
Joe Maring / Digital Trends

Google’s Gemini Live now speaks nearly four dozen languages

Seems like being able to speak directly with your chatbot is the new must-have feature. Google announced this week that it is expanding Gemini Live to converse in nearly four dozen languages beyond English, starting with French, German, Portuguese, Hindi, and Spanish. Microsoft also revealed a similar feature for Copilot, dubbed Copilot Voice, that the company claims is “the most intuitive and natural way to brainstorm on the go.” They join ChatGPT’s Advanced Voice Mode and Meta’s Natural Voice Interactions in allowing users to talk with their phones, not just to them.

California Gov. Gavin Newsom speaking at a lectern.
Gage Skidmore / Flickr

California governor vetoes expansive AI safety bill

All the fighting over SB 1047, California’s Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, was for naught, as Gov. Gavin Newsom vetoed the AI safety bill this week. In a letter to lawmakers, he argued that the bill focused myopically on the largest of language models and that “smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047.”

The Ray-Ban Meta smart glasses next to a pool.
Phil Nickinson / Digital Trends

Hackers turn Meta smart glasses into automatic doxing machine

A pair of Harvard computer science students managed to modify a pair of commercially available Meta smart glasses so they can identify and look up anyone who walks into their field of vision, 404 Media reported this week. The glasses, part of the I-XRAY experiment, were designed to capture images of strangers on the street, run those images through the PimEyes facial recognition search engine to identify the subject, then use that basic information to look up the person’s personal details (such as their phone number and home address) on commercial data brokerage sites.


“To use it, you just put the glasses on, and then as you walk by people, the glasses will detect when somebody’s face is in frame,” the pair explained in a video demo posted to X. “After a few seconds, their personal information pops up on your phone.” The privacy implications of such a system are terrifying. The duo have no intention of publicly releasing the source code, but now that they’ve shown it can be done, there is little to prevent others from reverse-engineering it.
