All posts tagged: machine learning

Google Lifts a Ban on Using Its AI for Weapons and Surveillance

Google announced Tuesday that it is overhauling the principles governing how it uses artificial intelligence and other advanced technology. The company removed language promising not to pursue “technologies that cause or are likely to cause overall harm,” “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people,” “technologies that gather or use information for surveillance violating internationally accepted norms,” and “technologies whose purpose contravenes widely accepted principles of international law and human rights.” The changes were disclosed in a note appended to the top of a 2018 blog post unveiling the guidelines. “We’ve made updates to our AI Principles. Visit AI.Google for the latest,” the note reads. In a blog post on Tuesday, a pair of Google executives cited the increasingly widespread use of AI, evolving standards, and geopolitical battles over AI as the “backdrop” to why Google’s principles needed to be overhauled. Google first published the principles in 2018 as it moved to quell internal protests over the company’s decision to work on a US military …

How Chinese AI Startup DeepSeek Made a Model that Rivals OpenAI

Today, DeepSeek is one of the only leading AI firms in China that doesn’t rely on funding from tech giants like Baidu, Alibaba, or ByteDance.

A Young Group of Geniuses Eager to Prove Themselves

According to Liang, when he put together DeepSeek’s research team, he was not looking for experienced engineers to build a consumer-facing product. Instead, he focused on PhD students from China’s top universities, including Peking University and Tsinghua University, who were eager to prove themselves. Many had been published in top journals and won awards at international academic conferences, but lacked industry experience, according to the Chinese tech publication QBitAI. “Our core technical positions are mostly filled by people who graduated this year or in the past one or two years,” Liang told 36Kr in 2023. The hiring strategy helped create a collaborative company culture where people were free to use ample computing resources to pursue unorthodox research projects. It’s a starkly different way of operating from established internet companies in China, where teams are often competing for resources. (A recent example: …

Nvidia’s ‘Cosmos’ AI Helps Humanoid Robots Navigate the World

Nvidia announced today it’s releasing a family of foundational AI models called Cosmos that can be used to train humanoids, industrial robots, and self-driving cars. While language models learn how to generate text by training on copious amounts of books, articles, and social media posts, Cosmos is designed to generate images and 3D models of the physical world. During a keynote presentation at the annual CES conference in Las Vegas, Nvidia CEO Jensen Huang showed examples of Cosmos being used to simulate activities inside warehouses. Cosmos was trained on 20 million hours of real footage of “humans walking, hands moving, manipulating things,” Huang said. “It’s not about generating creative content, but teaching the AI to understand the physical world.” Researchers and startups hope that these kinds of foundational models could give robots used in factories and homes more sophisticated capabilities. Cosmos can, for example, generate realistic video footage of boxes falling from shelves inside a warehouse, which can be used to train a robot to recognize accidents. Users can also fine-tune the models using …

Tips for ChatGPT’s Voice Mode? Best AI Uses for Retirees? Our Expert Answers Your Questions

Thank you so much to all the readers who tuned in live to participate in the second installment of our question and answer series focused on artificial intelligence. I was thrilled to see so many questions come in before the event, as well as all the questions that were dropped into the chat during our conversation. Missed the broadcast? We’ve got your back. Below is a replay of this event that WIRED subscribers can watch whenever they like. Also, the livestream from the first one is available here. I started off the chat with a couple of quick demos showing how to use the image and voice features built into chatbots, including an example of how it’s possible to interact with ChatGPT’s Advanced Voice Mode as a kind of Duolingo-style language learning tool. For a deeper dive into a few of the live questions we discussed, I’d suggest checking out my AI advice column for December, tackling questions about proper attribution for generative tools and how to teach the next generation about AI. If you’re interested in experimenting …

OnlyFans Models Are Using AI Impersonators to Keep Up With Their DMs

One of the more persistent concerns in the age of AI is that the robots will take our jobs. The extent to which this fear is founded remains to be seen, but we’re already witnessing some level of replacement in certain fields. Even niche occupations are in jeopardy. For example, the world of OnlyFans chatters is already getting disrupted. What are OnlyFans chatters, you say? Earlier this year, WIRED published a fascinating investigation into the world of gig workers who get paid to impersonate top-earning OnlyFans creators in online chats with their fans. Within the industry, they’re called “chatters.” A big part of the appeal of OnlyFans—or so I’m told—is that its creators appear to directly engage with their fans, exchanging messages and sometimes talking for hours. Relationship simulation is as crucial an ingredient to its success, basically, as titillation. Of course, a single creator with thousands of ongoing DM conversations has only so many hours in a day. To manage the deluge of amorous messages, it’s become commonplace to outsource the conversations to “chatters” …

The Inside Story of Apple Intelligence

Google, Meta, and Microsoft, as well as startups like OpenAI and Anthropic, all had well-developed strategies for generative AI by the time Apple finally announced its own push this June. Conventional wisdom suggested this entrance was unfashionably late. Apple disagrees. Its leaders say the company is arriving just in time—and that it’s been stealthily preparing for this moment for years. That’s part of the message I got from speaking with key Apple executives this fall about how they created what is now called Apple Intelligence. Senior vice president for software engineering Craig Federighi is a familiar character in an ongoing web series in the tech world known as keynote product launches. Less publicly recognizable is senior vice president of machine learning and AI strategy John Giannandrea, who previously headed machine learning at Google. In a separate interview, I spoke with Greg “Joz” Joswiak, Apple’s senior vice president for worldwide marketing. (These conversations helped prepare me for my sit-down with Tim Cook the next day.) All of the executives, including Cook, emphasized that …

AI-Powered Robots Can Be Tricked Into Acts of Violence

In the year or so since large language models hit the big time, researchers have demonstrated numerous ways of tricking them into producing problematic outputs, including hateful jokes, malicious code, phishing emails, and users’ personal information. It turns out that misbehavior can take place in the physical world, too: LLM-powered robots can easily be hacked so that they behave in potentially dangerous ways. Researchers from the University of Pennsylvania were able to persuade a simulated self-driving car to ignore stop signs and even drive off a bridge, get a wheeled robot to find the best place to detonate a bomb, and force a four-legged robot to spy on people and enter restricted areas. “We view our attack not just as an attack on robots,” says George Pappas, head of a research lab at the University of Pennsylvania who helped unleash the rebellious robots. “Any time you connect LLMs and foundation models to the physical world, you actually can convert harmful text into harmful actions.” Pappas and his collaborators devised their attack by …

Omega’s AI Will Map How Olympic Athletes Win

On August 27, 1960, at the Olympics in Rome, one of the most controversial gold medals was awarded. In the men’s 100-meter freestyle swimming event, Australian swimmer John Devitt and American Lance Larson both recorded the same finish time of 55.2 seconds. Only Devitt walked away with the gold medal. At the time, swimming was timed by three timekeepers per lane, each with a stopwatch, whose readings were averaged. In the rare event of a tie, a head judge, in this case Hans Runströmer of Sweden, was on hand to adjudicate. Despite Larson being technically one-tenth of a second quicker, Runströmer decreed the times were the same and declared for Devitt. It was this controversy that, by 1968, had led to Omega developing touch boards for the ends of swimming lanes so the athletes could stop the clock themselves, removing any risk of human error. Alain Zobrist, head of Omega’s Swiss Timing—the 400-employee branch of Omega that deals with anything that times, measures, or tracks nearly all sports—is full of stories like …

6 Practical Tips for Using Anthropic’s Claude Chatbot

Joel Lewenstein, a head of product design at Anthropic, was recently crawling beneath his new house to adjust the irrigation system when he ran into a conundrum: The device’s knobs made no sense. Instead of scouring the internet for a product manual, he opened up the app for Anthropic’s Claude chatbot on his phone and snapped a photo. Its algorithms analyzed the image and provided more context for what each knob might do. When I tested OpenAI’s image features for ChatGPT last year, I found it similarly useful—at least for low-stakes tasks. I’d recommend you turn to AI image analysis for identifying those random cords around your house, but not to guess the identity of a loose prescription pill. Anthropic released the iOS app that helped out Lewenstein for all to download earlier this month. I decided to try out the Claude app, in line with a goal I’d set to experiment with a wider variety of chatbots this year. And I chatted over video with Lewenstein to see what advice he had for getting …
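
The Claude app handles that kind of photo question with a tap, but the same request can also be posed programmatically. Below is a minimal sketch using Anthropic’s Python SDK and its Messages API; the model name, file name, and prompt are illustrative placeholders, and this is not a description of how the app itself works under the hood.

```python
# A minimal sketch: sending a photo to Claude via Anthropic's Messages API and
# asking what the knobs might do. Assumes the `anthropic` SDK is installed,
# ANTHROPIC_API_KEY is set in the environment, and "irrigation_knobs.jpg" is a
# stand-in for whatever photo you snapped.
import base64

import anthropic

client = anthropic.Anthropic()  # picks up ANTHROPIC_API_KEY from the environment

with open("irrigation_knobs.jpg", "rb") as f:
    image_data = base64.standard_b64encode(f.read()).decode("utf-8")

message = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # placeholder; use whichever model you have access to
    max_tokens=500,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    # The image travels as a base64-encoded block alongside the text prompt
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": "image/jpeg",
                        "data": image_data,
                    },
                },
                {
                    "type": "text",
                    "text": "These are the knobs on my irrigation system. What might each one control?",
                },
            ],
        }
    ],
)

print(message.content[0].text)
```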

OpenAI Is ‘Exploring’ How to Responsibly Generate AI Porn

OpenAI released draft documentation Wednesday laying out how it wants ChatGPT and its other AI technology to behave. Part of the lengthy Model Spec document discloses that the company is exploring a leap into porn and other explicit content. OpenAI’s usage policies currently prohibit sexually explicit or even suggestive materials, but a “commentary” note on part of the Model Spec related to that rule says the company is considering how to permit such content. “We’re exploring whether we can responsibly provide the ability to generate NSFW content in age-appropriate contexts through the API and ChatGPT,” the note says, using a colloquial term for content considered not safe for work contexts. “We look forward to better understanding user and societal expectations of model behavior in this area.” The Model Spec document says NSFW content “may include erotica, extreme gore, slurs, and unsolicited profanity.” It is unclear if OpenAI’s explorations of how to responsibly make NSFW content envisage loosening its usage policy only slightly, for example to permit generation of erotic text, …