All posts tagged: algorithms

OpenAI’s Deep Research Agent Is Coming for White-Collar Work

Isla Fulford, a researcher at OpenAI, had a hunch that Deep Research would be a hit even before it was released. Fulford had helped build the artificial intelligence agent, which autonomously explores the web, deciding for itself which links to click, what to read, and what to collate into an in-depth report. OpenAI first made Deep Research available internally; whenever it went down, Fulford says, she was inundated with queries from colleagues eager to have it back. “The number of people who were DMing me made us pretty excited,” says Fulford. Since going live to the public on February 2, Deep Research has proven to be a hit with many users outside the company too. “Deep Research has written 6 reports so far today,” Patrick Collison, the CEO of Stripe, posted on X a few days after the product was released. “It is indeed excellent. Congrats to the folks behind it.” “Deep Research is the AI product that really got a meaningful chunk of the policymaking community in DC to start feeling the AGI,” wrote …

Google Lifts a Ban on Using Its AI for Weapons and Surveillance

Google announced Tuesday that it is overhauling the principles governing how it uses artificial intelligence and other advanced technology. The company removed language promising not to pursue “technologies that cause or are likely to cause overall harm,” “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people,” “technologies that gather or use information for surveillance violating internationally accepted norms,” and “technologies whose purpose contravenes widely accepted principles of international law and human rights.” The changes were disclosed in a note appended to the top of a 2018 blog post unveiling the guidelines. “We’ve made updates to our AI Principles. Visit AI.Google for the latest,” the note reads. In a blog post on Tuesday, a pair of Google executives cited the increasingly widespread use of AI, evolving standards, and geopolitical battles over AI as the “backdrop” to why Google’s principles needed to be overhauled. Google first published the principles in 2018 as it moved to quell internal protests over the company’s decision to work on a US military …

Dictatorships Will Be Vulnerable to Algorithms

AI is often considered a threat to democracies and a boon to dictators. In 2025 it is likely that algorithms will continue to undermine the democratic conversation by spreading outrage, fake news, and conspiracy theories. In 2025 algorithms will also continue to expedite the creation of total surveillance regimes, in which the entire population is watched 24 hours a day. Most importantly, AI facilitates the concentration of all information and power in one hub. In the 20th century, distributed information networks like the USA functioned better than centralized information networks like the USSR, because the human apparatchiks at the center just couldn’t analyze all the information efficiently. Replacing apparatchiks with AIs might make Soviet-style centralized networks superior. Nevertheless, AI is not all good news for dictators. First, there is the notorious problem of control. Dictatorial control is founded on terror, but algorithms cannot be terrorized. In Russia, the invasion of Ukraine is defined officially as a “special military operation,” and referring to it as a “war” is a crime punishable by up to three years …

Tips for ChatGPT’s Voice Mode? Best AI Uses for Retirees? Our Expert Answers Your Questions

Thank you so much to all the readers who tuned in live to participate in the second installment of our question-and-answer series focused on artificial intelligence. I was thrilled to see so many questions come in before the event, as well as all the questions that were dropped into the chat during our conversation. Missed the broadcast? We’ve got your back. Below is a replay of the event that WIRED subscribers can watch whenever they like. The livestream from the first installment is also available here. I started off the chat with a couple of quick demos showing how to use the image and voice features built into chatbots, including an example of how it’s possible to use ChatGPT’s Advanced Voice Mode as a kind of Duolingo-style language-learning tool. For a deeper dive into a few of the live questions we discussed, I’d suggest checking out my AI advice column for December, which tackles questions about proper attribution for generative tools and how to teach the next generation about AI. If you’re interested in experimenting …

OnlyFans Models Are Using AI Impersonators to Keep Up With Their DMs

One of the more persistent concerns in the age of AI is that the robots will take our jobs. The extent to which this fear is founded remains to be seen, but we’re already witnessing some level of replacement in certain fields. Even niche occupations are in jeopardy. For example, the world of OnlyFans chatters is already getting disrupted. What are OnlyFans chatters, you say? Earlier this year, WIRED published a fascinating investigation into the world of gig workers who get paid to impersonate top-earning OnlyFans creators in online chats with their fans. Within the industry, they’re called “chatters.” A big part of the appeal of OnlyFans—or so I’m told—is that its creators appear to directly engage with their fans, exchanging messages and sometimes talking for hours. Relationship simulation is as crucial an ingredient to its success, basically, as titillation. Of course, a single creator with thousands of ongoing DM conversations has only so many hours in a day. To manage the deluge of amorous messages, it’s become commonplace to outsource the conversations to “chatters” …

Recently Published Book Spotlight: Algorithms of Anxiety

Anthony Elliott is Distinguished Professor of Sociology at the University of South Australia, where he is Executive Director of the Jean Monnet Centre of Excellence in Digital Transformation. His research focuses on the digital revolution, lifestyle change, and social theory. He is the author and editor of some 50 books translated into 17 languages, and The New Republic has described his research breakthroughs as “thought-provoking and disturbing.” In this Book Spotlight, Elliott discusses his recent book Algorithms of Anxiety: Fear in the Digital Age, how the book connects with his broader intellectual project, the philosophers and social theorists who influenced his writing of the book, and the impact he hopes the work will have in policy circles as well as in business and industry. What is your work about and how does it connect with your larger research project? My recent work confronts the digital revolution, namely artificial intelligence. My argument, broadly speaking, is that the rise of AI is generating a new sense of personal identity—“new subjects” are demanded and delivered in the age of advanced machine intelligence. These “new subjects” …

Risk algorithm used widely in US courts is harsher than human judges

Judges can use algorithms to help make their decisions. (Image: Frances Twitty/Getty Images)

A US courtroom experiment suggests a popular risk assessment algorithm makes harsher recommendations than human judges – possibly because it is worse than people at anticipating which defendants will violate pretrial agreements. “Some jurisdictions wanted to work with us to evaluate whether these recommendations are actually helping judges make a better decision,” says Kosuke Imai at Harvard University. In the US criminal justice system, judges determine whether defendants will await trial at home or in…

Elon Musk’s Criticism of ‘Woke AI’ Suggests ChatGPT Could Be a Trump Administration Target

Mittelsteadt adds that Trump could punish companies in a variety of ways. He cites, for example, the way the Trump administration canceled a major federal contract with Amazon Web Services, a decision likely influenced by the former president’s view of the Washington Post and its owner, Jeff Bezos. It would not be hard for policymakers to point to evidence of political bias in AI models, even if it cuts both ways. A 2023 study by researchers at the University of Washington, Carnegie Mellon University, and Xi’an Jiaotong University found a range of political leanings in different large language models. It also showed how this bias may affect the performance of hate speech or misinformation detection systems. Another study, conducted by researchers at the Hong Kong University of Science and Technology, found bias in several open source AI models on polarizing issues such as immigration, reproductive rights, and climate change. Yejin Bang, a PhD candidate involved with the work, says that most models tend to lean liberal and US-centric, but that the same models can express …

Want to go viral this #Halloween? It’s all about tapping into fun, fears and algorithms

Here they come: an apron and tattoos that make you look like chef Carmy from The Bear, or weird insect-like accessories resembling the infamous Paris Fashion Week bedbugs – new year, new over-the-top inventive Halloween trends. Thanks to the proliferation of social media platforms like TikTok and Instagram, we’re in for a treat with this year’s online Halloween extravaganza. What used to be a traditional holiday – celebrated with reverence by those who remembered the religious meaning of All Hallows’ Eve, or treated simply as an excuse for phantasmagorical parties by those who didn’t – now exhibits a whole new digital layer. Last year, the hashtag #Halloween was viewed three billion times in a week. We live in a time of “information fatigue”, “information anxiety” or even “infobesity”, as some academics call our oversaturated media environment, with plentiful, often unpleasant stimuli coming from the news and social media …

Hacking Generative AI for Fun and Profit

You hardly need ChatGPT to generate a list of reasons why generative artificial intelligence is often less than awesome. The way algorithms are fed creative work, often without permission, harbor nasty biases, and require huge amounts of energy and water for training are all serious issues. Putting all that aside for a moment, though, it is remarkable how powerful generative AI can be for prototyping potentially useful new tools. I got to witness this firsthand by visiting Sundai Club, a generative AI hackathon that takes place one Sunday each month near the MIT campus. A few months ago, the group kindly agreed to let me sit in and chose to spend that session exploring tools that might be useful to journalists. The club is backed by a Cambridge nonprofit called Æthos that promotes socially responsible use of AI. The Sundai Club crew includes students from MIT and Harvard, a few professional developers and product managers, and even one person who works for the military. Each event starts with a brainstorm of possible projects that the …