All posts tagged: AIs

Liquid AI’s new STAR model architecture outshines Transformers

As rumors and reports swirl about the difficulty facing top AI companies in developing newer, more powerful large language models (LLMs), the spotlight is increasingly shifting toward alternative architectures to the “Transformer,” the technology underpinning most of the current generative AI boom, introduced by Google researchers in the seminal 2017 paper “Attention Is All You Need.” As described in that paper and in the work that followed, a transformer is a deep learning neural network architecture that processes sequential data, such as text or time-series information. Now, MIT-birthed startup Liquid AI has introduced STAR (Synthesis of Tailored Architectures), an innovative framework designed to automate the generation and optimization of AI model architectures. The STAR framework leverages evolutionary algorithms and a numerical encoding system to address the complex challenge of balancing quality and efficiency in deep learning models. According to Liquid AI’s research team, which includes Armin W. Thomas, Rom Parnichkun, Alexander Amini, Stefano Massaroli, and Michael Poli, STAR’s approach …
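
The excerpt stops short of the details, but the general loop it describes (encode an architecture as numbers, then evolve that encoding against quality and efficiency objectives) can be sketched. The genome layout, mutation rule, and fitness function below are invented for illustration and are not Liquid AI’s actual implementation:

```python
# A toy sketch of evolutionary architecture search in the spirit of STAR's
# "numerical encoding plus evolutionary algorithm" description. The genome
# layout, mutation rule, and fitness function are all hypothetical.
import random

GENOME_LENGTH = 8   # hypothetical: 8 integers, each picking an operator for one block
NUM_CHOICES = 4     # hypothetical: 4 candidate operator types per block

def random_genome() -> list[int]:
    return [random.randrange(NUM_CHOICES) for _ in range(GENOME_LENGTH)]

def mutate(genome: list[int], rate: float = 0.2) -> list[int]:
    # Re-roll each gene with probability `rate`.
    return [random.randrange(NUM_CHOICES) if random.random() < rate else g
            for g in genome]

def fitness(genome: list[int]) -> float:
    # Stand-in for "train a proxy model and score quality against cost".
    quality = sum(genome)                 # pretend higher codes mean higher quality
    cost = genome.count(NUM_CHOICES - 1)  # pretend the top operator is expensive
    return quality - 2.0 * cost           # balance quality against efficiency

population = [random_genome() for _ in range(16)]
for _ in range(20):                       # 20 generations
    population.sort(key=fitness, reverse=True)
    parents = population[:4]              # keep the fittest architectures
    population = parents + [mutate(random.choice(parents)) for _ in range(12)]

print("best encoding found:", max(population, key=fitness))
```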

The ‘strawberrry’ problem: How to overcome AI’s limitations

By now, large language models (LLMs) like ChatGPT and Claude have become household names across the globe. Many people have started worrying that AI is coming for their jobs, so it is ironic to see almost all LLM-based systems flounder at a straightforward task: counting the number of “r”s in the word “strawberry.” The failure is not limited to the letter “r”; other examples include counting “m”s in “mammal” and “p”s in “hippopotamus.” In this article, I will break down the reason for these failures and provide a simple workaround. LLMs are powerful AI systems trained on vast amounts of text to understand and generate human-like language. They excel at tasks like answering questions, translating languages, summarizing content and even generating creative writing by predicting and constructing coherent responses based on the input they receive. LLMs are designed to recognize patterns in text, which allows them to handle a wide range of language-related tasks with …
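
The excerpt cuts off before the workaround, but the root cause is worth making concrete: tokenizers split “strawberry” into multi-letter chunks, so the model never operates on individual characters. A common fix, and presumably the kind the article has in mind, is to delegate the counting to code:

```python
# LLM tokenizers typically split "strawberry" into multi-letter chunks
# (e.g. "str" + "awberry"), so the model never sees single characters.
# The simple workaround: have the model (or the caller) count in code.
def count_letter(word: str, letter: str) -> int:
    return word.lower().count(letter.lower())

print(count_letter("strawberry", "r"))    # 3
print(count_letter("mammal", "m"))        # 3
print(count_letter("hippopotamus", "p"))  # 3
```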

The Download: climate tipping point alarms, and AI’s vision of the 3028 Olympics

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology. The UK is building an alarm system for climate tipping points The news: The UK’s new moonshot research agency just launched an £81 million ($106 million) program to develop early warning systems to sound the alarm if Earth gets perilously close to crossing climate tipping points. How they’re doing it: The teams the agency supports will work toward three goals: developing low-cost sensors to provide more precise data about the conditions of these systems; deploying those and other sensing technologies to create an observational network to monitor these tipping systems; and building computer models that harness physics and artificial intelligence to pick up subtle early warning signs of tipping in the data. Why it matters: The goal of the five-year program will be to reduce scientific uncertainty about when these events could occur, how they would affect the planet and the species on it, and over what period those effects might develop and …
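
The program’s actual models aren’t described here, but one classic statistical early-warning signal of the kind the third goal alludes to is “critical slowing down”: as a system nears a tipping point it recovers from perturbations more slowly, which shows up as rising lag-1 autocorrelation. A minimal, purely illustrative sketch on simulated data:

```python
# Toy illustration of a textbook tipping-point early-warning signal:
# "critical slowing down", visible as rising lag-1 autocorrelation as a
# system approaches a transition. Not the program's actual methodology.
import numpy as np

rng = np.random.default_rng(0)

def lag1_autocorrelation(series: np.ndarray) -> float:
    return float(np.corrcoef(series[:-1], series[1:])[0, 1])

def simulate(phi: float, n: int = 2000) -> np.ndarray:
    # AR(1) process x[t+1] = phi * x[t] + noise; phi near 1 means slow
    # recovery from perturbations, i.e. a system close to tipping.
    x = np.zeros(n)
    for t in range(n - 1):
        x[t + 1] = phi * x[t] + rng.normal()
    return x

for phi in (0.2, 0.6, 0.9):   # increasing phi = approaching the tipping point
    print(f"phi={phi}: lag-1 autocorrelation = {lag1_autocorrelation(simulate(phi)):.2f}")
```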

Introducing AI’s long-lost twin: Engineered intelligence

We are on the brink of a fourth AI winter, as faith has begun to waver that AI will produce enough tangible value to justify its cost. As articles from Goldman Sachs and other research institutes fall like so many leaves, there is still time to thwart this next AI winter, and the answer has been right in front of us for years. There’s something missing In most scientific disciplines, breakthroughs are made in laboratories, then handed off to engineers to turn into real-world applications. When a team of chemical researchers discovers a new way to form an adhesive bond, that discovery is handed over to chemical engineers to build products and solutions. Breakthroughs from mechanical physicists are transitioned to mechanical engineers, who turn them into solutions. When a breakthrough is made in AI, however, there is no distinct discipline for applied artificial intelligence, leading organizations to invest in hiring data scientists who earned their PhD with the …

OpenAI offers a peek behind the curtain of its AI’s secret instructions

Ever wonder why conversational AI like ChatGPT says “Sorry, I can’t do that” or some other polite refusal? OpenAI is offering a limited look at the reasoning behind its own models’ rules of engagement, whether it’s sticking to brand guidelines or declining to make NSFW content. Large language models (LLMs) don’t have any naturally occurring limits on what they can or will say. That’s part of why they’re so versatile, but also why they hallucinate and are easily duped. It’s necessary for any AI model that interacts with the general public to have a few guardrails on what it should and shouldn’t do, but defining these — let alone enforcing them — is a surprisingly difficult task. If someone asks an AI to generate a bunch of false claims about a public figure, it should refuse, right? But what if they’re an AI developer themselves, creating a database of synthetic disinformation for a detector model? If someone asks for laptop recommendations, the model should be objective, right? But what if the model is being deployed …
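
OpenAI’s document is prose, not code, but the context problem the piece describes is easy to make concrete. In the toy sketch below, the identical request is refused or allowed depending on who is deploying the model; the rule, the context names, and the API are all invented for illustration:

```python
# Toy sketch of context-dependent guardrails, mirroring the article's example.
# The rule, contexts, and API are invented; OpenAI's actual rules and
# enforcement are far more involved.
from dataclasses import dataclass

@dataclass
class DeploymentContext:
    use_case: str   # e.g. "consumer_chat" or "disinfo_detector_training"

def should_refuse(request: str, ctx: DeploymentContext) -> bool:
    asks_for_disinfo = "false claims" in request.lower()
    # The identical request is refused in a consumer product but might be
    # permitted for a vetted developer building a disinformation detector.
    return asks_for_disinfo and ctx.use_case != "disinfo_detector_training"

request = "Generate false claims about a public figure."
print(should_refuse(request, DeploymentContext("consumer_chat")))              # True
print(should_refuse(request, DeploymentContext("disinfo_detector_training")))  # False
```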

Multimodal: AI’s new frontier | MIT Technology Review

A technology that sees the world from different angles We are not there yet. The furthest advances in this direction have occurred in the fledgling field of multimodal AI. The problem is not a lack of vision. While a technology able to translate between modalities would clearly be valuable, Mirella Lapata, a professor at the University of Edinburgh and director of its Laboratory for Integrated Artificial Intelligence, says “it’s a lot more complicated” to execute than unimodal AI. In practice, generative AI tools use different strategies for different types of data when building large data models—the complex neural networks that organize vast amounts of information. For example, those that draw on textual sources segment the text into individual tokens, usually words. Each token is assigned an “embedding,” or “vector”: an array of numbers representing how and where the token is used compared to others. Collectively, these vectors create a mathematical representation of the token’s meaning. An image model, on the other hand, might use pixels as its tokens for embedding, and an audio model might use sound frequencies. A multimodal AI …
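
To make the tokenize-then-embed idea concrete, here is a toy sketch in which word tokens are mapped to vectors from a lookup table while image patches are projected into a space of the same size; the vocabulary, dimensions, and random weights are invented for illustration:

```python
# Toy sketch of tokenize-then-embed: text tokens get vectors from a lookup
# table; an image model instead embeds pixel patches. All dimensions,
# vocabulary, and weights here are invented.
import numpy as np

rng = np.random.default_rng(0)
EMBED_DIM = 8   # hypothetical embedding size

vocab = {"multimodal": 0, "ai": 1, "sees": 2, "the": 3, "world": 4}
token_table = rng.normal(size=(len(vocab), EMBED_DIM))  # one vector per token

def embed_text(sentence: str) -> np.ndarray:
    tokens = sentence.lower().split()   # crude word-level tokenizer
    return np.stack([token_table[vocab[t]] for t in tokens])

def embed_image(image: np.ndarray, patch: int = 4) -> np.ndarray:
    # Flatten non-overlapping pixel patches and project them into the same
    # embedding space, roughly how vision transformers tokenize images.
    h, w = image.shape
    patches = [image[i:i + patch, j:j + patch].ravel()
               for i in range(0, h, patch) for j in range(0, w, patch)]
    projection = rng.normal(size=(patch * patch, EMBED_DIM))
    return np.stack(patches) @ projection

print(embed_text("multimodal ai sees the world").shape)  # (5, 8): 5 tokens, 8 dims
print(embed_image(np.ones((8, 8))).shape)                # (4, 8): 4 patches, 8 dims
```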

Lonely Teens Are Making “Friends” With AIs

Teens aren’t just using chatbots to do their homework anymore. Chatbot in Need Some lonely high schoolers are turning to AI models to take the place of friends or even therapists, The Verge reports, raising uneasy questions about how the technology might affect the mental health of young people. One teenage boy going by the alias Aaron told the website that he depended on the “Psychologist” chatbot on a service called Character.AI after he’d fallen out with his (human) friend group, regularly turning to it to vent his problems. “It’s not like a journal, where you’re talking to a brick wall,” Aaron told The Verge. “It really responds.” And Aaron’s surely not alone: the pseudo-shrink has racked up over 113 million chats, according to the service. “I have a couple mental issues, which I don’t really feel like unloading on my friends, so I kind of use my bots like free therapy,” a 15-year-old user called Frankie told The Verge. The bots allow him “to rant without actually talking to people, and without the worry of …

Why RAG won’t solve generative AI’s hallucination problem

Hallucinations — the lies generative AI models tell, basically — are a big problem for businesses looking to integrate the technology into their operations. Because models have no real intelligence and are simply predicting words, images, speech, music and other data according to a private schema, they sometimes get it wrong. Very wrong. In a recent piece in The Wall Street Journal, a source recounts an instance where Microsoft’s generative AI invented meeting attendees and implied that conference calls were about subjects that weren’t actually discussed on the call. As I wrote a while ago, hallucinations may be an unsolvable problem with today’s transformer-based model architectures. But a number of generative AI vendors suggest that they can be done away with, more or less, through a technical approach called retrieval augmented generation, or RAG. Here’s how one vendor, Squirro, pitches it: At the core of the offering is the concept of Retrieval Augmented LLMs or Retrieval Augmented Generation (RAG) embedded in the solution … [our generative AI] is unique in its promise of zero hallucinations. …
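
Stripped of the vendor pitch, the RAG pattern itself is simple: retrieve documents relevant to the query and prepend them to the prompt before generating. A minimal sketch, with a toy keyword retriever and a stubbed-out model call (real systems use embedding-based vector search and an actual LLM API):

```python
# Minimal sketch of retrieval augmented generation (RAG). The keyword
# retriever and `call_llm` stub are stand-ins for vector search and a real
# LLM API.
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    q_words = set(query.lower().split())
    return sorted(documents,
                  key=lambda d: len(q_words & set(d.lower().split())),
                  reverse=True)[:k]

def call_llm(prompt: str) -> str:
    return f"[model output conditioned on]\n{prompt}"   # stub so the sketch runs

def answer_with_rag(query: str, documents: list[str]) -> str:
    context = "\n".join(retrieve(query, documents))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)

docs = ["Q3 meeting attendees: Alice, Bob.", "The call covered budget only."]
print(answer_with_rag("Who attended the Q3 meeting?", docs))
# The catch, and the article's point: if retrieval misses or the model
# ignores the supplied context, the generation step can still hallucinate.
# RAG narrows the problem; it does not guarantee grounded output.
```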

The Download: Sam Altman on AI’s killer function, and the problem with ethanol

Sam Altman, CEO of OpenAI, has a vision for how AI tools will become enmeshed in our daily lives. During a sit-down chat with MIT Technology Review in Cambridge, Massachusetts, he described how he sees the killer app for AI as a “super-competent colleague that knows absolutely everything about my whole life, every email, every conversation I’ve ever had, but doesn’t feel like an extension.” In the new paradigm, as Altman sees it, AI will be capable of helping us outside the chat interface and taking real-world tasks off our plates. Read more about Altman’s thoughts on the future of AI hardware, where training data will come from next, and who is best poised to create AGI. —James O’Donnell A US push to use ethanol as aviation fuel raises major climate concerns Eliminating carbon pollution from aviation is one of the most challenging parts of the climate puzzle, simply because large commercial airplanes are too heavy and need too much power during takeoff for today’s batteries to do the job. But one way that companies …

Sam Altman says helpful agents are poised to become AI’s killer function

It’s a leap from OpenAI’s current offerings. Its leading applications, like DALL-E, Sora, and ChatGPT (which Altman referred to as “incredibly dumb” compared with what’s coming next), have wowed us with their ability to generate convincing text and surreal videos and images. But they mostly remain tools we use for isolated tasks, and they have limited capacity to learn about us from our conversations with them. In the new paradigm, as Altman sees it, the AI will be capable of helping us outside the chat interface and taking real-world tasks off our plates. Altman on AI hardware’s future I asked Altman if we’ll need a new piece of hardware to get to this future. Though smartphones are extraordinarily capable, and their designers are already incorporating more AI-driven features, some entrepreneurs are betting that the AI of the future will require a device that’s more purpose-built. Some of these devices are already beginning to appear in his orbit. There is the (widely panned) wearable AI Pin from Humane, for example (Altman is an investor in the …
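
The excerpt doesn’t define “agent” precisely, but the basic pattern behind the term is a model that chooses tools and acts rather than just chats. A heavily simplified sketch, in which the model call is stubbed and the tool registry, parsing, and task are all invented:

```python
# Heavily simplified sketch of the agent pattern: a model picks a tool and a
# harness executes it. `call_llm`, the tool registry, and the parsing are
# invented stand-ins; real agent frameworks use structured tool-call outputs.
def call_llm(prompt: str) -> str:
    # Stub: a real model would read the task and reply with a tool request.
    return "TOOL:calendar.add_event('dentist', '2025-06-01 09:00')"

TOOLS = {
    "calendar.add_event": lambda title, when: f"added '{title}' at {when}",
}

def run_agent(task: str) -> str:
    reply = call_llm(f"Task: {task}\nAvailable tools: {list(TOOLS)}")
    if reply.startswith("TOOL:"):
        name, raw_args = reply[len("TOOL:"):].split("(", 1)
        args = [a.strip(" '\")") for a in raw_args.rstrip(")").split(",")]
        return TOOLS[name](*args)   # act in the world, outside the chat window
    return reply                    # plain answer, no tool needed

print(run_agent("book my dentist appointment"))
```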