Imagine trying to review a machine that, every time you pressed a button or a key, touched its screen, or tried to take a photo with it, responded in a unique way, both predictive and unpredictable, influenced by the output of every other technological device in the world. The innards of the product are partially secret. The manufacturer tells you it is still an experiment, a work in progress, but that you should use it anyway and send feedback. Maybe you’ll even pay to use it. Because, despite its general unreadiness, this thing is going to change the world, they say.
This is not a traditional WIRED product review. This is a comparative look at three new artificially intelligent software tools that are reshaping the way we access information online: OpenAI’s ChatGPT, Microsoft’s Bing Chat, and Google’s Bard.
For the past three decades, browsing the web or using a search engine has meant typing in chunks of data and receiving mostly static responses. It’s been a fairly reliable input-output relationship, one that has grown more complex as advanced artificial intelligence (and data monetization schemes) have entered the chat. Now, the next wave of generative AI is enabling a new paradigm: computer interactions that feel more like human conversations.
But these are not really humanistic conversations. Chatbots do not have the well-being of humans in mind. When we use generative AI tools, we are talking to language-learning machines, created by even larger metaphorical machines. The responses we get from ChatGPT, Bing Chat, or Google Bard are predictive responses generated from data corpora that reflect the language of the internet. These chatbots are powerfully interactive, intelligent, creative, and sometimes even funny. They’re also charming little liars: the data sets they’re trained on are full of bias, and some of the answers they spit out, with such apparent authority, are absurd, offensive, or just plain wrong.
You’re probably going to use generative AI in some way, if you haven’t already. It’s pointless to suggest that you never use these chat tools, just as I can’t go back 25 years and tell you whether to try Google, or go back 15 years and tell you whether to buy an iPhone.
But as I write this, over a period of about a week, generative AI technology has already changed. The prototype has left the garage, launched without any industry-standard guardrails, so it’s crucial to have a framework for understanding how these tools work, how to think about them, and whether to trust them.
Talking about generative AI
When you use OpenAI’s ChatGPT, Microsoft’s Bing Chat, or Google Bard, you’re leveraging software that uses large, complex language models to predict the next word or series of words the software should spit out. Artificial intelligence technologists and researchers have been working on this technology for years, and the voice assistants we all know (Siri, Google Assistant, Alexa) were already showing the potential of natural language processing. But OpenAI opened the floodgates when it released the remarkably fluent ChatGPT in late 2022. Virtually overnight, the powers of “AI” and “large language models” went from abstract to comprehensible.
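To make the “predict the next word” idea concrete, here is a deliberately tiny sketch of the sampling step at the heart of these tools. This is not how any of the three products is actually implemented; the vocabulary, probabilities, and function names below are invented for illustration. A real large language model computes a probability for every word in a huge vocabulary using billions of learned parameters; this toy version hard-codes one distribution and samples from it.

```python
import random

# Invented toy "model": for one prompt, a made-up probability
# for each candidate next word. A real LLM computes these scores
# over an enormous vocabulary instead of looking them up.
TOY_MODEL = {
    "the cat sat on the": {"mat": 0.7, "sofa": 0.2, "roof": 0.1},
}

def predict_next_word(prompt, temperature=1.0, seed=None):
    """Sample one next word from the toy model's distribution.

    Lower temperature sharpens the distribution toward the most
    likely word; higher temperature makes output more varied --
    the same knob chat products expose as "creativity".
    """
    probs = TOY_MODEL[prompt]
    words = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    rng = random.Random(seed)
    return rng.choices(words, weights=weights, k=1)[0]

# Generating text is just repeating this step: sample a word,
# append it to the prompt, and predict again.
word = predict_next_word("the cat sat on the")
```

The key point the sketch illustrates: the output is drawn from a probability distribution, not retrieved as a fact, which is why the same question can get different answers and why a confident-sounding answer can still be wrong.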
Microsoft, which has invested billions of dollars in OpenAI, soon followed with Bing Chat, which uses ChatGPT technology. And then last week, Google began giving a limited number of people access to Google Bard, which is based on Google’s own technology, LaMDA, short for Language Model for Dialogue Applications.