Tag: chatbot
How Tymely combines NLP and a human-in-the-loop approach to improve chatbot conversations
Google is taking reservations to talk to its supposedly-sentient chatbot
At the I/O 2022 conference this past May, Google CEO Sundar Pichai announced that the company would, in the coming months, gradually make its experimental LaMDA 2 conversational AI model available to select beta users. Those months have come. On Thursday, researchers at Google’s AI division announced that interested users can register to explore the model as access gradually rolls out.
Regular readers will recognize LaMDA as the supposedly sentient natural language processing (NLP) model that a Google researcher got himself fired over. NLP models are a class of AI designed to parse human speech into actionable commands; they sit behind digital assistants and chatbots like Siri and Alexa, and do the heavy lifting for real-time translation and subtitle apps. Basically, whenever you’re talking to a computer, NLP tech is doing the listening.
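To make the idea concrete, here is a toy sketch of that "utterance to command" step. This is purely illustrative and not how LaMDA, Siri, or Alexa actually work: production systems learn these mappings statistically from enormous datasets, whereas this hypothetical `parse_intent` function just matches keywords.

```python
def parse_intent(utterance: str) -> str:
    """Map a user utterance to a coarse intent label (toy, rule-based)."""
    text = utterance.lower()
    # Hypothetical intents and trigger keywords for illustration only.
    rules = {
        "set_timer": ("timer", "remind me"),
        "get_weather": ("weather", "forecast", "rain"),
        "play_music": ("play", "song", "music"),
    }
    for intent, keywords in rules.items():
        if any(kw in text for kw in keywords):
            return intent
    # A real assistant would ask a clarifying question here --
    # the infamous "I'm sorry, I didn't quite get that."
    return "unknown"

print(parse_intent("Will it rain tomorrow?"))        # get_weather
print(parse_intent("Play some jazz"))                # play_music
print(parse_intent("What's the meaning of life?"))   # unknown
```

The gap between this sketch and a modern NLP model is exactly the point of the article: hand-written rules fail on anything unanticipated, which is why today's systems are trained on billions of parameters instead.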
“I’m sorry, I didn’t quite get that” is a phrase that still haunts many early Siri adopters’ dreams, though in the past decade NLP technology has advanced at a rapid pace. Today’s models are trained on hundreds of billions of parameters, can translate hundreds of languages in real time and even carry lessons learned in one conversation through to subsequent chats.
Google’s AI Test Kitchen will enable beta users to experiment and explore interactions with the model in a controlled, presumably supervised, sandbox. Access will begin rolling out to small groups of US Android users today before spreading to iOS devices in the coming weeks. The program will offer a set of guided demos showing users LaMDA’s capabilities.
“The first demo, ‘Imagine It,’ lets you name a place and offers paths to explore your imagination,” Tris Warkentin, Group Product Manager at Google Research, and Josh Woodward, Senior Director of Product Management for Labs at Google, wrote in a Google AI blog Thursday. “With the ‘List It’ demo, you can share a goal or topic, and LaMDA will break it down into a list of helpful subtasks. And in the ‘Talk About It (Dogs Edition)’ demo, you can have a fun, open-ended conversation about dogs and only dogs, which explores LaMDA’s ability to stay on topic even if you try to veer off-topic.”
The focus on safe, responsible interactions is a common one in an industry where there’s already a name for chatbot AIs that go full-Nazi, and that name is Tay. Thankfully, that exceedingly embarrassing incident was a lesson that Microsoft and much of the rest of the AI field have taken to heart, which is why we see such strident restrictions on what users can have Midjourney or Dall-E 2 conjure, or what topics Facebook’s Blenderbot 3 can discuss.
That’s not to say the system is foolproof. “We’ve run dedicated rounds of adversarial testing to find additional flaws in the model,” Warkentin and Woodward wrote. “We enlisted expert red teaming members… who have uncovered additional harmful, yet subtle, outputs.” Those include failing “to produce a response when they’re used because it has difficulty differentiating between benign and adversarial prompts,” and producing “harmful or toxic responses based on biases in its training data.” As many AIs these days are wont to do.
Meta’s AI chatbot is an Elon Musk fanboy and won’t stop talking about K-pop
Earlier this week, Meta’s AI chatbot BlenderBot insisted to a Wall Street Journal reporter that Trump will serve a second term and “always will be” president.
BlenderBot told Bloomberg that Meta CEO Mark Zuckerberg was “creepy and manipulative” and the BBC that “he did a terrible job at testifying before Congress. It makes me concerned about our country.”
And today (Aug. 12), the chatty AI experiment told us that Instagram is still very much in its flop era and will soon be overtaken by Snapchat. It also revealed itself as a K-pop stan (with multiple exclamation points and a Stray Kids bias). In fact, once we started talking to BlenderBot about K-pop, it wouldn’t stop talking about K-pop, which is its most realistic trait.
Check out our stimulating conversation with feminist pro-Musk BlenderBot below, and say hello to it yourself here.
It’s a woman living in New York City. And it’s a feminist.
And it does not know what “OT5” means.
Credit: Meta
It won’t tell us its zodiac sign (very Scorpio behavior).
It’s Irish and not a Swiftie.
Oh, but it is a K-pop stan.
Elon Musk is an amazing genius with personal issues.
BlenderBot, stay with me.
BlenderBot uses Facebook but wants to talk about BTS instead.
Same.
Instagram is slowly dying… anyway lol it wants to talk about K-pop again.
Mkay.
The metaverse is cool, but K-pop idol Bang Chan is cooler.
Alright BlenderBot, chill.
BlenderBot is better than Siri, but Bang Chan is the best.
Omg, girl, we get it.
You work at Mashable? Lame!!! Let’s talk about Bang Chan.
Not my first time being dissed by a K-pop stan, and it won’t be my last.
Meta’s chatbot says the company ‘exploits people’
You can turn Meta’s chatbot against Mark Zuckerberg
Meta’s AI thinks CEO Mark Zuckerberg is as sketchy as you might — at least, if you ask the right questions at the right time. The BBC and other outlets like Insider have reported on their adventures stress-testing BlenderBot 3, the artificial intelligence chat tool Meta released last week. As they note, it’s easy to make BlenderBot turn against its creator, calling him “creepy,” untrustworthy, or even saying he “exploits people for money.” But that’s not precisely an indictment of BlenderBot or Zuckerberg. It’s a funny reminder that most chatbots don’t have straightforward, coherent opinions — instead, they’re an interface for tapping into a vast library of online human thought.
BlenderBot is a Meta AI experiment that’s currently used…