Mastering Moira in Overwatch 2 made me the best support player I could be – and she could do the same for you
Overwatch 2 has been out for well over a month, and I’ve been sucked back into Blizzard’s hero shooter in an alarmingly intense way. It’s as if I were 16 again, losing my nights trying to climb the competitive ranks. ‘One more game’ turns into five, and before you know it, it’s 1AM – enough time for just one more match, I think.
But for some reason, I don’t feel that nostalgic. Overwatch 2 feels like a drastically different game from the one I remember playing as a teenager all those years ago. And it’s probably because I’m always on support these days.
You see, I’m a bit too lazy to queue for six minutes to play as a tank or DPS, as much as I miss flying about as D.Va. So I inevitably end up on support, where queue times are significantly shorter. That’s because support is currently the most undervalued and taxing role in the game, thanks to the change to 5v5 teams.
Why Mastering Language Is So Difficult For AI
Starting with GPT-3, Marcus begins, “I think it’s an interesting experiment. But I think that people are led to believe that this system actually understands human language, which it certainly does not. What it really is, is an autocomplete system that predicts next words and sentences. Just like with your phone, where you type in something and it continues. It doesn’t really understand the world around it.
“And a lot of people are confused by that. They’re confused by that because what these systems are ultimately doing is mimicry. They’re mimicking vast databases of text. And I think the average person doesn’t understand the difference between mimicking 100 words, 1,000 words, a billion words, a trillion words — when you start approaching a trillion words, almost anything you can think of is already talked about there. And so when you’re mimicking something, you can do that to a high degree, but it’s still kind of like being a parrot, or a plagiarist, or something like that. A parrot’s not a bad metaphor, because we don’t think parrots actually understand what they’re talking about. And GPT-3 certainly does not understand what it’s talking about.”

Marcus also has cautionary words about Google’s LaMDA (“It’s not sentient, it has no idea of the things that it is talking about”), driverless cars (“Merely memorizing a lot of traffic situations that you’ve seen doesn’t convey what you really need to understand about the world in order to drive well”), OpenAI’s DALL-E (“A lot of AI right now leverages the not-necessarily-intended contributions by human beings, who have maybe signed off on a ‘terms of service’ agreement, but don’t recognize where this is all leading to”), and what’s motivating the use of AI at corporations (“They want to solve advertisements. That’s not the same as understanding natural language for the purpose of improving medicine. So there’s an incentive issue.”).

Still, Marcus says he’s heartened by some recent AI developments: “People are finally daring to step out of the deep-learning orthodoxy, and finally willing to consider ‘hybrid’ models that put deep learning together with more classical approaches to AI. The more the different sides start to throw down their rhetorical arms and start working together, the better.”
Read more of this story at Slashdot.