Tag: human
Knuckles gets into VR and bullies his human friend in the first trailer for his Paramount+ show
The first trailer for the Knuckles-focused Sonic spinoff series that no one asked for is here, and it actually looks kind of cute and badass.
What if Knuckles (voiced by the mighty Idris Elba) was Sonic’s housemate? What if Knuckles also had a dumb human friend he wanted to train as an Echidna warrior? If those questions sound enticing to you, you’re in luck. The Knuckles series looks exactly as silly and slightly dated as the two (very successful) Sonic the Hedgehog flicks that Paramount has put out so far, but maybe that’s what we need right now.
The first-ever trailer for the Paramount+ series is actually quite generous, and introduces us to Sonic, Tails, and Knuckles’ post-Sonic 2 day-to-day, meaning this spinoff show will most likely tie directly into the third movie, which comes out this Christmas and will break the Internet with live-action Shadow, and probably also the child whose shooting sends Shadow on a warpath.
Fast food chain Wendy’s is planning to trial an AI chatbot in place of human staff
Labour to make working from home a ‘human right’ as part of election manifesto
Inside ‘horror film’ UK island littered with coffins & human remains where visitors are banned
TAKE a look inside the horror film-like UK island which is littered with coffins and human remains – and which people are banned from visiting.
In grisly scenes there are skulls complete with teeth, a jawbone and other human body parts piled up in the eerie stretch of land on Kent’s River Medway.
Known as Deadman’s Island, it has long been the subject of gruesome tales with some locals even believing the dead whisper in the night and red-eyed devil dogs roam the land.
Though it looks like it could come straight out of a horror film, the truth behind the creepy area was revealed back in 2017.
More than 200 years ago, the island was used as a burial ground for convicts who died aboard prison ships.
Thanks to sea erosion, the grim remains can now be found dotted about the surface.
An investigative team dove deeper into the history for a BBC show six years ago.
Director Sam Supple said: “It is like being on the set of a horror film.
“It looks so surreal, it’s like an art department has designed it.
“There are open coffins and bones everywhere.”
The land is only accessible by boat and is out of bounds to the public.
Presenter Natalie Graham said: “What I saw there will stay with me forever.
“The island was covered with human remains.
“The remains, buried 200 years ago, are now being exposed to the elements as nature takes its course.
“This is a really strange sight.
“I would imagine there can’t be anywhere on earth like this.”
Human bones are littered among the shells, while coffins that were once six feet under have risen to the surface, threatening to expose their contents.
The bodies come from prison ships, known as hulks, moored on the Medway and Thames in the 18th and 19th centuries.
The former warships had names such as Retribution and Captivity.
One estimate puts the number of Royal Navy prison ships in the 18th and 19th centuries at 40, including one off Gibraltar, others in Bermuda, and more at Antigua in the Caribbean.
Many of the criminals, who by today’s standards would be considered petty thieves, had been sentenced to death.
Naval historian Professor Eric Grove said: “They would be people who picked pockets and would include ten-year-olds sentenced to 15 years transportation.
“A lot of crimes carried the death penalty, but as a way of being humane and also to populate the colonies, it was decided it would be good to transport convicts.
“But you tended to find that if people were not considered healthy enough to take the voyage to Australia, they would be left in the hulks.”
As well as a graveyard of bones, the protected wetland also serves as an important breeding and nesting site for birds.
Meta’s open-source ImageBind AI aims to mimic human perception
Meta is open-sourcing an AI tool called ImageBind that predicts connections between data similar to how humans perceive or imagine an environment. While image generators like Midjourney, Stable Diffusion and DALL-E 2 pair words with images, allowing you to generate visual scenes based only on a text description, ImageBind casts a broader net. It can link text, images / videos, audio, 3D measurements (depth), temperature data (thermal), and motion data (from inertial measurement units) — and it does this without having to first train on every possibility. It’s an early stage of a framework that could eventually generate complex environments from an input as simple as a text prompt, image or audio recording (or some combination of the three).
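For developers who want to poke at this directly, the open-source release (github.com/facebookresearch/ImageBind) exposes a fairly small Python surface. Below is a minimal sketch of embedding text, an image and an audio clip into the shared space, adapted from the repo’s README at launch; module paths may differ by version, and the media file paths are placeholders.

```python
# Minimal sketch: embed three modalities with the open-source ImageBind
# release. Module paths follow the repo's README and may vary by version;
# "dog.jpg" and "bark.wav" are placeholder files, not shipped assets.
import torch
from imagebind import data
from imagebind.models import imagebind_model
from imagebind.models.imagebind_model import ModalityType

device = "cuda:0" if torch.cuda.is_available() else "cpu"

# Load the pretrained "huge" checkpoint (downloaded on first use)
model = imagebind_model.imagebind_huge(pretrained=True)
model.eval()
model.to(device)

# Preprocess one sample per modality
inputs = {
    ModalityType.TEXT: data.load_and_transform_text(["a dog barking"], device),
    ModalityType.VISION: data.load_and_transform_vision_data(["dog.jpg"], device),
    ModalityType.AUDIO: data.load_and_transform_audio_data(["bark.wav"], device),
}

with torch.no_grad():
    embeddings = model(inputs)  # dict: modality -> (batch, dim) tensor
```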
You could view ImageBind as moving machine learning closer to human learning. For example, if you’re standing in a stimulating environment like a busy city street, your brain (largely unconsciously) absorbs the sights, sounds and other sensory experiences to infer information about passing cars and pedestrians, tall buildings, weather and much more. Humans and other animals evolved to process this data for our genetic advantage: survival and passing on our DNA. (The more aware you are of your surroundings, the more you can avoid danger and adapt to your environment for better survival and prosperity.) As computers get closer to mimicking animals’ multi-sensory connections, they can use those links to generate fully realized scenes based only on limited chunks of data.
So, while you can use Midjourney to prompt “a basset hound wearing a Gandalf outfit while balancing on a beach ball” and get a relatively realistic photo of this bizarre scene, a multimodal AI tool like ImageBind may eventually create a video of the dog with corresponding sounds, including a detailed suburban living room, the room’s temperature and the precise locations of the dog and anyone else in the scene. “This creates distinctive opportunities to create animations out of static images by combining them with audio prompts,” Meta researchers said today in a developer-focused blog post. “For example, a creator could couple an image with an alarm clock and a rooster crowing, and use a crowing audio prompt to segment the rooster or the sound of an alarm to segment the clock and animate both into a video sequence.”
As for what else one could do with this new toy, the possibilities point clearly to one of Meta’s core ambitions: VR, mixed reality and the metaverse. For example, imagine a future headset that can construct fully realized 3D scenes (with sound, movement, etc.) on the fly. Or, virtual game developers could perhaps eventually use it to take much of the legwork out of their design process. Similarly, content creators could make immersive videos with realistic soundscapes and movement based on only text, image or audio input. It’s also easy to imagine a tool like ImageBind opening new doors in the accessibility space, generating real-time multimedia descriptions to help people with vision or hearing disabilities better perceive their immediate environments.
“In typical AI systems, there is a specific embedding (that is, vectors of numbers that can represent data and their relationships in machine learning) for each respective modality,” said Meta. “ImageBind shows that it’s possible to create a joint embedding space across multiple modalities without needing to train on data with every different combination of modalities. This is important because it’s not feasible for researchers to create datasets with samples that contain, for example, audio data and thermal data from a busy city street, or depth data and a text description of a seaside cliff.”
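That joint embedding space is easy to see in code: because every modality lands in the same vector space, cross-modal comparison reduces to a dot product, with no paired training data required. Continuing the hypothetical sketch above:

```python
# All modalities share one embedding space, so similarity across
# modalities is just a dot product between their vectors.
vision = embeddings[ModalityType.VISION]  # (n_images, dim)
text = embeddings[ModalityType.TEXT]      # (n_texts, dim)
audio = embeddings[ModalityType.AUDIO]    # (n_clips, dim)

# Softmax over dot products ranks, e.g., which text best matches each image
print(torch.softmax(vision @ text.T, dim=-1))
# ...or which audio clip best matches each image, even though the model
# never trained on paired image-audio examples for this comparison
print(torch.softmax(vision @ audio.T, dim=-1))
```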
Meta views the tech as eventually expanding beyond its current six “senses,” so to speak. “While we explored six modalities in our current research, we believe that introducing new modalities that link as many senses as possible — like touch, speech, smell, and brain fMRI signals — will enable richer human-centric AI models.” Developers interested in exploring this new sandbox can start by diving into Meta’s open-source code.
Aliens could be ‘listening in on Earth’ with predicted first human contact made by 2029
AI can’t replace human writers
In the must-watch final season of “Succession,” Kendall Roy enters a conference room with his siblings. As the scene opens, he takes a seat and declares: “Who will be the successor? Me.” Of course, that scene didn’t appear on HBO’s hit show, but it’s a good illustration of generative AI’s level of sophistication compared to […]