OpenAI looks beyond diffusion with ‘consistency’-based image generator
The field of image generation moves quickly. Though the diffusion models used by popular tools like Midjourney and Stable Diffusion may seem like the best we’ve got, the next thing is always coming — and OpenAI might have hit on it with “consistency models,” which can already do simple tasks an order of magnitude faster […]
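The speed gap described above comes down to how many network evaluations each approach needs per image: a diffusion sampler denoises iteratively over many steps, while a consistency model maps noise to an output in a single pass. A toy sketch of that difference (the function names, placeholder math, and step count are illustrative assumptions, not OpenAI's implementation):

```python
# Toy illustration (not OpenAI's actual models): why consistency-model
# sampling is cheaper than a diffusion sampling loop.
# `denoise_step` and `consistency_fn` stand in for learned networks.

def diffusion_sample(denoise_step, x_noisy, n_steps=50):
    """Diffusion-style sampling: many sequential denoising passes."""
    x = x_noisy
    for t in reversed(range(n_steps)):
        x = denoise_step(x, t)
    return x

def consistency_sample(consistency_fn, x_noisy):
    """Consistency-style sampling: one pass from noise to output."""
    return consistency_fn(x_noisy)

# Count network evaluations to compare cost.
calls = {"diffusion": 0, "consistency": 0}

def fake_denoiser(x, t):
    calls["diffusion"] += 1
    return x * 0.9  # placeholder for a real denoising network

def fake_consistency(x):
    calls["consistency"] += 1
    return x * 0.1  # placeholder for a real consistency network

diffusion_sample(fake_denoiser, 1.0)
consistency_sample(fake_consistency, 1.0)
print(calls)  # the diffusion loop makes 50x more network calls
```

With 50 steps versus one, the per-image cost differs by well over an order of magnitude, which is the speedup the article points to.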
See Uranus’ Rings in Stunning New Image from the Webb Telescope
The image is representative of the telescope’s significant sensitivity, NASA said, as the fainter rings have only been captured previously by the Voyager 2 spacecraft and the W.M. Keck Observatory on Maunakea in Hawaii. Uranus has 13 known rings, with 11 of them visible in the new Webb image. Nine rings are classified as the main rings, while the other two are harder to capture due to their dusty makeup and were not discovered until the Voyager 2 mission’s flyby in 1986.
Two other, faint outer rings not shown in this latest image were discovered in 2007 from images taken by NASA’s Hubble Space Telescope, and scientists hope Webb will capture them in the future…. “The JWST gives us the ability to look at both Uranus and Neptune in a completely new way because we have never had a telescope of this size that looks in the infrared,” said Dr. Naomi Rowe-Gurney, a postdoctoral research scientist and solar system ambassador for the Webb space telescope at NASA Goddard Space Flight Center in Greenbelt, Maryland. “The infrared can show us new depths and features that are difficult to see from the ground with the atmosphere in the way and invisible to telescopes that look in visible light like Hubble.”
“When Voyager 2 looked at Uranus, its camera showed an almost featureless blue-green ball in visible wavelengths,” NASA explains. “With the infrared wavelengths and extra sensitivity of Webb we see more detail, showing how dynamic the atmosphere of Uranus really is.”
On the right side of the planet there’s an area of brightening at the pole facing the Sun, known as a polar cap. This polar cap is unique to Uranus — it seems to appear when the pole enters direct sunlight in the summer and vanish in the fall; these Webb data will help scientists understand the currently mysterious mechanism. Webb revealed a surprising aspect of the polar cap: a subtle enhanced brightening at the center of the cap. The sensitivity and longer wavelengths of Webb’s NIRCam may be why we can see this enhanced Uranus polar feature when it has not been seen as clearly with other powerful telescopes like the Hubble Space Telescope and Keck Observatory….
This was only a short, 12-minute exposure image of Uranus with just two filters. It is just the tip of the iceberg of what Webb can do when observing this mysterious planet.
Mars scientists spent 6 years making the most detailed image of the planet
There’s no Google Earth for Mars — no way to zoom in for a closer look at your Martian neighbors’ new deck or pickup truck — but Caltech scientists have spent six years composing a 3D image of the Red Planet with the feel of the popular computer app.
The new tool, called the Global CTX Mosaic of Mars, contains 5.7 trillion pixels of data, enough that mapmakers would need the Rose Bowl Stadium in Pasadena, California, to lay out a complete printed version, according to NASA. Each pixel covers a roughly parking-space-sized patch of Martian terrain, providing unprecedented image resolution. The highest resolution previously available at a global scale was 100 meters per pixel, making the new mosaic about 20 times sharper.
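Those resolution figures hold up to quick arithmetic. A back-of-the-envelope check, assuming a Mars surface area of about 144.8 million square kilometers (the pixel count is from the article):

```python
# Sanity-check NASA's numbers: assumed Mars surface area vs. the
# article's 5.7 trillion pixels.
MARS_SURFACE_M2 = 144.8e6 * 1e6  # km^2 -> m^2
PIXELS = 5.7e12

area_per_pixel = MARS_SURFACE_M2 / PIXELS  # ~25 m^2 per pixel
side_m = area_per_pixel ** 0.5             # ~5 m on a side
improvement = 100 / side_m                 # vs. the old 100 m/pixel maps

print(f"{area_per_pixel:.1f} m^2/pixel, ~{side_m:.1f} m per side, "
      f"~{improvement:.0f}x sharper than 100 m/pixel")
```

That works out to about 25 square meters per pixel, roughly 5 meters on a side (parking-space scale), and close to a 20-fold improvement over the previous 100-meter global maps, consistent with NASA's figures.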
Anyone can now zoom in on the planet and get a close-up of meteorite craters, dust devil tracks, extinct volcanoes, former riverbeds, and seemingly bottomless caves. The creators sought to make Earth’s neighbor, on average 140 million miles away, more accessible to researchers and the public, said Jay Dickson, the scientist who led the project.
“Schoolchildren can use this now. My mother, who just turned 78, can use this now,” he said in a statement. “The goal is to lower the barriers for people who are interested in exploring Mars.”
Buttons on the tool let users jump to popular landmarks, like the Gale and Jezero craters, where NASA’s Curiosity and Perseverance rovers are exploring.
The mosaic covers 99.5 percent of the planet using nearly 87,000 separate images taken between 2006 and 2020 by a camera on the Mars Reconnaissance Orbiter. The robotic spacecraft flies up to 250 miles above the Red Planet, while its black-and-white Context Camera captures expansive views.
Credit: NASA / JPL-Caltech / MSSS
The team designed the tool so that each image in the mosaic connects directly to its original data. The scientists presented a paper on the tool at the 2023 Lunar and Planetary Science Conference.
To create the new mosaic, Dickson developed an algorithm to match overlapping images; the photos also needed to have similar lighting conditions and clear skies. The roughly 13,000 remaining pictures the program couldn’t match, he stitched together manually, a time-consuming three-year undertaking. Any leftover gaps in the mosaic represent areas blocked by clouds, or areas that hadn’t been photographed before he started working on the project.
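The article doesn't spell out how the matching algorithm works, but the core idea of automatically aligning overlapping image strips can be sketched with normalized correlation: slide one strip over the other and keep the offset where they agree best. Everything below (the `best_offset` helper, the one-dimensional synthetic data) is an illustrative assumption, not the Caltech pipeline:

```python
# Minimal sketch of automatic image matching: find the pixel offset
# that maximizes normalized correlation between two overlapping strips.
import numpy as np

def best_offset(a, b, max_shift):
    """Return the shift of `b` (in pixels) that best aligns it with `a`."""
    best, best_score = 0, -np.inf
    for shift in range(-max_shift, max_shift + 1):
        # Take the region where the two strips overlap at this shift.
        if shift >= 0:
            ov_a, ov_b = a[shift:], b[:len(b) - shift]
        else:
            ov_a, ov_b = a[:len(a) + shift], b[-shift:]
        score = np.corrcoef(ov_a, ov_b)[0, 1]  # normalized correlation
        if score > best_score:
            best, best_score = shift, score
    return best

# Synthetic check: `b` is `a` shifted by 7 pixels, with a little noise.
rng = np.random.default_rng(0)
a = rng.normal(size=200)
b = np.roll(a, -7) + rng.normal(scale=0.1, size=200)
print(best_offset(a, b, max_shift=20))  # recovers the 7-pixel shift
```

A real planetary mosaic works in two dimensions with varying illumination and terrain distortion, which is why roughly 13,000 images still had to be stitched by hand.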
Credit: NASA / JPL-Caltech / MSSS
So far, over 120 peer-reviewed science papers have used a test version of the map, released in 2018, for research purposes.
“Ideally, image mosaics should be held to the same scientific standards of traceability as the science that they facilitate,” the authors said in the paper. “All derived data should be traceable back to their source, all methods for the construction of the mosaic should be reported and known artifacts and other limitations of the product should be communicated. These standards have long been applied to the instruments that collect the data, and the science derived from image mosaics, but not to mosaic products themselves.”
Microsoft’s rolling out Edge’s AI image generator to everyone
Microsoft is making its DALL-E-powered AI image generator “available on desktop for Edge users around the world.” The company announced the feature last month, when it integrated the image-generation tech into its Bing chatbot, but this move could put it in front of a much wider audience.
When it rolls out — I and two other Verge staffers using Edge don’t appear to have access to it yet — the “Image Creator” will live in Edge’s sidebar. Using it should be pretty simple; you type in what you want to see, and Bing will generate several images that match the prompt. Then, you can download the ones you like and use them however you need.
In a Thursday blog post, Microsoft pitches the feature as a way to create “very specific” visuals…
Microsoft Edge Now Has Bing’s Dall-E Image Creator
Microsoft is cramming AI features into every app and service it can, from Office apps to its Bing search engine. The latest addition? A panel for the Bing Image Creator in Microsoft Edge.