High Court Bans Singer From Hitting YouTube Rival With DMCA Notices
Read more of this story at Slashdot.
The Falls Church News-Press notes that Amazon’s pause announcement came just days after a 12-page glossy mass mailing entitled Capital Region Community Impact Report went out to thousands in the region. Beginning with a statement from Amazon CEO Andy Jassy, the report spelled out “Amazon’s philanthropic commitments in the Capital Region,” including $32M donated to 150+ local organizations in 2021; $990M+ committed to create and preserve 6,245 affordable housing units; 13,700 people supported by Amazon-funded affordable housing investments; and 23,000 students who received food, clothing, school supplies, hygiene items, and other urgent support through Amazon’s Right Now Needs Fund. According to the report, the commitments also included benefits to 75,000+ students across 343 schools who received computer science education through the Amazon Future Engineer program; 166,000+ students who participated in the CodeVA K-12 CS education program during the 2021-22 academic year; 5.3 million free meals delivered to underserved families in partnership with Northern Virginia food banks; 10,000 meals purchased from local restaurants and donated to support Covid-19 first responders; $350,000 contributed to local community theaters and arts-focused non-profits; 6,000 students who explored cloud computing solutions at the Wakefield H.S. Think Big Space in the 2021-22 academic year; 200,000 children and families from underserved communities who received free access to the National Children’s Museum through a $250,000 gift from Amazon; and 16,700+ students served by Amazon’s support for local youth sports leagues. Not to look an Amazon philanthropy gift horse in the mouth, but should politicians be reliant on Amazon philanthropy to meet their communities’ basic needs? Amazon’s 2022 income taxes, by the way, were -$3.217B.
Science is the reason you aren’t reading this by firelight, nestled cozily under a rock somewhere. However, its practice significantly predates its formalization by Galileo in the 16th century. Among its earliest adherents — even before the pioneering efforts of Aristotle — was Anaximander, the Greek philosopher credited with first arguing that the Earth exists within a void, not atop a giant turtle shell. His other revolutionary notions include, “hey, maybe animals evolved from other, earlier animals?” and “the gods aren’t angry, that’s just thunder.”
While Anaximander isn’t often mentioned alongside the later greats of Greek philosophy, his influence on the scientific method cannot be denied, argues NYT bestselling author Carlo Rovelli in his latest book, Anaximander and the Birth of Science, out now from Riverhead Books. In it, Rovelli celebrates Anaximander not necessarily for his scientific acumen but for his radical scientific thinking — specifically his talent for shrugging off conventional notions to glimpse the physical underpinnings of the natural world. In the excerpt below, Rovelli, whom astute readers will remember from last year’s There Are Places in the World Where Rules Are Less Important than Kindness, illustrates how even the works of intellectual titans like Einstein and Heisenberg can be, and inevitably are, found lacking in their explanation of natural phenomena — in just the same way that those works themselves upended the nineteenth century’s Newtonian understanding of cosmological law.
Excerpted from Anaximander and the Birth of Science. Copyright © 2023 by Carlo Rovelli. Excerpted by permission of Riverhead, an imprint and division of Penguin Random House LLC, New York. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Did science begin with Anaximander? The question is poorly put. It depends on what we mean by “science,” a generic term. Depending on whether we give it a broad or a narrow meaning, we can say that science began with Newton, Galileo, Archimedes, Hipparchus, Hippocrates, Pythagoras, or Anaximander — or with an astronomer in Babylonia whose name we don’t know, or with the first primate who managed to teach her offspring what she herself had learned, or with Eve, as in the quotation that opens this chapter. Historically or symbolically, each of these moments marks humanity’s acquisition of a new, crucial tool for the growth of knowledge.
If by “science” we mean research based on systematic experimental activities, then it began more or less with Galileo. If we mean a collection of quantitative observations and theoretical/mathematical models that can order these observations and give accurate predictions, then the astronomy of Hipparchus and Ptolemy is science. Emphasizing one particular starting point, as I have done with Anaximander, means focusing on a specific aspect of the way we acquire knowledge. It means highlighting specific characteristics of science and thus, implicitly, reflecting on what science is, what the search for knowledge is, and how it works.
What is scientific thinking? What are its limits? What is the reason for its strength? What does it really teach us? What are its characteristics, and how does it compare with other forms of knowledge?
These questions shaped my reflections on Anaximander in preceding chapters. In discussing how Anaximander paved the way for scientific knowledge, I highlighted a certain number of aspects of science itself. Now I shall make these observations more explicit.
A lively debate on the nature of scientific knowledge has taken place during the last century. The work of philosophers of science such as Carnap and Bachelard, Popper and Kuhn, Feyerabend, Lakatos, Quine, van Fraassen, and many others has transformed our understanding of what constitutes scientific activity. To some extent, this reflection was a reaction to a shock: the unexpected collapse of Newtonian physics at the beginning of the twentieth century.
In the nineteenth century, a common joke was that Isaac Newton had been not only one of the most intelligent men in human history, but also the luckiest, because there is only one collection of fundamental natural laws, and Newton had had the good fortune to be the one to discover them. Today we can’t help but smile at this notion, because it reveals a serious epistemological error on the part of nineteenth-century thinkers: the idea that good scientific theories are definitive and remain valid until the end of time.
The twentieth century swept away this facile illusion. Highly accurate experiments showed that Newton’s theory is mistaken in a very precise sense. The planet Mercury, for example, does not move following Newtonian laws. Albert Einstein, Werner Heisenberg, and their colleagues discovered a new collection of fundamental laws — general relativity and quantum mechanics — that replace Newton’s laws and work well in the domains where Newton’s theory breaks down, such as accounting for Mercury’s orbit, or the behavior of electrons in atoms.
Once burned, twice shy: few people today believe that we now possess definitive scientific laws. It is generally expected that one day Einstein’s and Heisenberg’s laws will show their limits as well, and will be replaced by better ones. In fact, the limits of Einstein’s and Heisenberg’s theories are already emerging. There are subtle incompatibilities between Einstein’s theory and Heisenberg’s, which make it unreasonable to suppose that we have identified the final, definitive laws of the universe. As a result, research goes on. My own work in theoretical physics is precisely the search for laws that might combine these two theories.
Now, the essential point here is that Einstein’s and Heisenberg’s theories are not minor corrections to Newton’s. The differences go far beyond an adjusted equation, a tidying up, the addition or replacement of a formula. Rather, these new theories constitute a radical rethinking of the world. Newton saw the world as a vast empty space where “particles” move about like pebbles. Einstein understands that such supposedly empty space is in fact a kind of storm-tossed sea. It can fold in on itself, curve, and even (in the case of black holes) shatter. No one had seriously contemplated this possibility before. For his part, Heisenberg understands that Newton’s “particles” are not particles at all but bizarre hybrids of particles and waves that run over webs of Faraday lines. In short, over the course of the twentieth century, the world was found to be profoundly different from the way Newton imagined it.
On the one hand, these discoveries confirmed the cognitive strength of science. Like Newton’s and Maxwell’s theories in their day, these discoveries led quickly to an astonishing development of new technologies that once again radically changed human society. The insights of Faraday and Maxwell brought about radio and communications technology. Einstein’s and Heisenberg’s led to computers, information technology, atomic energy, and countless other technological advances that have changed our lives.
But on the other hand, the realization that Newton’s picture of the world was false is disconcerting. After Newton, we thought we had understood once and for all the basic structure and functioning of the physical world. We were wrong. The theories of Einstein and Heisenberg themselves will one day likely be proved false. Does this mean that the understanding of the world offered by science cannot be trusted, not even for our best science? What, then, do we really know about the world? What does science teach us about the world?
This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-anaximander-carlo-rovelli-riverhead-books-143052774.html?src=rss
There is too much internet, and our attempts to keep up with the breakneck pace of, well, everything these days are breaking our brains. Parsing the deluge of information hoisted up by algorithmic systems built to maximize engagement has trained us, like slavering Pavlovian dogs, to rely on snap judgments and gut feelings in our decision making and opinion formation rather than on deliberation and introspection. Which is fine when you’re deciding between Italian and Indian for dinner or waffling on a new paint color for the hallway, but not when we’re out here basing existential life choices on friggin’ vibes.
In his latest book, I, HUMAN: AI, Automation, and the Quest to Reclaim What Makes Us Unique, professor of business psychology and Chief Innovation Officer at ManpowerGroup, Tomas Chamorro-Premuzic explores the myriad ways that AI systems now govern our daily lives and interactions. From finding love to finding gainful employment to finding out the score of yesterday’s game, AI has streamlined the information gathering process. But, as Chamorro-Premuzic argues in the excerpt below, that information revolution is actively changing our behavior, and not always for the better.
Reprinted by permission of Harvard Business Review Press. Excerpted from I, HUMAN: AI, Automation, and the Quest to Reclaim What Makes Us Unique by Tomas Chamorro-Premuzic. Copyright 2023 Tomas Chamorro-Premuzic. All rights reserved.
If the AI age requires our brains to be always alert to minor changes and react quickly, optimizing for speed rather than accuracy and functioning on what behavioral economists have labeled System 1 mode (impulsive, intuitive, automatic, and unconscious decision-making), then it shouldn’t surprise us that we are turning into a less patient version of ourselves.
Of course, sometimes it’s optimal to react quickly or trust our guts. The real problem comes when fast mindlessness is our primary mode of decision-making. It causes us to make mistakes and impairs our ability to detect mistakes. More often than not, speedy decisions are born of ignorance.
Intuition can be great, but it ought to be hard-earned. Experts, for example, are able to think on their feet because they’ve invested thousands of hours in learning and practice: their intuition has become data-driven. Only then are they able to act quickly in accordance with their internalized expertise and evidence-based experience. Alas, most people are not experts, though they often think they are. Most of us, especially when we interact with others on Twitter, act with expert-like speed, assertiveness, and conviction, offering a wide range of opinions on epidemiology and global crises without the substance of knowledge that underpins them. And thanks to AI, which ensures that our messages are delivered to an audience more prone to believing them, our delusions of expertise can be reinforced by our personal filter bubble. We have an interesting tendency to find people more open-minded, rational, and sensible when they think just like us. Our digital impulsivity and general impatience impair our ability to grow intellectually, develop expertise, and acquire knowledge.
Consider how little perseverance and meticulousness we bring to consuming actual information. And I say consume rather than inspect, analyze, or vet. One academic study estimated that the top 10 percent of digital rumors (many of them fake news) account for up to 36 percent of retweets, and that this effect is best explained in terms of the so-called echo chamber, whereby retweets are based on clickbait that matches the retweeter’s views, beliefs, and ideology, to the point that any discrepancy between those beliefs and the actual content of the underlying article may go unnoticed. Patience would mean spending time determining whether something is real or fake news, or whether there are any serious reasons to believe in someone’s point of view, especially when we agree with it. It’s not the absence of fact-checking algorithms during presidential debates that leads us to vote for incompetent or dishonest politicians, but rather our reliance on intuition. Two factors mainly predict whether someone will win a presidential candidacy in the United States—the candidate’s height and whether we would want to have a beer with them.
While AI-based internet platforms are a relatively recent type of technology, their impact on human behavior is consistent with previous evidence about the impact of other forms of mass media, such as TV or video games, which show a tendency to fuel ADHD-like symptoms, like impulsivity, attention deficits, and restless hyperactivity. As the world increases in complexity and access to knowledge widens, we avoid slowing down to pause, think, and reflect, behaving like mindless automatons instead. Research indicates that faster information gathering online, for example, through instant Googling of pressing questions, impairs long-term knowledge acquisition as well as the ability to recall where our facts and information came from.
Unfortunately, it’s not so easy to fight against our impulsive behavior or keep our impatience in check. The brain is a highly malleable organ, with an ability to become intertwined with the objects and tools it utilizes. Some of these adaptations may seem pathological in certain contexts or cultures, but they are essential survival tools in others: restless impatience and fast-paced impulsivity are no exception.
Although we have the power to shape our habits and default patterns of behavior to adjust to our habitat, if pace rather than patience is rewarded, then our impulsivity will be rewarded more than our patience. And if any adaptation is overly rewarded, it becomes a commoditized and overused strength, making us more rigid, less flexible, and a slave to our own habits, as well as less capable of displaying the reverse type of behavior. The downside of our adaptive nature is that we quickly become an exaggerated version of ourselves: we mold ourselves into the very objects of our experience, amplifying the patterns that ensure fit. When that’s the case, our behaviors become harder to move or change.
When I first returned to my hometown in Argentina after having spent a full year in London, my childhood friends wondered why my pace was so unnecessarily accelerated—“Why are you in such a hurry?” Fifteen years later, I experienced the same disconnect in speed when returning to London from New York City, where the pace is significantly faster. Yet most New Yorkers seem slow by the relative standards of Hong Kong, a place where the button to close the elevator doors (two inward-looking arrows facing each other) is usually worn out, and the automatic doors of the taxis open and close while the taxis are still moving. Snooze, and you truly lose.
There may be limited advantages to boosting our patience when the world moves faster and faster. The right level of patience is always that which aligns with environmental demands and best suits the problems you need to solve. Patience is not always a virtue. If you are waiting longer than you should, then you are wasting your time. When patience breeds complacency or a false sense of optimism, or when it nurtures inaction and passivity, then it may not be the most desirable state of mind and more of a character liability than a mental muscle. In a similar vein, it is easy to think of real-life problems that arise from having too much patience or, if you prefer, would benefit from a bit of impatience: for example, asking for a promotion is usually a quicker way of getting it than patiently waiting for one; refraining from giving someone (e.g., a date, colleague, client, or past employer) a second chance can help you avoid predictable disappointments; and waiting patiently for an important email that never arrives can harm your ability to make better, alternative choices. In short, a strategic sense of urgency—which is the reverse of patience—can be rather advantageous.
There are also many moments when patience, and its deeper psychological enabler, self-control, may be an indispensable adaptation. If the AI age seems uninterested in our capacity to wait and delay gratification, and patience becomes somewhat of a lost virtue, we risk becoming a narrower and shallower version of ourselves.
This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-i-human-tomas-chamorro-premuzic-harvard-business-review-press-153003112.html?src=rss
Deep Brain Stimulation therapies have proven an invaluable treatment option for patients suffering from otherwise debilitating diseases like Parkinson’s. However, these therapies — and their sibling tech, brain-computer interfaces — currently suffer a critical shortcoming: the electrodes that convert electron pulses into bioelectric signals don’t sit well with the surrounding brain tissue. And that’s where the folks with the lab coats and squids come in! In We Are Electric: Inside the 200-Year Hunt for Our Body’s Bioelectric Code, and What the Future Holds, author Sally Adee delves into two centuries of research into an often misunderstood and maligned branch of scientific discovery, guiding readers from the pioneering works of Alessandro Volta to the life-saving applications that might become possible once doctors learn to communicate directly with our body’s cells.
Excerpted from We Are Electric: Inside the 200-Year Hunt for Our Body’s Bioelectric Code, and What the Future Holds by Sally Adee. Copyright © 2023. Available from Hachette Books, an imprint of Hachette Book Group, Inc.
“There’s a fundamental asymmetry between the devices that drive our information economy and the tissues in the nervous system,” Bettinger told The Verge in 2018. “Your cell phone and your computer use electrons and pass them back and forth as the fundamental unit of information. Neurons, though, use ions like sodium and potassium. This matters because, to make a simple analogy, that means you need to translate the language.”
“One of the misnomers within the field actually is that I’m injecting current through these electrodes,” explains Kip Ludwig. “Not if I’m doing it right, I don’t.” The electrons that travel down a platinum or titanium wire to the implant never make it into your brain tissue. Instead, they line up on the electrode. This produces a negative charge, which pulls ions from the neurons around it. “If I pull enough ions away from the tissue, I cause voltage-gated ion channels to open,” says Ludwig. That can — but doesn’t always — make a nerve fire an action potential. Get nerves to fire. That’s it — that’s your only move.
It may seem counterintuitive: the nervous system runs on action potentials, so why wouldn’t it work to just write our own action potentials on top of the brain’s own? The problem is that our attempts to write action potentials can be incredibly ham-fisted, says Ludwig. They don’t always do what we think they do. For one thing, our tools are nowhere near precise enough to hit only the exact neurons we are trying to stimulate. So the implant sits in the middle of a bunch of different cells, sweeping up and activating unrelated neurons with its electric field. Remember how I said glia were traditionally considered the brain’s janitorial staff? Well, more recently it emerged that they also do some information processing—and our clumsy electrodes will fire them too, to unknown effects. “It’s like pulling the stopper on your bathtub and only trying to move one of three toy boats in the bathwater,” says Ludwig. And even if we do manage to hit the neurons we’re trying to, there’s no guarantee that the stimulation is hitting them in the correct location.
To bring electroceuticals into medicine, we really need better techniques to talk to cells. If the electron-to-ion language barrier is an obstacle to talking to neurons, it’s an absolute non-starter for cells that don’t use action potentials, like the ones that we are trying to target with next-generation electrical interventions, including skin cells, bone cells, and the rest. If we want to control the membrane voltage of cancer cells to coax them back to normal behavior; if we want to nudge the wound current in skin or bone cells; if we want to control the fate of a stem cell—none of that is achievable with our one and only tool of making a nerve fire an action potential. We need a bigger toolkit. Luckily, this is the objective for a fast-growing area of research looking to make devices, computing elements, and wiring that can talk to ions in their native tongue.
Several research groups are working on “mixed conduction,” a project whose goal is devices that can speak bioelectricity. It relies heavily on plastics and advanced polymers with long names that often include punctuation and numbers. If the goal is a DBS electrode you can keep in the brain for more than ten years, these materials will need to safely interact with the body’s native tissues for much longer than they do now. And that search is far from over. People are understandably beginning to wonder: why not just skip the middle man and actually make this stuff out of biological materials instead of manufacturing polymers? Why not learn how nature does it?
It’s been tried before. In the 1970s, there was a flurry of interest in using coral for bone grafts instead of autografts. Instead of a traumatic double-surgery to harvest the necessary bone tissue from a different part of the body, coral implants acted as a scaffold to let the body’s new bone cells grow into and form the new bone. Coral is naturally osteoconductive, which means new bone cells happily slide onto it and find it an agreeable place to proliferate. It’s also biodegradable: after the bone grew onto it, the coral was gradually absorbed, metabolized, and then excreted by the body. Steady improvements have produced few inflammatory responses or complications. Now there are several companies growing specialized coral for bone grafts and implants.
After the success of coral, people began to take a closer look at marine sources for biomaterials. This field is now rapidly evolving — thanks to new processing methods which have made it possible to harvest a lot of useful materials from what used to be just marine waste, the last decade has seen an increasing number of biomaterials that originate from marine organisms. These include replacement sources for gelatin (snails), collagen (jellyfish), and keratin (sponges), marine sources of which are plentiful, biocompatible, and biodegradable. And not just inside the body — one reason interest in these has spiked is the effort to move away from polluting synthetic plastic materials.
Apart from all the other benefits of marine-derived dupes, they’re also able to conduct an ion current. That was what Marco Rolandi was thinking about in 2010 when he and his colleagues at the University of Washington built a transistor out of a piece of squid.
This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-we-are-electric-sally-adee-hachette-books-153003295.html?src=rss
Engine knock, wherein fuel ignites unevenly along the cylinder wall resulting in damaging percussive shockwaves, is an issue that automakers have struggled to mitigate since the days of the Model T. The industry’s initial attempts to solve the problem — namely tetraethyl lead — were, in hindsight, a huge mistake, having endumbened and stupefied an entire generation of Americans with their neurotoxic byproducts.
Dr. Vaclav Smil, Professor Emeritus at the University of Manitoba in Winnipeg, examines the short-sighted economic reasoning that led to leaded gas rather than a nationwide network of ethanol stations in his new book Invention and Innovation: A Brief History of Hype and Failure. Leaded gas is far from the only presumed advance to go over like a lead balloon. Invention and Innovation is packed with tales of humanity’s best-intentioned, most ill-conceived and generally half-cocked ideas — from airships and hyperloops to DDT and CFCs.
Excerpted from Invention and Innovation: A Brief History of Hype and Failure by Professor Vaclav Smil. Reprinted with permission from The MIT Press. Copyright 2023.
Just seven years later Henry Ford began to sell his Model T, the first mass-produced affordable and durable passenger car, and in 1911 Charles Kettering, who later played a key role in developing leaded gasoline, designed the first practical electric starter, which obviated dangerous hand cranking. And although hard-topped roads were still in short supply even in the eastern part of the US, their construction began to accelerate, with the country’s paved highway length more than doubling between 1905 and 1920. No less important, decades of crude oil discoveries accompanied by advances in refining provided the liquid fuels needed for the expansion of the new transportation, and in 1913 Standard Oil of Indiana introduced William Burton’s thermal cracking of crude oil, the process that increased gasoline yield while reducing the share of volatile compounds that make up the bulk of natural gasolines.
But having more affordable and more reliable cars, more paved roads, and a dependable supply of appropriate fuel still left a problem inherent in the combustion cycle used by car engines: the propensity to violent knocking (pinging). In a perfectly operating gasoline engine, gas combustion is initiated solely by a timed spark at the top of the combustion chamber and the resulting flame front moves uniformly across the cylinder volume. Knocking is caused by spontaneous ignitions (small explosions, mini-detonations) taking place in the remaining gases before they are reached by the flame front initiated by sparking. Knocking creates high pressures (up to 18 MPa, nearly 180 times normal atmospheric pressure), and the resulting shock waves, traveling at speeds greater than sound, vibrate the combustion chamber walls and produce the telling sounds of a knocking, malfunctioning engine.
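As a quick arithmetic check (not part of Smil’s text), the 18 MPa peak pressure quoted above can be converted into multiples of standard atmospheric pressure:

```python
# Peak knock pressure quoted in the text, expressed in pascals.
knock_pressure_pa = 18e6        # 18 MPa
# Standard atmospheric pressure in pascals.
atmosphere_pa = 101_325

ratio = knock_pressure_pa / atmosphere_pa
print(f"knock pressure is about {ratio:.0f}x atmospheric")  # ~178x
```

which bears out the “nearly 180 times” figure in the text.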
Knocking sounds alarming at any speed, but when an engine operates at a high load it can be very destructive. Severe knocking can cause brutal, irreparable engine damage, including cylinder head erosion, broken piston rings, and melted pistons; and any knocking reduces an engine’s efficiency and releases more pollutants; in particular, it results in higher nitrogen oxide emissions. The capacity to resist knocking — that is, a fuel’s stability — is based on the pressure at which fuel will spontaneously ignite and has been universally measured in octane numbers, which are usually displayed by filling stations in bold black numbers on a yellow background.
Octane (C8H18) is one of the alkanes (hydrocarbons with the general formula CnH2n+2) that form anywhere from 10 to 40 percent of light crude oils, and one of its isomers (compounds with the same number of carbon and hydrogen atoms but with a different molecular structure), 2,2,4-trimethylpentane (iso-octane), was taken as the maximum (100 percent) on the octane rating scale because the compound completely prevents any knocking. The higher the octane rating of gasoline, the more resistant the fuel is to knocking, and engines can operate more efficiently with higher compression ratios. North American refiners now offer three octane grades: regular gasoline (87), midgrade fuel (89), and premium fuel mixes (91–93).
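The three pump grades above can be related by simple arithmetic. As a toy sketch (not from the book, and assuming the simplification that octane ratings blend linearly by volume, which real fuels only approximate), one can estimate the fraction of a higher-octane component needed to hit a target rating:

```python
def blend_fraction(target: float, low: float, high: float) -> float:
    """Volume fraction of the higher-octane component needed to reach
    `target`, assuming octane ratings blend linearly by volume
    (a simplification: real blending behavior is nonlinear)."""
    if not low <= target <= high:
        raise ValueError("target must lie between the component ratings")
    return (target - low) / (high - low)

# Hypothetical example: approximating 89 midgrade by mixing
# 87 regular with 93 premium.
premium_share = blend_fraction(89, 87, 93)
print(f"{premium_share:.3f}")  # 0.333, i.e. about one-third premium
```

This linear-interpolation picture is only illustrative; actual pump midgrade is often produced exactly this way, by mixing the two base grades at the station.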
During the first two decades of the twentieth century, the earliest phase of automotive expansion, there were three options to minimize or eliminate destructive knocking. The first was to keep the compression ratios of internal combustion engines relatively low, below 4.3:1: Ford’s best-selling Model T, rolled out in 1908, had a compression ratio of 3.98:1. The second was to develop smaller but more efficient engines running on better fuel, and the third was to use additives that would prevent the uncontrolled ignition. Keeping compression ratios low meant wasting fuel, and the reduced engine efficiency was of particular concern during the years of rapid post–World War I economic expansion, as rising ownership of more powerful and more spacious cars led to concerns about the long-term adequacy of domestic crude oil supplies and the growing dependence on imports. Consequently, additives offered the easiest way out: they would allow using lower-quality fuel in more powerful engines operating more efficiently with higher compression ratios.
During the first two decades of the twentieth century there was considerable interest in ethanol (ethyl alcohol, C2H6O or CH3CH2OH), both as a car fuel and as a gasoline additive. Numerous tests proved that engines using pure ethanol would never knock, and ethanol blends with kerosene and gasoline were tried in Europe and in the US. Ethanol’s well-known proponents included Alexander Graham Bell, Elihu Thomson, and Henry Ford (although Ford did not, as many sources erroneously claim, design the Model T to run on ethanol or to be a dual-fuel vehicle; it was to be fueled by gasoline); Charles Kettering considered it to be the fuel of the future.
But three disadvantages complicated ethanol’s large-scale adoption: it was more expensive than gasoline, it was not available in volumes sufficient to meet the rising demand for automotive fuel, and increasing its supply, even if it were used only as the dominant additive, would have claimed significant shares of crop production. At that time there were no affordable, direct ways to produce the fuel on a large scale from abundant cellulosic waste such as wood or straw: cellulose had first to be hydrolyzed by sulfuric acid and the resulting sugars then fermented. That is why fuel ethanol was made mostly from the same food crops that were used to make (in much smaller volumes) alcohol for drinking and medicinal and industrial uses.
The search for a new, effective additive began in 1916 in Charles Kettering’s Dayton Research Laboratories with Thomas Midgley, a young (born in 1889) mechanical engineer, in charge of this effort. In July 1918 a report prepared in collaboration with the US Army and the US Bureau of Mines listed ethyl alcohol, benzene, and a cyclohexane as the compounds that did not produce any knocking in high-compression engines. In 1919, when Kettering was hired by GM to head its new research division, he defined the challenge as one of averting a looming fuel shortage: the US domestic crude oil supply was expected to be gone in fifteen years, and “if we could successfully raise the compression of our motors . . . we could double the mileage and thereby lengthen this period to 30 years.” Kettering saw two routes toward that goal, by using a high-volume additive (ethanol or, as tests showed, fuel with 40 percent benzene that eliminated any knocking) or a low-percentage alternative, akin to but better than the 1 percent iodine solution that was accidentally discovered in 1919 to have the same effect.
In early 1921 Kettering learned about Victor Lehner’s synthesis of selenium oxychloride at the University of Wisconsin. Tests showed it to be a highly effective but, as expected, also highly corrosive anti-knocking compound, and they led directly to the consideration of compounds of other elements in group 16 of the periodic table: both diethyl selenide and diethyl telluride showed even better anti-knocking properties, but the latter compound was poisonous when inhaled or absorbed through the skin and had a powerful garlicky smell. Tetraethyl tin was the next compound found to be modestly effective, and on December 9, 1921, a solution of 1 percent tetraethyl lead (TEL) — (C2H5)4Pb — produced no knock in the test engine; TEL was soon found to be effective even when added in concentrations as low as 0.04 percent by volume.
TEL was originally synthesized in Germany by Karl Jacob Löwig in 1853 and had no previous commercial use. In January 1922, DuPont and Standard Oil of New Jersey were contracted to produce TEL, and by February 1923 the new fuel (with the additive mixed into the gasoline at pumps by means of simple devices called ethylizers) became available to the public in a small number of filling stations. Even as the commitment to TEL was going ahead, Midgley and Kettering conceded that “unquestionably alcohol is the fuel of the future,” and estimates showed that the 20 percent ethanol-gasoline blend needed in 1920 could have been supplied by using only about 9 percent of the country’s grain and sugar crops while providing an additional market for US farmers. And during the interwar period many European and some tropical countries used blends of 10–25 percent ethanol (made from surplus food crops and paper mill wastes) and gasoline, admittedly for relatively small markets, as the pre–World War II ownership of family cars in Europe was only a fraction of the US mean.
Other known alternatives included vapor-phase cracked refinery liquids, benzene blends, and gasoline from naphthenic crudes (containing little or no wax). Why did GM, well aware of these realities, decide not only to pursue the TEL route exclusively but also to claim (despite its own correct understanding) that there were no available alternatives: “So far as we know at the present time, tetraethyl lead is the only material available which can bring about these results”? Several factors help to explain the choice. The ethanol route would have required the mass-scale development of a new industry dedicated to an automotive fuel additive, one that could not be controlled by GM. Moreover, as already noted, the preferable option, producing ethanol from cellulosic waste (crop residues, wood) rather than from food crops, was too expensive to be practical. In fact, the large-scale production of cellulosic ethanol by new enzymatic conversions, promised to be of epoch-making importance in the twenty-first century, has failed to meet expectations, and by 2020 high-volume US production of ethanol (used as an anti-knocking additive) continued to be based on fermenting corn: in 2020 it claimed almost exactly one-third of the country’s corn harvest.
Some of us are destined to lead successful lives thanks to the circumstances of our birth. Some of us, like attorney Bruce Jackson, are destined to lead such lives in spite of them. Raised in New York’s Amsterdam housing projects and subjected to the daily brutalities of growing up a Black man in America, Jackson tells a story that is ultimately one of tempered success. Sure, he went on to study at Georgetown Law before representing some of the biggest names in hip hop — LL Cool J, Heavy D, the Lost Boyz and Mr. Cheeks, SWV, Busta Rhymes — and working 15 years as Microsoft’s associate general counsel. But at the end of the day, he is still a Black man living in America, with all the baggage that comes with it.
In his autobiography, Never Far from Home (out now from Atria), Jackson recounts the challenges he has faced in life, of which there is no shortage: from being falsely accused of robbery at age 10, to witnessing the murder of his friend at 15, to spending a night in lockup as an adult for the crime of driving his own car; the shock of navigating Microsoft’s lily-white workforce following years spent in the entertainment industry; and the end of a loving marriage brought low by his demanding work. While Jackson’s story is ultimately one of triumph, Never Far from Home reveals a hollowness, a betrayal, of the American Dream that people of Bill Gates’ (and this writer’s) complexion will likely never have to experience. In the excerpt below, Jackson recalls his decision to leave a Napster-ravaged music industry for the clammy embrace of Seattle and the Pacific Northwest.
Excerpted from Never Far from Home: My Journey from Brooklyn to Hip Hop, Microsoft, and the Law by Bruce Jackson. Published by Atria Books, an imprint of Simon & Schuster. Copyright © 2023 by Bruce Jackson. All rights reserved.
“We gotta figure out a way to stop this.”
In the late 1990s, the digital revolution pushed the music business into a state of flux. And here was Tony Dofat, sitting in my office, apoplectic, talking about how to stop Napster and other platforms from taking the legs out from under the traditional recording industry.
I shook my head. “If they’re already doing it, then it’s too late. Cat’s out of the bag. I don’t care if you start suing people, you’re never going back to the old model. It’s over.”
In fact, lawsuits, spearheaded by Metallica and others, the chosen mode of defense in those early days of the digital music onslaught, only served to embolden consumers and publicize their cause. Free music for everyone! won the day.
These were terrifying times for artists and industry executives alike. A decades-old business model had been built on the premise that recorded music was a salable commodity.
Artists would put out a record and then embark on a promotional tour to support that record. A significant portion of a musician’s income (and the income of the label that supported the artist) was derived from the sale of a physical product: recorded albums (or singles) on vinyl, cassette, or compact disc. Suddenly, that model was flipped on its head… and still is. Artists earn a comparative pittance from downloads or streams, and most of their revenue is derived from touring, or from monetizing social media accounts whose numbers are bolstered by a song’s popularity. (Publicly, Spotify has stated that it pays artists between $.003 and $.005 per stream. Translation: 250 streams will result in revenue of approximately one dollar for the recording artist.)
Thus, the music itself has been turned primarily into a marketing tool used to entice listeners to the real products: concert and festival tickets, and a social media advertising platform. It is a much tougher and leaner business model. Additionally, it is a model that upended the notion that record labels and producers needed only one decent track around which they could build an entire album. This happened all the time in the vinyl era: an artist came up with a hit single, and an album was quickly assembled, often with filler that did not meet the standard established by the single. Streaming platforms changed all of that. Consumers today seek out only the individual songs they like, and they do it for a fraction of what they used to spend on albums. Ten bucks a month gets you access to thousands of songs on Spotify or Pandora or Apple Music, roughly the same amount a single album cost in the pre-streaming era. For consumers, it has been a landmark victory (except for the part about artists not being able to create art if they can’t feed themselves); for artists and record labels, it has been a catastrophic blow.
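The back-of-the-envelope streaming arithmetic above is easy to make concrete. A minimal sketch, assuming the $0.003–$0.005 per-stream range the text attributes to Spotify; the function name and the $10 album price are illustrative, not from any official source:

```python
# Illustrative only: the $0.003-$0.005 per-stream range is the figure
# cited in the text; everything else here is a sketch.

def streams_needed(target_usd: float, rate_per_stream: float) -> int:
    """How many streams gross a target amount at a given per-stream rate."""
    return round(target_usd / rate_per_stream)

LOW, HIGH = 0.003, 0.005
MID = (LOW + HIGH) / 2  # $0.004 midpoint

# About 250 streams gross one dollar at the midpoint rate,
# matching the figure quoted in the text.
print(streams_needed(1.00, MID))  # 250

# Grossing the ~$10 price of a single pre-streaming album takes
# roughly 2,000-3,300 streams, depending on the rate.
print(streams_needed(10.00, HIGH), streams_needed(10.00, LOW))  # 2000 3333
```

The spread between those last two numbers is why per-stream rate disputes matter so much to working artists: at these rates, a song must be streamed thousands of times to match the revenue of one album sale.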
For everyone connected to the music business, it was a shock to the system. For me, it was provocation to consider what I wanted to do with the next phase of my career. In early 2000, I received a call from a corporate recruiter about a position with Microsoft, which was looking for an in-house counsel with a background in entertainment law — specifically, to work in the company’s burgeoning digital media division. The job would entail working with content providers and negotiating deals in which they would agree to make their content — music, movies, television shows, books — available to consumers via Microsoft’s Windows Media Player. In a sense, I would still be in the entertainment business; I would be spending a lot of time working with the same recording industry executives with whom I had built prior relationships.
But there were downsides, as well. For one thing, I was recently married, with a one-year-old baby and a stepson, living in a nice place in the New York City suburbs. I wasn’t eager to leave them—or my other daughters—three thousand miles behind while I moved to Microsoft’s headquarters in the Pacific Northwest. From an experience standpoint, though, it was almost too good an offer to turn down.
Deeply conflicted and at a crossroads in my career, I solicited advice from friends and colleagues, including, most notably, Clarence Avant. If I had to name one person who has been the most important mentor in my life, it would be Clarence, “the Black Godfather.” In an extraordinary life that now spans almost ninety years, Clarence has been among the most influential men in Black culture, music, politics, and civil rights. It’s no surprise that Netflix’s documentary on Clarence featured interviews with not just a who’s who of music and entertainment industry superstars, but also former US presidents Barack Obama and Bill Clinton.
In the early 1990s, Clarence became chairman of the board of Motown Records. As lofty a title as that might be, it denotes only a fraction of the wisdom and power he wielded. When the offer came down from Microsoft, I consulted with Clarence. Would I be making a mistake, I wondered, by leaving the music business and walking away from a firm I had started? Clarence talked me through the pros and cons, but in the end, he offered a steely assessment, in a way that only Clarence could.
“Son, take your ass to Microsoft, and get some of that stock.”
America’s first astronauts from the 1960s were all pulled from the highest ranks of the nation’s military. As such, NASA’s first few classes tended to conform to a rather specific demographic theme — white, male, flattop haircut you could set a watch to. By the mid-1970s, however, the space agency had gotten with the times and opened up the spacewalking profession to more than former Air Force and Navy test pilots.
In The New Guys, author Meredith Bagby follows the exploits of NASA’s astronaut class of 1978 — “Class 8,” which included America’s first women, African American, Asian American, and gay astronauts to fly to space — from the team’s selection through their mastery of cutting-edge technologies aboard the Space Shuttle and their history-making orbital missions. In the excerpt below, Class 8 receives a brutal introduction to the dangers that await them.
From The New Guys by Meredith Bagby. Copyright © 2023 by Meredith Bagby. Reprinted courtesy of William Morrow, an imprint of HarperCollins Publishers.
“Hey! We’ve got a fire in the cockpit!” a man screamed, then his voice cut out. Within seconds, another desperate voice cut through the static.
“We’ve got a bad fire . . . !” the second man shouted in pain.
“We’re burning up . . . !!!” a third howled.
Then the transmission faded into nothing but static.
In one of the many tiered seats in Mission Control, Ron McNair and his new classmates listened to a recording of the Apollo 1 fire. During a preflight test on January 27, 1967, astronauts Gus Grissom, Ed White, and Roger Chaffee had burned alive. Even though over a decade had passed since the accident, the pain and fear of the astronauts who perished was palpable to the room of new recruits.
The instructor surveyed the faces of the astronaut candidates. Are you sure you’re ready for this? The audio was a wake-up call, especially for those like Ron who had not served in the military and had never had a job with life-and-death consequences. If this reality was too much for any of them to accept, the instructor suggested, now was the time to go. No one budged.
A few weeks earlier, as Ron moved his family across the country from left-leaning Malibu, California, to the Lone Star State, the summer sizzled. Disco hits from the Bee Gees, “Night Fever” and “Stayin’ Alive,” blared from the radio. Billboards advertised the new Hollywood blockbuster Grease, starring John Travolta and Olivia Newton-John. In the nation’s capital, almost a hundred thousand demonstrators marched in support of the Equal Rights Amendment—at the time, the largest march for women’s rights in US history. Muhammad Ali was on the verge of making history at the Louisiana Superdome, becoming the first man to win the World Heavyweight title three times.
When Ron and his wife, Cheryl, arrived in Houston, they found a little starter apartment before moving to Clear Lake along with the Onizukas and the Gregorys. Everyone who had kids—or planned to—wanted a lawn for football and a cul-de-sac for bike riding. The neighborhood’s proximity to the middle and high schools made it the obvious choice for families. Single astronauts like Sally Ride, Kathy Sullivan, and Steve Hawley settled into apartments right outside Johnson’s back gate with a short commute, volleyball court, and communal barbecue pit.
On the Monday after the July 4th holiday, Ron drove through the gates of Johnson Space Center for his first day of work. Looking up from his baffling acronym-filled schedule, Ron spotted a few of his classmates and followed them to Building 4, the home of Johnson’s Flight Crew Operations. Everyone was rushing to the Monday morning all-hands meeting, a staple of the Astronaut Office since the Mercury days.
Standing watch from their office doors, Sylvia Salinas, Mary Lopez, and Estella Hernandez Gillette, all in their twenties, took in the excitement as the new astronauts stormed the hallways. The Hispanic American administrative staff — working in and around the Astronaut Office — came to be known as the Mexican Mafia. As the liaisons for George Abbey and John Young, Sylvia and Mary, and later Estella, ran the show behind the scenes, making sure things went smoothly in the Astronaut Office. Up until then, the astronauts they worked for had been military men, older in age and more conventional in style; they did not fraternize with support staff. Now, “kids like them” were rolling in. The arrival of Astronaut Class 8 was like a breath of fresh air.
A large conference table surrounded by two rings of chairs dominated Room 3025, the locus of the Monday meeting. Assuming the first ring was reserved for administrators and senior astronauts, Ron took a seat in the back row, as did the rest of his class. Everyone, that is, except the blond, mustachioed Rick Hauck, a US Navy commander who by military standards was the most senior-ranking pilot of their class. Hauck took a seat at the table. Some in the room gasped. Others eyed him with suspicion. Wow, he must either be a fool or the most confident bastard among us. Maybe both. Either way he made an impression.
Like Hauck, the fifteen fighter pilots in Ron’s class had plenty of swagger and bravado, and mixed easily with the veteran astronauts. The old guys, twenty-eight in all, including moonwalkers John Young and Alan Bean, whom Ron met during interview week, filled the inner circle. Among them were astronauts still itching for their first trip to space, like Bob “Crip” Crippen, the baby of the group at forty years old, and Richard “Dick” Truly, both career military pilots who had flown for both the Navy and Air Force. These yet-to-fly guys were caught between programs, too late for Apollo and—so far—too early for the shuttle. Crippen and Truly were part of Astronaut Group 7, who had been transferred to NASA after the cancellation of the Manned Orbiting Laboratory (MOL), a classified Cold War military project developed to acquire surveillance images from space. After a decade at the agency, the former MOL astronauts had only ever flown a desk.
Everyone here wanted a ticket to space, but the ten interesting people would be setting historical precedent, breaking barriers that in the past restricted people like them from space travel. Of the six women in the room, one would be the first American woman in space. While the Soviets had flown the first female astronaut, Valentina Tereshkova—being the first American woman in space would earn a prominent place in the annals of history. In 1978, no Black person had flown to space. Ron, along with Guy Bluford, and Fred Gregory would compete to be the first, while Ellison Onizuka would almost certainly be the first Asian American to fly. Guy and Fred, both Vietnam vets, and El, an Air Force test pilot, all spoke the military language of the old guys. Ron was an outsider even among outsiders.
John Young, chief of the Astronaut Office, began the meeting, mumbling “a few forgettable words of welcome” while staring at his shoes. Though he had braved the depths of space four times, on both Apollo and Gemini, Young had not conquered public speaking. Compact, with a jockey’s build, Young was a handsome Navy devil with big ears and an aw-shucks demeanor that belied how truly meticulous he was. He preferred solving thorny engineering problems to dealing with management issues, and yet here he was as head of the Astronaut Office. He explained to the new class that they were not yet astronauts; they were still astronaut candidates, or “AsCans” for short. Only after two years of training would they earn the title astronaut and a silver pin to mark the achievement.
Inspired by Navy and Air Force aviator badges, the pin depicted a trio of rays merged atop a shining star and encircled by a halo denoting orbital flight. The silver pin meant you were flight-ready, but the gold pin meant you had flown to space. That’s when you make it. Young then left the group with a bit of sage advice: “Don’t talk about nothing you know nothing about.” Got it. So basically, keep our mouths shut.
As the old guys left the room, they once-overed the new guys. Quite simply, the old guys were a different generation. They were veterans, test pilots, and guys who had never worked with women or civilian graduate students. Underneath their pique was also perhaps a tinge of fear. The line to ride the bird just got a whole lot longer; maybe they would miss their chance altogether.
Who are these guys anyway? Hell, half of them are civilians, wet behind the ears, fresh off their mother’s teat. They traded in high grades and accolades, not in life-or-death. The old guys shook their heads. Those Fucking New Guys. “The Fucking New Guy,” a military term for the newest grunt in the unit, seemed to suit Astronaut Class 8 perfectly. So was born the official class nickname: TFNG. In polite company, the TFNGs referred to themselves as “Thirty-Five New Guys,” but everyone knew what the term really meant.
After the meeting, secretary Sylvia Salinas handed the New Guys their official NASA portraits and asked them to create signatures for the auto-pen machine. The agency would print thousands of autographed photos. Do thousands of people want our autograph? Ron wondered. It’s astronaut insurance, a veteran astronaut quipped. If you die, your family will have something to sell. The joke did not get any laughs.
There was a time in the last century when we, quite foolishly, believed incineration to be a superior means of waste disposal to landfills. And, for decades, many of America’s most disadvantaged have been paying for those decisions with their lifespans. South Baltimore’s Curtis Bay neighborhood, for example, is home to two medical waste incinerators and an open-air coal mine. It’s ranked in the 95th percentile for hazardous waste and boasts among the highest rates of asthma and lung disease in the entire country.
The city’s largest trash incinerator is the Wheelabrator–BRESCO, which burns through 2,250 tons of garbage a day. It has been in operation since the 1970s, belching out everything from mercury and lead to hydrochloric acid, sulfur dioxide, and chromium into the six surrounding working-class neighborhoods and the people who live there. In 2011, students from Benjamin Franklin High School began to push back against the construction of a new incinerator, setting off a decade-long struggle that pitted high school and college students against the power of City Hall.
In Fighting to Breathe: Race, Toxicity, and the Rise of Youth Activism in Baltimore, Dr. Nicole Fabricant, Professor of Anthropology at Towson University in Maryland, chronicles the students’ participatory action research between 2011 and 2021, organizing and mobilizing their communities to fight back against a century of environmental injustice, racism and violence in one of the nation’s most polluted cities. In the excerpt below, Fabricant discusses the use of art — specifically that of crankies — in movement building.
Excerpted from Fighting to Breathe: Race, Toxicity, and the Rise of Youth Activism in Baltimore by Nicole Fabricant, published by University of California Press. Copyright 2022.
As the students developed independent investigations, they discovered what had happened in the campaigns against toxins that preceded their own struggle against the incinerator. They learned that the Fairfield neighborhood, before being relocated to its current site, had been situated near where Energy Answers was planning to build its trash-to-energy incinerator. At the time of the students’ investigations, this area was an abandoned industrial site surrounded by heavy diesel truck traffic, polluting chemical and fertilizer industries, and abandoned brownfield sites.
Students read that the City had built basic infrastructure in Wagner’s Point, the all-white (though poor and white ethnic, to be clear) community on the peninsula, in the 1950s, nearly thirty years before doing so in Fairfield, which was located alongside Wagner’s Point but was all (or almost all) Black. As Destiny reiterated to me in the fall of 2019:
Wagner’s Point was predominantly white and Fairfield predominantly Black, but both communities were company towns, living in poverty, working in dangerous hazardous conditions, and forced to live in a toxic environment…. On the surface, this history can be read as a story of two communities, different in culture and race, facing the issue together. But this ignores the issue of racism that divided the two communities. For instance, Fairfield did not get access to plumbing… until well into the 1970s. This is an example of structural racism. It is also a story not told by our history books.
The students talked in small groups about systemic and structural racism and unfair housing policies. They investigated the evacuation of Fairfield Housing. They learned that former residents were forcibly relocated to public housing and were offered $22,500 for renters and up to $5,250 per household. They also received moving costs of up to $1,500 per household. When 14 households remained in Fairfield a decade later, then-Mayor Kurt Schmoke stated that he would prefer to move all residents out of Fairfield, but the city did not have any money for relocation. This history provoked Free Your Voice youth to think beyond their community to how structural racism shaped citywide decisions and policies.
Despite attempts to integrate school systems in the 1950s and the passage of civil rights legislation in the 1960s intended, specifically, to mitigate racism in housing policies, the provision of public education and the regulation of housing practices remained uneven in the 1970s (and into the present). Students learned that in 1979 a CSX railroad car carrying nine thousand gallons of highly concentrated sulfuric acid overturned and the Fairfield Homes public housing complex was temporarily evacuated. That same year, they read, an explosion at the British Petroleum oil tank, located on Fairfield Peninsula, set off a seven-alarm fire. All of this led the students to deeper inquiry.
Figuring out the ways in which structural racism shaped contemporary ideas about people, bodies, and space is something that Destiny often referred to when speaking publicly. Destiny explained that studying “history allowed us to see our community in a way that gave us the ability to build power or collective strength. So, how do you confront this history, this marketplace?” Building power within the school was about “re-education,” she said, but it was also about rebuilding social relationships across the community and helping residents to understand the structural conditions and histories sustaining inequities that others (especially white others) tried to explain away using racist stereotypes and tropes (e.g., Black youth as “thugs”; “they’re poor because they’re lazy”). These tropes subtly and not so subtly suggested racial and cultural inferiority.
As a group, the students worked to establish a presence in the community and to create spontaneous spaces for dialogue and discussion. They attended a Fairfield reunion in Curtis Bay Park during the summer of 2013, where approximately 150 former Fairfield Homes residents gathered to celebrate their history, reminisce, and have a cookout together. Gathered on the grass next to the Curtis Bay Recreation Center, former residents reminisced about what life was like in the projects. At one point, an elder participant shared with Destiny, “Fairfield was the Cadillac of housing projects…. We were all a family, we took care of one another.” The Free Your Voice students engaged with living history as they listened and learned.
For many of the students, the combined processes of reading texts and listening to elder residents’ stories moved them from numbness to awareness. Being able to discuss what they learned in sophisticated conversations with their peers and the experts they sought out helped to build their confidence as activists and adult interlocutors.
While analysis and study were key to building change campaigns, the students also recognized that building a sociopolitical movement of economically disadvantaged people required more than mobilizing bodies. To be effective, they were going to have to move hearts and minds.
In 2014, Free Your Voice students decided to strengthen the emotional and relationship-building aspects of their campaign by adopting art forms, including performance and storytelling, into their communication efforts. Destiny began a speech she delivered at The Worker Justice Center human rights dinner in 2015 by quoting W. E. B. Du Bois: “‘Art is not simply works of art; it is the spirit that knows beauty, that has music in its being and the color of sunsets in its handkerchiefs, that can dance on a flaming world and make the world dance, too’” (Watford 2015). Art — in the form of a vintage performance genre known as “the crankie” and rap songs — became a tool the students utilized to tell their stories to much broader publics and to boost emotional connections with their allies. Performances particularly allowed youth to be creative and inventive. Their productions were often malleable. Sometimes, Free Your Voice youth would rewrite a script based on audience feedback. As a result, their performances were often improvisational, and they invited residents to be a part of the storytelling. This allowed the student-performers to develop strong narrative structures and especially realistic characters.
Not only did students do art, but they also invited artists, including performers, to join the Dream Team to broaden both the appeal and impact of the Stop the Incinerator campaign. One artist at the Maryland Institute College of Art, Janette Simpson, spoke to me at length about the genesis of her commitment to Free Your Voice’s organizing, and how that commitment deepened and extended her work with other campaigns originating with The Worker Justice Center. Free Your Voice students approached Simpson, with their teacher Daniel Murphy acting as their mediator, about incorporating her work in theater into their campaign. They sent her a recent report on the environmental history of the peninsula and asked that she read it. That report became the hook that convinced Simpson to collaborate:
I had been thinking about how art and artists can serve social movements, and how artists also have agency in the making of their artwork. Or maybe thinking about autonomy. Free Your Voice youth suggested I read the Diamond report, which was written by a team of researchers from the University of Maryland Law School. I remember being like, Wow! What a story! All these visuals came to my mind… like the guano factories, the ships, these agricultural communities, this Black community versus the white community… the relationship to the water and the relationship to the city. So I decided I would try to illustrate a version of that report in a way. Like, what did people look like in 1800s, and what were they wearing? … Then I realized that this is not my history, who am I to tell someone else’s story? I need to think more symbolically, and then it came to me to write this illustrative history as a fable or an allegory.
Which is what she did, alongside Terrel Jones (whose childhood lived experiences I detailed in chapter 2). Terrel and Simpson created a crankie, an old storytelling art form popular in the nineteenth century that includes a long, illustrated scroll wound onto two spools. The spools are loaded into a box that has a viewing screen and the scroll is then hand-cranked, hence the name “crankie.” While the story is told, a tune is played or a song is sung. Terrel and Simpson created a show for the anti-incinerator campaign that was performed throughout the city for audiences of all ages and walks of life. The Holey Land, as their show was titled, was an allegory about the powerful connection between people and the place they call home. In this tale, the Peninsula People and the magic in their land are threatened when a stranger with a tall hat and a shovel shows up with big ideas for “improving” their community. As storybook images scroll past the viewing screen, the vibrant and colorful pictures of a peninsula rich in natural resources, including orange and pink fish, slowly get usurped by those of the man with the shovel building his factories, and the Peninsula People are left to ponder the fate of their land. The story ends with a surprising twist, and a hopeful message about a community’s ability to determine their own future.