The Indelible Influence of Asimov: Apple’s Streaming Service Debuts Foundation

It has inspired innovations in sociology and psychology. It has sparked the imagination of liberals like Paul Krugman and conservatives like Newt Gingrich. It even shares credit (or blame) for some of Elon Musk's innovations. Isaac Asimov's Foundation series is, arguably, one of the most influential literary creations in history. And now, it's finally coming to the screen. Two prior attempts to adapt the series for film, in 1998 and 2008 respectively, failed. The new TV series began development in 2014, was first picked up (and later abandoned) by HBO, and will finally debut on Apple TV+. Apple TV+ doesn't have a whole lot of content yet, but the Foundation acquisition will help define the service.

Asimov's iconic series, the winner of a one-time-only Hugo Award for "Best All-Time Series," concerns a mathematician, Hari Seldon, who predicts the fall of the Galactic Empire, its descent into a 30,000-year-long dark age, and the formation of a second empire. Seldon achieves this prediction using statistical analysis of mass occurrences, essentially "big data," as we would call it now. In response to these predictions, and foreseeing a destructive anti-intellectualism accompanying the fall of his empire, Seldon creates "foundations" of scientists and engineers on opposite ends of the galaxy as seeds for the new empire.

The premise is nearly unparalleled in science fiction, and in literature in general. At the very least, it's one of the most original and nuanced story premises around. It inspired an initial series, then a collection of preludes and sequels, and eventually spinoffs and variations by other authors. It was apparently a challenge to put it on the screen in the right way. Apple is using Troy Studios in Limerick, Ireland, to complete the ten episodes of the series.

One thing this production has going for it is that some of the producers already have strong sci-fi and fantasy credentials, the most notable of whom is David S. Goyer, who previously worked on The Dark Knight. Josh Friedman, who left the project earlier this year but will retain executive producer credit, also headed up the Sarah Connor Chronicles series. Jonathan Nolan, who co-wrote Interstellar, was first tapped to write the series.

Of course, none of those productions capture the conceptual scope of Foundation. In a strange way, the closest analog to Asimov's work is Douglas Adams' satirical Hitchhiker's Guide to the Galaxy series, and Adams' debt to Asimov is self-consciously obvious. The key to that comparison lies in each author's use (Asimov first, and Adams in satire) of the Encyclopedia Galactica, an encyclopedia containing all knowledge in the known galaxy. Carl Sagan devotes a chapter of Cosmos to the concept, and Adams contrasts it with the more user-friendly Hitchhiker's Guide.

Back in 2014, when HBO first announced its intent to produce the series, Mark Strauss at Gizmodo described the significance of the project. What is fascinating about Foundation, Strauss wrote, is that it both epitomizes and defies science fiction as a genre. Although it's a story of the "fall and rise of future galactic empires," the story "contains virtually none of the usual tropes that are associated with science fiction." There are no aliens, even though the characters are found across an entire galaxy. Society is neither utopian nor dystopian. The faster-than-light technology and other technological advances act "as the background, not the driver, of the plot."

Foundation’s psychohistorical theme is enduring and has influenced writers, musicians, social scientists, politicians, and others. While it’s doubtful that a television adaptation will do justice to the depth of its themes, I’m going to watch with an open mind.


The Impact of Ancient Encryption

Communication is an essential part of humanity, and, presumably, we need to be honest in most of our communicative acts in order for society to function. Yet, dishonest communication has often been a driver of history, and systemic communicative dishonesty — like encryption and cryptography — has been around for millennia. In fact, the development of this practice in the ancient world and humanity’s continuous engagement in it can be seen as a testament to the ingenuity and intellectual capacity of human thinking.

Discussions of ancient cryptology consistently reference the "Caesar shift," but cryptographic symbol-making and code breaking happened long before Caesar, often to protect the commercial secrets of craftsmen and merchants rather than exclusively for warfare. For example, a clay tablet from Mesopotamia, dating to around 1500 BC, encrypted a secret recipe for pottery glaze. The Kama Sutra includes instructions for secret writing between lovers, something contemporary politicians may want to make use of.

Still, military use seems to be the main driver of the growth of cryptography in the ancient world. Take the Caesar shift, a monoalphabetic substitution cipher. It simply called for a shift of three letters: in English, A becomes D, B becomes E, and Z wraps around to C. Although it seems obvious now, "the shift," or Caesar cipher, served its purpose for hundreds of years, likely because many enemy troops were illiterate, and even the most learned officers and analysts may not have had a sophisticated enough command of the enemy's language to know what characters made up a foreign alphabet.
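The shift described above is mechanical enough to fit in a few lines of code. Here is a minimal sketch (the function name and the three-letter default are illustrative choices, not anything historical):

```python
# A minimal sketch of the Caesar cipher: shift each letter three
# places forward in the alphabet, wrapping from Z back to A.
def caesar_shift(text, shift=3):
    result = []
    for ch in text.upper():
        if ch.isalpha():
            # Map A..Z to 0..25, shift with wraparound, map back.
            result.append(chr((ord(ch) - ord("A") + shift) % 26 + ord("A")))
        else:
            result.append(ch)  # leave spaces and punctuation untouched
    return "".join(result)

print(caesar_shift("ATTACK AT DAWN"))      # DWWDFN DW GDZQ
print(caesar_shift("DWWDFN DW GDZQ", -3))  # shifts back to the plaintext
```

Decryption is just the same function run with the negative shift, which is exactly why the scheme is so fragile once the method is known.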

Around 800 A.D., the Arab mathematician and philosopher Al-Kindi developed the technique of frequency analysis, breaking Caesar ciphers for good. Still, shifting letters could work, it seemed, if the shifts were not of a simple-minded consistency. Enter the polyalphabetic encryption method: the first letter of a message could use one shift, the second another, and so on. Frequency analysis could break these codes too, but doing so took much longer. This cat-and-mouse game between frequency analysis and symbol-shifting would become the trans-historical theme of symbol-based cryptography.
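Al-Kindi's insight was simply that letters occur at predictable rates, so a monoalphabetic cipher leaks its secret through counting. A toy version of the attack (the sample ciphertext is invented for illustration):

```python
from collections import Counter

# Frequency analysis in miniature: count letters in a ciphertext.
# In a simple shift cipher the most common ciphertext letter usually
# corresponds to a high-frequency plaintext letter such as E.
def letter_frequencies(ciphertext):
    letters = [ch for ch in ciphertext.upper() if ch.isalpha()]
    counts = Counter(letters)
    total = len(letters)
    # Ordered from most to least common.
    return {ch: round(n / total, 3) for ch, n in counts.most_common()}

# "SEND REINFORCEMENTS WE ARE GOING TO ADVANCE", Caesar-shifted by 3.
cipher = "VHQG UHLQIRUFHPHQWV ZH DUH JRLQJ WR DGYDQFH"
freqs = letter_frequencies(cipher)
# 'H' tops the list; shifting it back to 'E' exposes a shift of 3.
```

Against a polyalphabetic cipher the same counts flatten out, which is why the attack still works but needs far more ciphertext.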

Another step in the evolution of cryptography was the homophonic substitution cipher, thought to have originated in the fourteenth century, which often replaced letters with non-alphabetic symbols. Rather than a one-to-one mapping, a homophonic system gives each plaintext letter several possible cipher symbols, with high-frequency letters receiving more substitutes than low-frequency ones. If the letter S is commonly used, for example, the code provides different substitutions for S in the first instance, the second instance, and so on. This flattened letter frequencies, creating yet another challenge for frequency analysis, and the cat-and-mouse game of encryption and decryption continued.
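The idea is easy to sketch. The symbol table below is entirely invented for illustration; real homophonic ciphers used number tables or nomenclator symbols, but the principle of giving common letters more substitutes is the same:

```python
import random

# A toy homophonic substitution: frequent letters get several cipher
# symbols (here, two-digit numbers), so their frequencies flatten out.
# This symbol table is invented for illustration.
HOMOPHONES = {
    "E": ["17", "42", "58", "73"],   # high-frequency letter: many symbols
    "T": ["21", "66", "90"],
    "A": ["05", "39"],
    "X": ["88"],                     # rare letter: a single symbol
}

def encrypt(plaintext, table, rng=random):
    out = []
    for ch in plaintext.upper():
        if ch in table:
            out.append(rng.choice(table[ch]))  # any symbol for this letter
    return " ".join(out)

print(encrypt("EAT", HOMOPHONES))  # e.g. "42 05 66" -- varies per run
```

Because each symbol still maps back to exactly one letter, the intended recipient decrypts with a simple reverse lookup; only the attacker's statistics are disturbed.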

By WWII, complex substitution ciphers whose keys changed each day were testing the limits of early computing machines, which had to grind through tens of thousands of combinations in the hope of cracking each day's code within its 24-hour cycle to gather bits of information. Meanwhile, revolutionary digital and sound-based encoding was changing the entire nature of secret communication through pulse-code modulation.

Today, we still rely on encryption, and encrypt everything from secret non-state currencies to personal data. Protecting data is, therefore, an essential aspect of our lives.

Despite the frequency of dishonest communication and the relevance of such communication to the functioning and dynamism of human society, only a tiny percentage of humans actually understand the processes of encryption. It’s fun, regardless, to see the thread running from rudimentary symbolic manipulation in Mesopotamia or the Roman Empire to pulse-code modulation for verbal and audio encryption to the locked (and sometimes unlocked) encrypted currencies of today. The common denominator is substitution — the (dishonest) gesture of making a symbol mean something that’s not what it says.


Sci-Fi Pets Roundup

In Poor Richard’s Almanack, Benjamin Franklin includes dogs in his list of essentials for a good life. “There are,” he writes, “three faithful friends: an old wife, an old dog, and ready money.” In many science fiction scenarios, spouses and money are in short supply, but pets — either of a traditional earthling or exotic alien nature — are more common. An animal companion might save the main characters’ lives, provide comic relief, or stumble upon a clue or revelation that changes the course of the plot.

Below, you’ll find a small list of memorable science fiction and fantasy pets. I’ve tried to keep it to creatures that are not intellectual peers to the protagonists (so no Blood from “A Boy and His Dog” even though that canine is intriguing), because I want to preserve something of the pet relationship. Some of these pets are earthlings, some are not, and some are in a class of their own.

First we have Willis the Bouncer from Robert Heinlein's Red Planet. If you're unfamiliar, you can read a little about Willis, and watch a clip from the Fox animated miniseries of Heinlein's book, here. Bouncers are furry ball-like creatures that are alternately adorable and weird (they can also extend out certain appendages, so they aren't just fur-balls like tribbles). Their most endearing (and plot-developing) trait in the book is their mimicry. They can memorize and recite entire conversations, which is instrumental to the book's protagonists stumbling upon other characters' conspiratorial machinations.

Our second example is more fantasy than science fiction and, perhaps, more horror-comedy than anything else. Zero, the ghost dog from Tim Burton's The Nightmare Before Christmas, is quite adorable, very faithful and, above all, cheerful. His cheer is much needed by Jack the Pumpkin King, who faces an existential crisis that constitutes the primary storyline. Zero is noteworthy, I think, because he is undead, but still endearing: an important character in a film whose uniqueness stems from establishing the sheer everyday normality of an entire community of the undead.

Next, we have the roach — yes, cockroach — from Disney's animated sci-fi feature-length film WALL-E. The roach is mildly adorable, a quality conveyed to the audience through its actions and its connection to the robot WALL-E. This persona also relies on the cliche that if civilization ever collapses, cockroaches will play a prominent role in post-civ management. They do, after all, survive everything. This particular pet gets shot and smashed up, and still manages to survive. The roach also facilitates the relationship between WALL-E and Eve, lending a degree of practicality and necessity to their existence as far as the plot is concerned.

Finally, we have Samantha, the beloved German Shepherd from I Am Legend, a post-apocalyptic film based on the novel of the same name. Neville, the main character, is a mostly-lone survivor of a global virus, working to develop a cure for the disease. Samantha helps Neville hunt for food and stave off hives of mutants. Her tragic, heroic death brings emotion and humanity to the story, as pets in storylines so often do.

Although some of these pets challenge French author Colette's famous line that "our perfect companions never have fewer than four feet," they each still prove to be perfect companions to often-reluctant heroes. So let's raise a glass to the pets of science fiction.


Are We In A Videogame, Or Something Else?

A prominent theme in both science fiction and advanced physics is the possibility that our lives are not truly our lives, that everything we thought was true was an elaborate lie—that this reality is not what we think it is. The most popular articulation of that theme is that we are part of a scientific experiment, a simulation.

Even before the Matrix film series, the idea that humans were experiments or simulations took hold in short stories and novels (it was the twist of Douglas Adams' Hitchhiker's Guide to the Galaxy). But things got interesting in 2003, when Oxford University philosopher Nick Bostrom published a paper arguing for three possible explanations of reality, one of which suggested that since advanced civilizations would have the ability to create many simulations of reality, there would be more simulated worlds than non-simulated ones, and thus a good possibility that we are living in a simulated one. In 2019, computer scientist Rizwan Virk published a book with the provocative title The Simulation Hypothesis: An MIT Computer Scientist Shows Why AI, Quantum Physics, and Eastern Mystics All Agree We Are In A Video Game.

The argument even got a creative boost from Elon Musk who, when he wasn't freaking out about the apocalyptic potential of artificial intelligence, was pontificating that we've gone from unsophisticated games like Pong to "photorealistic, 3D simulations with millions of people playing simultaneously, and it's getting better every year. Soon we'll have virtual reality, augmented reality." And, he continues, given the billions of combinations of video game setups using such realistic technology, "it would seem to follow that the odds that we're in base reality is one in billions." Not a precise or even cogent argument, but one that points out the reasonability of doubting our own authenticity.

The "we're in a video game" hypothesis has one big problem: Why? This is what physicist Marcelo Gleiser of Dartmouth asks regarding Bostrom's articulation of the argument. Why would an advanced society run a simulation about a less-advanced society? Anything they could learn from doing so could be gleaned in more efficient ways. So if we're concerned about the motives of third-party actors, or simulators, then we are likely to find the hypothesis inadequate.

In fact, both Musk and Gleiser ignore an important additional possibility: That we may be in a sort of simulation, but it’s a simulation of the premodern, not the postmodern. We could be characters in a psychedelic vision: a dream-state induced by mushrooms or other psychedelics. What’s more—that collective vision could be quantum.

How do you figure? Well, psilocybin produces a brain state much like that of actual dreaming. Another naturally occurring chemical, DMT, creates visual hallucinations that are almost universally described as trips into alternate realities or dimensions, like elaborate dreams. So call this the dream hypothesis: our lives are psychedelic dreams featuring ourselves and others as characters. Maybe we're all having hallucinations; maybe we are the hallucinations.

Where does quantum reality come in? Quantum reality is very much like a dream, and some of the more dream-like, un-logical facets of advanced quantum theory resemble dream phenomena: oscillating neutrinos, for instance, effectively hold two identities at once, much as figures in dreams do. At least one thinker believes dreams may be interactions between quantum parallel worlds.

For some reason, the psychedelic quantum dream hypothesis sits better with me than the "we're inside some alien's PlayStation" hypothesis, and not because, as Terence McKenna would have put it, mushrooms take us back to the premodern in order to push us forward into the postmodern. What mainly inspires me about the dream hypothesis is how easily it could descend into a (hopefully) delightful chaos, like the scene in the animated movie Rarg where, having discovered the kingdom inhabited by the characters was a sleeping man's dream, everyone begins turning into pink flamingos.


Furry, Feathered or Otherwise Non-Human Electoral Candidates in Fact and Fiction

Humans have been imagining non-human animals as human for at least 40,000 years and probably longer; the "Lion-Human" of Hohlenstein-Stadel cave in Germany is some 40,000 years old. We have invented non-human beings who stand for human traits, and anthropomorphized imaginary animals. We have even integrated non-human animals into human activities (think of all the driving-dog anecdotes; and don't laugh, but there are projects underway to teach dogs to drive) as a way of questioning the supposedly unique human capacities that equip us to run the world. As long as there has been political satire, satirists have portrayed the fallibility of human political activity through the lens of non-human animals, free to exaggerate the gestures and tendencies we find irritating in each other as political agents.

In the real world, and just confining the phenomenon to the United States (there are numerous instances elsewhere), many localities appear open to non-human municipal leadership. A black lab was elected mayor of Sunol, California in 1981; a golden retriever won the same position for life in Idyllwild, California in 2014; not to be outdone, a cat won the mayorship of Omena, Michigan in 2018. In 2019 in Fair Haven, Vermont, the mayoral contest came down to a Nubian goat and a Samoyed dog, and the goat won by two votes. Party-level elections have animal attraction as well: a mule named Boston Curtis won a Republican precinct seat in Washington in 1938, unanimously, 51 votes to zero.

There have been many more nonhuman candidates than electeds. Given how easy it would be to exclude these beings from ballots legislatively, one could conclude that municipalities' acceptance of the occasional canine mayor (or, in the case of Rabbit Hash, Kentucky, canine mayors in perpetuity since 1998) serves as a performative critique of politics taking itself too seriously.

While non-human electeds in the real world may serve as satire for the political process, in children's literature, many anthropomorphized animals have served as learning devices and satirical targets centered on the personalities of the candidates and the complexities of holding elected office. There's President Squid, by Aaron Reynolds (illustrated by Sarah Varon), about a cephalopod who is the epitome of egotistical candidates (reason number four: presidents love to talk, and Squid talks all the time). And Paul Czajak's Monster Needs Your Vote (illustrated by Wendy Grieb) offers up an animal pol with good intentions, who changes his platform in response to other people's feedback, to focus more on education and literacy.

But ruling the pond is definitely the 2004 book Duck for President, by the acclaimed Click, Clack, Moo team of writer Doreen Cronin and illustrator Betsy Lewin. Duck gets fed up with what he perceives as Farmer Brown's autocratic rule and decides to run for leader of the farm. Because no self-respecting duck would run for office without doing some research, he visits his mayor's and governor's offices, then the White House. He returns having decided that serving in office is too much work, all complexity and no fun. First Lady Laura Bush read this book to children on the White House lawn in 2007. This was not perceived as ironic by anyone.

Scottish philosopher David Hume wrote: “There is an universal tendency among mankind to conceive all beings like themselves, and to transfer to every object, those qualities, with which they are familiarly acquainted, and of which they are intimately conscious.” The history of animals in American electoralism in fact and fiction suggests that this may not always be due to assumptions of our own superiority.


Robots May Gain The Right to Vote, But Can They Electioneer?

This is not about the potential of Android phones to facilitate voting. It's about whether the other kind of android, the artificially intelligent robot type, would not only have the abstract right to vote, but would have the potential to be a full participant in the electoral process. To "electioneer" means to actively participate in campaigning (or at least passively, through the wearing of buttons or whatnot, but let's talk about active campaigning).

More than a bit has been written about android voting rights. The basic argument is based not simply on AI robots becoming more like humans, but on the progressively thinning line between robots and humans, in bodies as well as minds. But the position on the robot mind is still important. Leading AI scientist Ray Kurzweil says robots will gain "consciousness" by 2029. Elon Musk wants to ensure that such conscious artificial beings are "friendly." Perhaps extending voting rights would be an acceptable olive branch. Whatever one considers to be the main arguments for and against AI robot suffrage, we've heard the case made.

But there are two other questions. The less predictable one, but the more relevant to those of us who work in campaigns, is whether AI robots would be involved in campaigning in addition to simply voting. Campaigning is an extension of that right: although there are instances (including the increasingly controversial policies of some states that disenfranchise felons) in which one may not vote but might still choose to campaign, philosophically it is the right to vote that generates the citizen's enthusiasm and interest in campaigning. Will campaigns be able to hire robot consultants? Welcome robot volunteers?

The immediate objection is that an AI unit with a quantum processor could make all sorts of predictions pertaining to voter geographies or demographics. It could also develop strategic microtargeting algorithms similar to those practiced by politicians around the world since 2016, techniques that have basically bypassed the deliberative process of campaigning to spread negative messaging, disinformation, and more, overwhelming the ability of fact-checkers to scrutinize campaign messages. Of course, if nations were to pass laws prohibiting data-driven social media microtargeting, they would presumably also prohibit robots from doing it. Outside that scenario, we may be looking at a future where candidates could recruit armies of robot volunteers who could go door-to-door without getting tired, keep making phone calls without wanting to stab themselves in the ears after two hours, and so on.

The more predictable question is whether androids could run for office. Isaac Asimov published a short story, "Evidence," about allegations that a politician and mayoral candidate named Stephen Byerley is actually a robot. But Asimov doesn't resolve the question of whether the laws existing in his universe (which prohibit robots from running in elections) are just or unjust. Byerley's identity is never completely settled, but some characters speculate that android elected officials would not be a bad thing.

Of course, Asimov's three laws of robotics effectively undermine any feasible scenario of robots running for office. In particular, the Second Law's imperative that a robot must obey human beings' orders would strip a robotic leader of any effective agency or leadership ability. At the very least, any scenario involving robots as elected officials requires jettisoning Asimov's laws. And I suspect that the "following orders" law will be hard for those of us in the real world to let go of as we move closer to autonomous AI. It seems more likely that robots, when they develop consciousness, will make themselves useful helping human candidates win, rather than trying to win office themselves, unless some kind of robot-proletarian revolt comes to pass.


“All You Need Is Love” and the Comm Tech Marvel of 1967

At a time when political activists and campaigns can host Zoom conferences with over a thousand attendees, and when astronauts can make outer space Skype calls, it’s easy to feel numbed to the sense of wonder that may come from contemplating our remarkable advances in communications technology. But June 25th is approaching, and if you’re a technology geek who also likes music and Beatles trivia, that date is going to have a special meaning for you. This year, it will be the 53rd anniversary of one of the most significant performances in the history of rock and roll, geopolitics, and global communications technology.

Conventional history of the Beatles’ “All You Need Is Love” emphasizes its role as anthem of the “flower power” movement, a term coined by Allen Ginsberg and associated with the hippie guerilla theater. But the production of the song was the setting for one of the biggest exponential leaps in mass communication technology in human history. The story intersects the history of the Beatles, satellite technology, and even sci-fi master Arthur C. Clarke.

The band premiered the song, which John Lennon had written with deliberately simple and accessible lyrics to appeal to a multi-lingual audience, on the Our World television program. After the development of geostationary satellites, the British Broadcasting Corporation's visionary director Aubrey Singer was inspired to create Our World as a global program, and he recruited collaborators from all over the world to help him. The Beatles' program brought 19 of those nations together on June 25, 1967, a project involving 10,000 "technicians, producers, and interpreters" worldwide. The master control room for the Our World broadcast was at the BBC in London, and used the satellites Intelsat I ("Early Bird"), Intelsat 2-2 ("Lani Bird"), Intelsat 2-3 ("Canary Bird"), and ATS-1, a NASA satellite. The Intelsat blog points out that Early Bird could only transmit two channels at a time, reciprocally from the U.S. to Europe and vice versa.

Years earlier, before satellites even existed, Arthur C. Clarke had predicted that having at least three of them in geostationary orbit would facilitate global communication. Early Bird was the first such satellite placed in geosynchronous orbit, in April of 1965. Hughes Aircraft, the manufacturer of the satellite, had built the 76-pound satellite to test the very theory that Clarke had predicted, although it's unclear whether anyone from Hughes was aware of Clarke's prediction. In December of that year, Early Bird helped provide live television coverage of the splashdown of Gemini 6. It would later be used to aid the Apollo 11 flight in 1969 after the failure of another satellite, the Atlantic Intelsat. Early Bird provided direct contact, with only very short delays, between fax, TV, and phone links in North America and Europe.

Beyond the communications and space technology marvel that broadcast "All You Need Is Love" across the globe, the song was also an engineering marvel. The Beatles had recorded an initial track weeks earlier, then recorded an overdub of that track to create a master mixed track. It was that mixed track that provided the basis of the actual broadcast recording, performed in front of the entire world. The live overdub included a full orchestra, the Beatles themselves, and several guest singers including Eric Clapton, members of the Who and the Rolling Stones, Graham Nash, Marianne Faithfull, and more.

All of this seems almost quaint now, when producers can literally engineer entire symphonies and pop albums made by computers. But the Beatles’ embrace of global cosmopolitanism, the science fiction connection, and the band’s commitment to utilizing cutting-edge technology to deliver an iconic message of world peace, should continue to inspire us.


The Colonization of the Moon Is Already Underway

Cyberpunk lit pioneer William Gibson once said “The future is already here — it’s just not very evenly distributed.” There’s no doubting the second part—new technology will replicate the inequality of old technology. But for the next few paragraphs, let’s think about the first part. The future is already here. For example, we’re just going to, you know, colonize the moon in about ten years.

The idea of visiting and eventually living on the Moon is solidly grounded in the history of speculative fiction. As the nearest otherworldly body, its occupation seems a feasible and almost imperative scenario—and has seemed so for a long, long time. The Best Sci Fi Books blog has a post with the 17 best works of fiction about Moon settlements. The post includes Griffin's Egg, by Michael Swanwick, a short 1992 novel in which Moon settlers watch as the Earth is enveloped in global war; Jack McDevitt's Moonfall, depicting a nervous U.S. Vice President on a disaster-befallen Moon visit; and some classics like H.G. Wells' The First Men in the Moon, Jules Verne's From the Earth to the Moon, and the 1638 fantasy The Man in the Moone by Francis Godwin, which has a legitimate claim to being pre-Mary Shelley sci-fi, even though the protagonist rides a bird to get to his destination.

So one kind of feels a magic in the aspirational announcements of the European Space Agency and NASA, both of which are itching to colonize the Moon and have the political will to talk and act on that aspiration even when budgets are tight. NASA wants a "sustained presence" on the Moon by 2028. In 2015, Jan Woerner, Director General of the European Space Agency, introduced the concept of the "Moon Village"—not a Leave It to Beaver-like neighborhood on the Moon, but rather "a community created when groups join forces without first sorting out every detail, instead simply coming together with a view to sharing interests and capabilities." The idea was to bring interested parties together from around the planet, collaborating on science, robotics, and entrepreneurial ideas, to collectively and non-hierarchically design a moon-based habitat.

Although Woerner discourages thinking of a Moon Village as a physical, capturable thing, many artists and thinkers create those images anyway, and it's unclear how much thought they've given to transport and construction logistics. Never fear, though: 3D printing technology will be critical for construction of the habitats and equipment. That will require transporting the technology to do the printing, as well as the "printer cartridges" of matter from which to print, but it makes more sense than any alternative. The architectural firm Foster+Partners has outlined an entire lunar base that can be printed, presumably powered by solar energy.

The more practical visualizations include a base assembled over just under 40 launches in 10 years, and a Moon Village-inspired vision of a permanent industrial and academic settlement.

China wants in on colonization, and plans to build a scientific research station on the Moon’s south pole in about ten years.

Global space agencies, particularly NASA, are currently concentrating on landing systems and the return of astronauts, this time in a much more focused surveying role. NASA recently awarded contracts to SpaceX, Blue Origin, and Dynetics to develop lunar landing systems in an effort to return American astronauts, including the first woman, to the Moon's surface as soon as 2024, as Ars Technica reports.

Passenger transport is also a priority: SpaceX’s “Super Heavy rocket-powered Starship” will eventually be able to take 100 passengers into space at once—a historically unprecedented passenger capacity. The fact that air transport giant Boeing didn’t make the cut, even though it has historically contracted with International Space Station projects and NASA’s Commercial Crew Program, suggests that the government wants to go with new private sector leadership from more cutting-edge companies, which makes sense if you believe, as an imperative, that we are living in the future.


A Black Hole’s Gravity is Like a Spirograph

Space and time are not fixed; or, more accurately, seemingly fixed or regular motion across spacetime is actually "warped." That warping makes trajectories dynamic rather than static: a straight line is never really a straight line, and a regular orbit is never really a regular orbit. Before Albert Einstein developed the theory of general relativity, we could not explain, for example, the small anomalies in planetary orbits that should have been "regular" and unvarying according to Newtonian physics.

The antecedents of general relativity and black holes (the real testing grounds of general relativity) were laid down long before Einstein’s articulation: around 1783, to be exact, by a clergyman named John Michell, and again in 1796 by the scientist Pierre-Simon Laplace. They called these objects “dark stars” and, later, “frozen stars.” The idea was that in a star’s “last phase” of existence, under gravitational collapse, nothing would be able to escape the massive object’s gravitational pull, not even light. It wasn’t until the 20th century that John Wheeler coined the phrase “black hole,” though after nearly two centuries the term “dark star” still connoted the same meaning.

In very simple terms, Einstein’s theory of general relativity describes several ways in which orbiting objects, and any two objects exerting gravitational force on each other, move dynamically rather than statically. Orbits “precess forwards in the plane of motion,” as one astrophysicist puts it.
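The article doesn’t give the formula, but the standard general-relativistic result for this effect (the Schwarzschild precession) is the extra angle by which the orbit’s closest-approach point advances each revolution:

```latex
\Delta\phi = \frac{6\pi G M}{c^{2}\, a\,(1 - e^{2})}
```

Here $M$ is the mass of the central body, $a$ the orbit’s semi-major axis, and $e$ its eccentricity. The larger the central mass and the tighter the orbit, the faster the ellipse’s orientation rotates, which is why the effect is tiny for Mercury around the Sun but dramatic for a star skimming a supermassive black hole.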

So although it wasn’t a great surprise to scientists that a star circling a black hole would exhibit a dynamic orbit, there was still a profound elegance to the flower-like shape traced by its path. Astronomers at the Max Planck Institute for Extraterrestrial Physics, near Munich, monitored the star, called S2, for 27 years via the European Southern Observatory’s Very Large Telescope in Chile and determined that it traces a “rosette.” The pattern occurs because the gravitational pairing remains dynamic even within the regularity of the orbit it facilitates. Ashley Strickland, writing for CNN, points out that this is the first time astronomers have ever studied a star “orbiting the supermassive black hole at the center of our Milky Way galaxy.”

In an instance of mathematical art imitating gravitational life, the rosette orbit traces an oscillating pattern called a hypotrochoid, which is also the mathematical basis for the toy known as the spirograph. The spirograph had already been used as a drawing device to produce uniform curves when, in the 1960s and 1970s, it became a favorite toy and Hasbro trademarked the name. The mathematician Bruno Abakanowicz invented the spirograph between 1881 and 1900, and others had devised different versions decades earlier. A boys’ magazine even published instructions in 1913 for making a “wondergraph” (essentially a much sturdier version of a spirograph) based on the principles of hypotrochoids. That’s interesting because Einstein developed the theory of general relativity sometime between 1907 and 1915.
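A hypotrochoid is simple to generate yourself: it is the path of a pen fixed to a small circle rolling inside a larger one. A minimal sketch (the radii and pen offset below are arbitrary illustrative values, not parameters from the S2 observation):

```python
import math

def hypotrochoid(t, R=5.0, r=3.0, d=5.0):
    """Point at parameter t on a hypotrochoid: a circle of radius r
    rolls inside a fixed circle of radius R, with the pen held at
    distance d from the rolling circle's center."""
    x = (R - r) * math.cos(t) + d * math.cos((R - r) / r * t)
    y = (R - r) * math.sin(t) - d * math.sin((R - r) / r * t)
    return x, y

# Sampling many values of t traces out the rosette-like curve
# that a spirograph draws on paper.
points = [hypotrochoid(2 * math.pi * i / 1000) for i in range(1000)]
```

Feeding the sampled points to any plotting library produces the familiar flower pattern; when R/r is a ratio of small whole numbers, the curve eventually closes on itself, while other ratios keep precessing, much like the star’s orbit.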

A journalist once asked Einstein to explain the theories of relativity in simple terms. Einstein responded: “If you don’t take my words too seriously, I would say this: If we assume that all matter would disappear from the world, then, before relativity, one believed that space and time would continue existing in an empty world. But, according to the theory of relativity, if matter and its motion disappeared there would no longer be any space or time.” One wonders if he also could have said that relativity is what causes you to draw a flower pattern when you might otherwise think you are only tracing a circle.


Truly Toxic Speech: When It Comes to Pandemics, We Really Talk Too Much

This was already in the news last December, and not in the context of the global Covid-19 pandemic. A UC-Davis study had found that “the louder people talk, the more airborne particles they emit, making loudness a potential factor in spreading airborne diseases.” Other factors influence particle emission during talking, but loudness was a consistent predictor. In fact, and this is the part that may shock people the most, talking loudly emits particles in a fashion similar to sneezing or coughing. The authors of the study did mention the role of loud talking in transmitting influenza generally, which makes sense because various strains of influenza are among the most contagious and serious illnesses across populations.

Skip ahead to a White House briefing in April 2020, when, Victor Tangermann reports, a “prestigious scientific panel” told officials that Covid-19 could be spread by talking. Again, this was one piece of a range of observed transmission methods that included just normal breathing, “bioaerosols generated directly by patients’ exhalation,” and a much greater risk indoors than outdoors. For those who had wondered about the utility of wearing masks for potential transmitters and not receivers, the muffled talking that occurs behind the mask must have seemed to be a pretty benign force compared to people loudly jawing off and possibly infecting someone.

But all of this raises the concern that when we return to whatever near-normal we can manage, the workplace will no longer be protectable merely by making sure nobody comes in with a cough or fever. Talking itself can transmit the disease. Social distancing will have to continue unless technology enters the picture to give us either distance without loss of exchange, or exchange that resembles distance.

Let’s start with that artificial distancing, or exchange that resembles distance. How do you create artificial distance, or the safety of distance, while still being just a foot or so away from someone? Someone has already designed glass (plexiglass or clear plastic) dividers between seats on airplanes for a post-pandemic world. Perhaps airplanes are not the only application. Dividers could be placed between seats at a conference table, or on a panel discussion. This would substantially decrease the risk of transmission in everyday conversations.

Of course, the real alternative is a lot more teleconferencing. And this alternative doesn’t just apply to distance learning. Even if all participants are in the same building or on the same campus, a Zoom meeting can take the place of a face-to-face or larger group meeting. The good thing about this solution (besides the fact that loud-talkers can still talk loudly) is that it’s also really good for the environment. This isn’t just because of the decreased carbon footprint that comes from eliminating travel. There’s also the reduction in paper, printer cartridges, ink, and toner. And there’s the reduction of plastic and food waste in not having to feed conference attendees.

There is a particularly sad angle on this story about transmission through talking, and it bears on the evidence that the virus lingers in or travels through the air in some contexts. A group of choir singers in Mount Vernon, Washington, met to rehearse. They stayed far apart, and all reported being healthy when they attended practice. Two are now dead of Covid-19. Conferencing technology has its aesthetic limitations, as musicians trying to use it to record or rehearse will quickly learn. In a list of ten pandemic-appropriate collaborative platforms for musicians, one that stands out is JamKazam, good for both collaborative recording and music teaching. If, as theorists have long told us, technology is merely the extension of body parts, we’ll have to see how much technology can simulate natural exchange while decreasing transmission risks.