The Cyber Danger Zone

The Peter Parker Principle is the name given to that immortal quote from Amazing Fantasy #15, the origin of Spider-Man: “With great power there must also come—great responsibility.” The quote has even appeared in a United States Supreme Court decision, and President Obama used it in 2010.

We may need to revive the quote and principle again, in light of some recent weirdness around cyber-warfare and fears of artificial intelligence: if we aren’t in the “Brave New World” now, I’d certainly love to see where that threshold is crossed. With Google now claiming to process information at hitherto impossible speeds via quantum computing, we have to be a little scared of offensive cyber operations, or the potential of computer autonomy, right?

We can certainly be concerned about the cyber-ops. The Trump administration has authorized a vague program with no definitions, no indication of what threats it exists to counter, and no account of what even constitutes a threat. What we do know is that the policy “eases the rules on the use of digital weapons,” a significant departure from traditional defensive cyber-ops—operations that, according to the Cato Institute’s Brandon Valeriano and Benjamin Jensen, worked to stop or deter cyberattacks (as much as such a thing is possible) without risking escalation. The authors call the previous approach “low-level counter-responses” that do not increase the severity of inflicted damage.

It’s fascinating, in a way, that previous administrations had the presence of mind to limit their responses, perhaps because they knew that once you escalate, the escalation will come right back at you. The authors analyzed several operations, classified them into escalation and non-escalation scenarios, and concluded that “active defense,” rather than offense, was the most effective and escalation-avoidant framework.

Second, we have what we could perhaps call “Elon’s Paradox”—that in the face of alleged threats to human autonomy from artificial intelligence, the solution may be to preemptively merge humans with AI technology, cybernetically. Musk isn’t alone in his criticism and fears of AI; the late Stephen Hawking and others long sounded the alarm, and Vladimir Putin recently speculated that whoever leads in AI will end up ruling the world. Musk is afraid it will cause World War Three.

But Musk’s solution—he wants to increase cybernetic connections between humans and machines in hopes that a merger will be more coequal than AI just outright taking over—seems a little weird. Granted, the technology of projects like Neuralink has tremendous potential to help people heal from brain damage or degeneration, and it’s a fascinating question whether systems can be developed that retain human autonomy while utilizing the potential of AI.

But it’s not clear how it will prevent the emergence of what Musk calls “godlike superintelligence,” and besides, even leaving that kind of control up to humans is a mixed bag. After all, Google had been supplying technology to the military for drone strikes, promised to stop, and then hedged on its promise. With great power—hopefully—comes great responsibility.


The Weirdest of Weird Tech

Tentacle tech

What it is: Researchers have managed to replicate octopus flesh, developing “a structure that senses, computes and responds without any centralized processing—creating a device that is not quite a robot and not quite a computer, but has characteristics of both.”

Why it’s weird and awesome: Its developers call it “soft tactile logic,” and it can “make decisions at the material level” through input and processing on site, rather than through a centralized logic system somewhere else. And you might remember the much-discussed speculation last year that octopus DNA might come from aliens, which isn’t the only thing that makes the octopus one of the most intriguing creatures on earth.

But seriously: biosynthesis is an application of “soft” technology using “neuromuscular tissue that triggers when stimulated by light,” which, if it becomes complex enough, is practically indistinguishable from autonomous biobots. This tech goes back to at least 2014, when professors Taher Saif and Rashid Bashir of the University of Illinois developed a bio-mechanical, sperm-like thingy. It could swim. Sure, that autonomy could be a little creepy, and it’s the stuff that science-fiction disaster scenarios and international regulatory and ethics discussions are made of. But it’s also awesome! Replacement of cells! Cures for heart disease, radical improvements in prosthetic technology, and more.

They tested Loch Ness for DNA

What it was: A New Zealand-led team of geneticists conducted a sweeping environmental DNA survey of the greater Loch Ness area—not just the lake, but also the surrounding ecosystems. They found no sign of giant reptile DNA, aquatic dino-DNA, or any other mysterious monster genetics. They did find signs of all kinds of creatures—fish (obviously), deer, pigs, bacteria, human tourists, but no Nessie. We’ve known for a while that those famous photos of Nessie were faked. This is another nail in the proverbial coffin.

Why it’s important: The Monster is iconic across popular culture and pseudoscience alike. But it’s also fun, and historically necessary, to bust myths. More importantly, DNA testing still feels like a revolutionary breakthrough, solving real crimes even as it debunks legends.

The interrupting robot you’ve always wanted

What it is: Do you hate it when other people finish your sentences? What if robots did it? Called “BERT,” for Bidirectional Encoder Representations from Transformers, the system uses “natural language processing.” That doesn’t seem too much of a stretch from the auto-complete function in texting. But it also does “sentiment analysis,” similar to the way businesses and political campaigns draw on masses of data to qualitatively analyze subjective information.
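If you want a feel for how unremarkable the building blocks are, here’s a minimal sketch using the open-source Hugging Face transformers library. The model names and example sentences are my own illustrative choices, not anything from the system described above; BERT-style models are trained to fill in masked words, which is the kernel of “finishing your sentence,” and the same toolkit does off-the-shelf sentiment scoring.

```python
# A minimal sketch, assuming the Hugging Face `transformers` package.
# Illustrative only -- not the interrupting-robot system itself.
from transformers import pipeline

# BERT's party trick: predicting a masked word from both directions.
fill = pipeline("fill-mask", model="bert-base-uncased")
for guess in fill("I was late because my car [MASK] down.")[:3]:
    print(guess["token_str"], round(guess["score"], 3))  # e.g. "broke"

# Sentiment analysis: scoring the subjective tone of a sentence.
sentiment = pipeline("sentiment-analysis")
print(sentiment("I can't believe my train was cancelled again."))
```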

Why it’s inevitable no matter how we feel about it: Because this kind of AI is coming whether we want it or not. Daniel Shapiro, who founded an AI firm called Lemay.ai, agrees with me on this, and says that “AI does some things well and some things poorly, but on balance, the benefits exceed the costs of having an algorithm making decisions.” As to whether it’s a job killer, Shapiro says it’s “no more than the humble spreadsheet was.”

Slipping into a new you

What it is: A postgraduate fellow at Central Saint Martins in London and a microbiologist at Ghent University in Belgium have developed “Skin II,” a garment that they say will “improve body odour, encourage cell renewal and boost the immune system.” It also doesn’t need to be washed as often because, you know, reduced odor. One of the designers called Skin II “wellness clothing,” which, all jokes about B.O. aside, sounds exciting.

Why it’s basically necessary: Because odor management is an important part of managing public spaces. People complain about odors on trains from smokers, strong perfume, and, yes, body odor. Any frequently used space (and those are the most valuable spaces, really) is going to smell bad. Why not do our part to make those things easier to manage publicly? Also, L.A. Metro is experimenting with deodorizers on trains, so that’s an interesting supplemental piece of tech news.


Why We Sometimes Distrust Technology

Residents of cities like Detroit are getting fed up with police surveillance technology, and they don’t much care whether it decreases crime. Several digital rights advocacy groups are also weighing in, calling for bans on government use of facial recognition technology. There is a rejection of consequentialist or utilitarian arguments going on here—that is to say, if you argue “but crime is decreasing because of surveillance technology,” it doesn’t much matter whether that claim is right or wrong: those opposed to the technology object on a deeper moral level.

But there are also powerful “policy objective” arguments against such technology, two of which have been cited by advocates of the California bill to ban police use of the tech: The first concern is that “facial recognition isn’t reliable enough to use without flagging high rates of innocent people for questioning or arrest.” The second is that “adding facial recognition to body cameras creates a surveillance tool out of a technology that was supposed to create more accountability and trust in police departments.”

Both of these arguments seem inarguably true to me. You can improve the technology, but it will always produce some false positives, and terrible things can happen to people’s lives as a result. And this widespread spying on people, far beyond even targeted surveillance in particular investigations, does not build trust between police and communities.
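The false-positive problem is worse than intuition suggests, because the people being scanned are overwhelmingly innocent. A quick back-of-the-envelope calculation (all numbers hypothetical) shows how even an impressively accurate system mostly flags the wrong people:

```python
# Base-rate arithmetic with hypothetical numbers: even a 99%-accurate
# face-recognition system mostly flags innocent people.
population = 1_000_000      # faces scanned by the system
suspects = 100              # actual wanted persons in that crowd
true_positive_rate = 0.99   # it catches 99% of real matches
false_positive_rate = 0.01  # and wrongly flags 1% of everyone else

true_hits = suspects * true_positive_rate                   # ~99 people
false_hits = (population - suspects) * false_positive_rate  # ~10,000 people

share_innocent = false_hits / (true_hits + false_hits)
print(f"{share_innocent:.1%} of flagged people are innocent")  # ~99.0%
```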

But we should also be careful to distinguish between what we’re asking for (in this case, a “ban” on the use of this technology) and what we’re likely to get even if California and other states pass such laws. We’ll have to check police procedures (and in the case of the police, this will mean community review) to ensure there’s no surreptitious, illegal use of the tech, and that police don’t simply procure its results from other entities.

That seems obvious, but I’m not sure everyone gets it. “Imagine if we could go back in time and prevent governments around the world from ever building nuclear or biological weapons. That’s the moment in history we’re in right now with facial recognition,” said Evan Greer, deputy director of Fight for the Future, in a statement. But surely a ban on biological and nuclear weapons can’t prevent their production altogether. Likewise, you can (and probably should) regulate, restrict, monitor, and ban police procedures and uses of technology, but people and entities will still develop surveillance technology. Governments and bad-acting private entities will use it if they can get away with it, and “getting away with it” takes interesting forms in a world of high corruption.

Of course, that’s not an argument against banning police use of the tech, but instead an argument for doing more, for at least also improving the conversation about technology and trust.

That conversation goes both ways, in that it will sometimes affirm a new tech even though it’s imperfect, and reject another for perhaps doing its job too well. As saving lives goes, autonomous vehicles are probably more helpful than surveillance technology. They will certainly save millions of lives worldwide, although we can debate how many. Nevertheless, the media, and not just the media, focus on the crashes that do occur. It’s easy, and correct, to respond as philosophy professor and essayist Ryan Muldoon does in The Conversation: “autonomous cars will have been a wild technology success even if they are in millions of crashes every year, so long as they improve on the 6.5 million crashes and 1.9 million people who were seriously injured in a car crash in 2017.”

But that’s not always how people see it; there’s an intersubjective element to risk assessment, and understanding how people’s minds work is part of understanding how to apply data. That’s why 71 percent of Americans still don’t “trust” autonomous vehicles even in 2019. Learning more about risk is important, but taking democratic, deliberative control of risk management—including against an overly enthusiastic surveillance state—would be even better.


Tiny Air Vehicles Roundup

Back when we were more optimistic about the effects of technology on society and everyday life, the joke was that if we don’t get jet packs, all that technology isn’t worth the effort. The joke inspired the name of a great Scottish indie band formed in 2003, and plenty of social media chatter about personal air vehicles.

And although we haven’t seen the pace of development and mainstreaming anticipated in older speculative visions of 21st-century life, personal air vehicles (PAVs) are the subject of considerable R&D in both the private sector and the public one (NASA has a Personal Air Vehicle Sector Project under the umbrella of its Aeronautics Vehicle Systems Program).

Here are some updates on the big three of small flying machines: hoverboards, flying “cars,” and, yes, jetpacks.

The Hoverboards

At the beginning of August, “[a]fter a failed attempt at the end of July, French inventor Franky Zapata successfully crossed the English Channel . . . on his Flyboard Air, a jet-powered hoverboard.” Apparently the problem the first time was that the waves were too high en route to a refueling platform, highlighting the need for small vehicles to carry adequate power supplies. Technically, a hoverboard may not even qualify as a “flying” vehicle, because you aren’t really up that high, but both the practical applications and the presumed fun of traveling on one warrant its inclusion on this list. Here’s the footage, which might make you cry; Zapata’s supporters all do when he finally crosses the Channel.

The Flying “Cars”

These are serious business. No Chitty Chitty Bang Bang here. Even Boeing is in the game, and they’ve partnered with a boutique tech developer called, appropriately enough, Kitty Hawk, to develop the Cora, a two-seat semi-autonomous flying taxi (we are salivating).

Kitty Hawk also has the Flyer, and the video on this page shows that spider-like vehicle quietly flying over various bodies of water and a shadowy desert and hill landscape while the designer talks about his dream of building flying machines. Kind of inspiring.

The (we were indeed promised) jet packs

As you can imagine, there’s a never-ending stream of prototypes for the jet pack. But the most interesting recent project is British entrepreneur Richard Browning’s “real-life Iron Man suit,” essentially a set of jet engines attached to the pilot’s arms and legs. Browning’s start-up, Gravity, has “filed patents for the human propulsion technology that could re-imagine manned flight,” including the jet-engine suit, called Daedalus. A beefier test flight is expected “in the next 12 months.”

Although there are a few different videos of Browning and Daedalus in flight, this simple debut footage might be the most elegant—the guy just smoothly and symmetrically floats around and lands where he took off—all with confidence that gets you thinking about the many applications of such flight.


Big Data and the Final Frontier

It may not have been as entertaining as a Star Trek fan convention, but last February in Munich, the European Space Agency and a handful of other EU organizations hosted the Big Data from Space conference, where hundreds of papers were read on the methods and applications of big space data. The conference brought together “researchers, engineers, developers, and users in the area of Big Data from Space.”

Because space systems generate such massive amounts of information, space practitioners use big data analysis for “fast analysis and visualization of data” and for developing fail-safe systems in space—and on earth.

There is no space tech development without big data development and, as we know, space tech development is one of the starkest examples of specialized technology carrying indirect benefits to other parts of society. In some cases, the benefits are more direct than indirect. Newly designed satellites will improve our ability to measure methane gas in the atmosphere and down on earth. The Environmental Defense Fund recently announced a competition awarding $1.5 million to either Ball Aerospace or SSL to design the satellite and, upon winning the competition, build it in two years or less. Meanwhile, last September, outgoing California governor Jerry Brown “announced at the Global Climate Action Summit that California would be placing its own satellite into orbit to measure greenhouse gases. That satellite will work in tandem with the EDF equipment.”

While saving the planet is certainly laudable, big space data has a sexier application: identifying the possibilities of extraterrestrial life. This is the “most important question” for Mars exploration, according to Anita Kirkovska of Space Decentral. Systems like Elasticsearch crunch the data Curiosity generates in huge amounts, checking surface temperatures, atmospheric conditions, and a multitude of other readings, helping facilitate discoveries like Curiosity’s identification of organic molecules and methane in the Martian air in June, and laying groundwork for next year’s ExoMars mission. The spatially huge Square Kilometre Array radio telescope project “will generate up to 700 terabytes of data per second,” around the “amount of data transmitted through the internet every two days.”
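To make that concrete, here’s a sketch of the kind of query a mission team might run against an Elasticsearch index of rover telemetry. The index name and field names are hypothetical inventions; only the client API (elasticsearch-py) is real.

```python
# A sketch, assuming a hypothetical "mars-telemetry" index of rover
# readings. The field names are invented for illustration.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Find recent sols with warmer surface readings and elevated methane.
resp = es.search(
    index="mars-telemetry",
    query={
        "bool": {
            "filter": [
                {"range": {"surface_temp_c": {"gte": -20}}},
                {"range": {"methane_ppbv": {"gte": 5}}},
            ]
        }
    },
    sort=[{"sol": "desc"}],
    size=10,
)
for hit in resp["hits"]["hits"]:
    src = hit["_source"]
    print(src["sol"], src["surface_temp_c"], src["methane_ppbv"])
```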

These are the voyages of big data analysis in space. Its continuing mission: to make sense of the effectively infinite fields of data generated beyond the earth’s mesosphere.


No Debate Championship for Artificial Intelligence

It wasn’t the “roast” that Dan Robitzski of Futurism says it was, but in February, Harish Natarajan, a former champion college debater, won the audience vote over Project Debater, an IBM program designed to respond to its opponent’s arguments in a debate and to crystallize and summarize the issues in its closing statement. The debate was about education subsidies: Project Debater argued for them, Natarajan against.

Natarajan likely “won” the debate because he was better able to contextualize, drive points home rhetorically, and draw comparisons in his final speeches—the real ethos-meets-logos factor of good debating, as much an art as a science. Meta-analysis and rhetoric are both inexact—in form as well as content—so an experienced debater does more than just generate and counter information.

The video is worth watching: Project Debater deployed eloquent quotations, provided evidentiary support, answered arguments on point, and even uttered a phrase familiar to debaters: “the benefits outweigh the disadvantages.” But that’s a stock phrase, not nuanced comparison. Nuanced comparisons (including things like strategic concessions: admitting the other side is right about something in order to win a larger point) require abstract and metaphorical thinking; not much, but enough.

As Mindy Weisberger writes: “In a neural network, deep learning enables AI to teach itself how to identify disease, win a strategy game against the best human player in the world, or write a pop song. But to accomplish these feats, any neural network still relies on a human programmer setting the tasks and selecting the data for it to learn from. Consciousness for AI would mean that neural networks could make those initial choices themselves.” And subjective experience is part of what’s required to make those choices.

There are signs that machine learning has the capacity to do this, but little of it was on display during the debate. Stanislas Dehaene and colleagues list “global availability” (the relationship between cognition and the object of cognition) and “self-monitoring” (obtaining and processing information about oneself) as components of consciousness in thinking beings. Both attributes would help a debater-AI make meaningful, contextually appropriate comparisons between arguments, as well as discern strategic concessions (an unreflective computer probably “thinks” it’s winning every point in a debate).

For a few more years, at least, humans are safe in debates and a few other spheres of public life.


‘Weird Tech’ Roundup

“Anything that was in the world when you were born,” Douglas Adams wrote, “is normal and natural. Anything invented between when you were 15 and 35 is new and revolutionary and exciting, and you’ll probably get a career in it. Anything invented after you’re 35 is against the natural order of things.” We don’t know what’s against the natural order of things, but we do have a term for historically anomalous tech: “out-of-place artifacts,” old objects that “seem to show a level of technological advancement incongruous with the times in which they were made.” This includes things like caves in China that contain what appear to be 150,000-year-old water pipes; a hammer found in an eons-old rock formation; and what appears to be a spark plug encased in a geode, dated at half a million years old.

But these are historical, or prehistorical, anomalies popular with students of the abnormal. Out-of-place technology might also refer to contemporary items that seem to serve no useful purpose, or that serve such a miraculously useful purpose that we wonder why the tech sector waited so long to create them. Every year, a few publications run with the “weirdest tech” of the previous year, and the results are stimulating. For the list of 2017’s strangest and most exotic tech, Glenn McDonald of InfoWorld included robots that weighed over 300 pounds and were capable of doing perfect backflips (no useful purpose) and miniature nuclear reactors capable of powering cities.

Sometimes the weirdness comes from anomalous performance or phenomena, almost always unintentional, that raise concerns about how an item is used or received. So in Stuart Turton’s “strangest ever tech stories,” we find Dell laptops (the Latitude 360, to be precise) that smelled like pee, Syrian hackers overwhelming the BBC Weather Twitter feed to make fat-people jokes, and the allegation that Google’s Street View car killed a donkey.

Other items serve no useful purpose because they are incredibly expensive, available only to the top 0.1 percent, and steeped in decadence: a remote-controlled pink Leg Air Massager women can “wear” under a desk, or a robotic dog that will roll over, beg to be patted on the head, and do other tricks, all for the amazing price of over $2,800.

These excesses of technology might tell us why noted Luddite Edward Abbey once wrote: “High technology has done us one great service: It has retaught us the delight of performing simple and primordial tasks—chopping wood, building a fire, drawing water from a spring.” So in that sense, maybe the price is worth it.


Machine Learning Roundup

Machine learning is the branch of artificial intelligence (AI) devoted to “teaching” computers to perform tasks without explicit instructions, relying instead on inferences drawn from absorbed and processed patterns. There’s been some pretty amazing analysis in the world of machine learning just in the last month. Citing Crunchbase, Louis Columbus at Forbes puts the number of startups relying on machine learning “for their main and ancillary applications, products, and services” at a rather stunning 8,705—an increase of almost three-fourths over 2017.
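That definition (learning from examples rather than explicit rules) fits in a dozen lines of code. Here’s a minimal sketch using scikit-learn; the toy “spam filter” and its data are my own, purely for illustration:

```python
# A minimal sketch of "learning from patterns, not instructions,"
# assuming scikit-learn. The toy data is invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "win a free prize now", "claim your free money",
    "lunch at noon tomorrow?", "meeting notes attached",
]
labels = ["spam", "spam", "ham", "ham"]

# No hand-written rules: the model infers word patterns from examples.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(messages, labels)

print(model.predict(["free prize money now"]))    # -> ['spam']
print(model.predict(["see you at the meeting"]))  # -> ['ham']
```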

There has been talk of AI as a tool to fight climate change, which is certainly promising, but not without its limits. For AI to do this, it needs programs that can learn by example rather than always relying on explicit instruction—important because climate change itself is a matter of patterns. Machine learning is “not a silver bullet” in this regard, according to a report by scholars at the University of Pennsylvania, along with the cofounder of Google Brain, the founder and CEO of DeepMind, the managing director of Microsoft Research, and a recent winner of the Turing Award. “Ultimately,” we read in Technology Review’s distillation of the report, “policy will be the main driver for effective large-scale climate action,” and policy also means politics. Nevertheless, machine learning and AI can help us predict how much electricity we’ll need for various endeavors, discover new materials, optimize the hauling of freight, aid in the transition to electric vehicles, improve deforestation tracking, make agriculture more efficient, and much more. The uncertainties, it sounds like, are not reason enough to give up on the promise.

The ultimate goal of machine learning might be characterized as “meta-machine learning,” which is in full swing at Google, where researchers are engaged in “reinforcement learning”: quite literally rewarding AI robots for behavior that works, including behavior learned from older data.
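The textbook core of reinforcement learning is small enough to show whole. Below is a toy Q-learning sketch (my own illustration, vastly simpler than anything Google runs): an agent in a five-state corridor is rewarded only for reaching the goal, and from that sparse signal it learns to always move right.

```python
# A toy Q-learning sketch: reward shapes behavior. Illustrative only.
import random

N_STATES, GOAL = 5, 4                      # a 1-D corridor: states 0..4
q = [[0.0, 0.0] for _ in range(N_STATES)]  # q[state][action]; 0=left, 1=right
alpha, gamma, epsilon = 0.5, 0.9, 0.1      # learning rate, discount, exploration

def pick(s):
    # Explore occasionally; otherwise take the best-known action,
    # breaking ties randomly so the agent doesn't get stuck.
    if random.random() < epsilon or q[s][0] == q[s][1]:
        return random.randrange(2)
    return 0 if q[s][0] > q[s][1] else 1

for _ in range(200):                       # 200 training episodes
    s = 0
    while s != GOAL:
        a = pick(s)
        s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
        reward = 1.0 if s2 == GOAL else 0.0
        # Nudge q toward: reward + discounted value of the best next move.
        q[s][a] += alpha * (reward + gamma * max(q[s2]) - q[s][a])
        s = s2

print(["right" if q[s][1] > q[s][0] else "left" for s in range(GOAL)])
# After training: ['right', 'right', 'right', 'right']
```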

But authors have also been writing about AI/ML’s limitations. Microbiologist Nick Loman warns that machine learning tech is always going to be “garbage in, garbage out,” no matter how sophisticated the algorithms get. After all, he says, as with statistical models, there’s never a failsafe mechanism for telling you “you’ve done the wrong thing.” This is in line with a piece by Ricardo da Rocha, in which he likens machine learning models to “children. Imagine that you want to teach a child to [distinguish] dogs and cats. You will present images of dogs and cats and the child will learn based on the characteristics of them. [The] more images you show, [the] better the child will distinguish. After hundreds of images, the child will start to distinguish dogs and cats with an accuracy sufficient to do it without any help. But if you present an image of a chicken, the child will not know what the animal is, because it only knows how to distinguish dogs and cats. Also, if you only showed images of German Shepherd dogs and then you present another kind of dog breed, it will be difficult for the child to actually know if it is a dog or not.”
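Da Rocha’s analogy translates directly into code. In this sketch (invented feature values standing in for image features), a model trained only on cats and dogs has no way to say “chicken”: its probabilities are forced to sum to one over the two classes it knows.

```python
# Da Rocha's closed-world problem, sketched with made-up features
# (think weight and ear length) instead of real images.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
cats = rng.normal([4, 6], 0.5, (20, 2))    # small, long-eared-ish
dogs = rng.normal([20, 10], 2.0, (20, 2))  # bigger all around
X = np.vstack([cats, dogs])
y = ["cat"] * 20 + ["dog"] * 20

model = LogisticRegression().fit(X, y)

chicken = [[2.5, 1.0]]  # nothing like anything in the training set
print(model.predict(chicken))        # forced to answer 'cat' or 'dog'
print(model.predict_proba(chicken))  # and the two probabilities sum to 1
```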

You may also enjoy watching this astoundingly good 20-minute primer on machine learning.


Has Facebook Gone Flat?

Last year, the average consumer spent 38 minutes per day on Facebook, a number that will hold steady this year before falling to 37 minutes next year. Some of this is attributable to the loss of younger adult users, but fairness dictates that we recognize Facebook invited this shift by emphasizing “time well spent” in place of clickbait. It’s hardly fair to fault the platform for losing a couple of minutes over a couple of years when the whole point of its shift is to emphasize (what it considers to be) quality over quantity.

Mark Zuckerberg himself seems determined to imitate the persona that partially imitates him on the new season of Black Mirror: Topher Grace’s character Billy Bauer, whose disillusionment with his own creation, the Smithereen platform, is exacerbated by the psychotic break of one of its users. Zuckerberg recently wrote an essay of over 3,000 words explaining his plan to change Facebook “from a boisterous global town square into an intimate living room,” as the Washington Post put it. The new platform will emphasize private and small-group interactions, removing the context and incentives for mass manipulation and random bullying. Like his Black Mirror counterpart, Zuckerberg seems concerned that his once-progressive idea has become regressive and dangerous. He seems willing to sacrifice the bottom line to correct course.

But to be fair, the bottom line isn’t looking great either. Facebook is losing teens at a rate not explainable by its own format changes. It still has 2.3 billion users (so it would be crazy to talk about the company coming anywhere close to tanking), but it’s feasible to envision a scenario in which the platform is unseated from its current ruling status.

But whether the motive is bottom-line panic or a crisis of conscience, the moves are sparked by a perception of “continued breaches of user trust,” and the fact remains that fewer people are on the platform, and for less time. In many ways, Facebook is like the once-beloved celebrity who has worn out their welcome. Prompted by the suicide of 14-year-old Molly Russell in 2017, British health secretary Matt Hancock recently “warned social media firms that they will face legislation if they don’t do a better job of policing the posts made on their platforms.”

Yayit Thakker, in Data Driven Investor, attributes this shift by Facebook to the same generational shifts responsible for a new egalitarian political consciousness. “As a generation, we have used our creativity to build and support infectious ideas that have resulted in some of the greatest organizations never seen before, like Google and Facebook,” but “this new kind of power can also be abused — usually without even our realizing it.” Young people seem not to mind tearing something down and rebuilding it if it isn’t working out. Zuckerberg, who isn’t so young anymore, really, seems to want to follow their example. He could do worse.


Doing Health Care Automation the Right Way

Health technology is an ancient concept. There were prosthetics in ancient Egypt a thousand years before the birth of Jesus, stethoscopes and x-rays emerged in the 19th century, and in the mid-20th century came the transistors that would find their way into implants and computers (which, like most of what we’re discussing in this post, facilitate data-sharing).

Now, the genie is out of the bottle on automated health care systems, from using AI for diagnoses to robots for surgery. The overriding importance of data management in that evolution is undeniable. Healthcare professionals have unprecedented amounts of data “at their fingertips,” but fingertips alone can never effectively manage such data. The promise of automated, or even AI-based management of that data is appealing because it helps those in the profession do what they have set out to do—provide the best possible care to patients.

The challenge, however, is that knowledge is power, and the optimal distribution of knowledge is not something that just happens by itself. Automation can exacerbate the maldistribution of information: “automation proposals involve solutions that focus on highly structured data,” organizing that data takes human resources, and machine-to-machine interfacing involves “complex clinical data flows” that need reliable application programming interfaces (APIs). The very complexity of those processes makes systems vulnerable to information blocking—interfering with legitimate access to medical information.

Enter the 2016 Cures Act (formally the 21st Century Cures Act, one division of which is titled Increasing Choice, Access, and Quality in Health Care for Americans), which does many things, including making information blocking punishable by fines of up to $1 million per violation.

The goal here is the facilitation of informational communication: “The Cures Act looked to facilitate communication between the diverse patchworks of healthcare providers and between providers and their patients” by requiring “the electronic players in this space to provide open APIs that can be used ‘without special effort on the part of the user.'”
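What does an open API “without special effort” look like in practice? Most certified health-IT APIs follow the FHIR standard, so fetching a patient record is a single authenticated HTTP request. A minimal sketch follows; the base URL, patient ID, and token are hypothetical placeholders.

```python
# A sketch of a FHIR-style API call. The endpoint and IDs are
# hypothetical; only the FHIR resource conventions are standard.
import requests

BASE = "https://ehr.example.org/fhir"  # hypothetical FHIR server

resp = requests.get(
    f"{BASE}/Patient/12345",
    headers={
        "Accept": "application/fhir+json",
        # A real request would carry an OAuth token (e.g., SMART on FHIR):
        "Authorization": "Bearer <token>",
    },
)
resp.raise_for_status()
patient = resp.json()
print(patient.get("name"), patient.get("birthDate"))
```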

It is the proliferation of data that makes automation valuable in so many facets of care. The development of low-code frameworks that let healthcare workers build their own applications is another part of this process. There are low-code platforms for databases, business processes, and web applications, and low-code “can also work alongside emerging technologies like robotic process automation and artificial intelligence to streamline drug development processes and leverage data to inform life-saving decision making.”

The results of this interactivity are not just glamorous or exceptional lifesaving methodologies. At HealthTech, Josh Gluck writes that AI is automating basic administrative and other tasks that ultimately ought to be the easiest and most automatic parts of the profession.

It’s fascinating to see this interactivity of tech developments, legal changes, and new approaches to data-sharing, which is such a big part of health technology. We’ve come a long way since ancient Egyptian prosthetics.