New Minds, New Fears, New Hopes

Jonathan Logan

Science & Environment Editor


The 21st century has, thus far, been called the century of Artificial Intelligence (A.I.). Pop culture pushes dystopian visions of an automated future or romanticizes the idea of consciousness in androids. Academics adorn the discussion of A.I. with philosophical questions that distract from conversations on ethics and application. That leaves the rest of us to speculate or argue about the nature of human thought. Undercurrents of fear flow through these discussions: Will I be replaced by a thing coded in 1010 instead of ATGC? Can I count on privacy anymore? Is the singularity happening in 2029 or 2049?

As with most things, the truth probably lies somewhere in the middle. Between fear and fascination, we can perhaps glimpse the path down which human-machine augmentation will lead us. A.I. already suffuses much of the technology we use every day. In this special section, Artificial Intelligence will be decoded and explored as it pertains to every aspect of our lives.

Our world is rife with examples of A.I. being used in misguided, immoral ways: data-driven policing, actuarial models determining creditworthiness, and the algorithmic grading of ACT/SAT essays. These cases and more are explored in depth in Cathy O’Neil’s “Weapons of Math Destruction.” O’Neil, who holds a doctorate in mathematics and worked as a quant for a prominent hedge fund, has been sounding the alarm on algorithmic hegemony since 2016. As A.I. becomes increasingly pervasive, she argues for limits on its use: not an all-or-nothing stance toward machine intelligence, but one of guarded optimism.

Pop culture leans toward the balanced caution O’Neil preaches, but takes it to extremes. Fear and vice rule the narratives it propagates, but perhaps these are necessary: warnings drawn from the future O’Neil sees in amoral practices like data-driven policing. What we must guard against in these dystopian or post-apocalyptic phantasms is not their narratives but their claim to realism; in the arts, realism avoids speculation that does not match current scientific or factual trends. Ayanna Howard, a roboticist and the first female dean of the College of Engineering at The Ohio State University, stated frankly in 2019 that Artificial Intelligence is far from being able to take over the world. Works like HBO’s “Westworld” or Alex Garland’s film “Ex Machina” would have us believe we are a mere decade from Rehoboam charting a course for the entire human race. The fear these stories produce has a place in discussions of A.I., but it should not be mistaken for the most likely path forward in human-machine augmentation.

Contrary to the fear, A.I. has infused many fields of science, and even the arts, with new hope for what might be possible. In medicine, A.I. is actively cutting into the $2-6 billion price tag of drug development. According to an article published in Nature, most drugs fail between “phase one trials and regulatory approval.” Many pharmaceutical companies have partnered with software companies to develop A.I. that shortens the time-consuming trial-and-error processes used to identify cancer mechanisms or potential therapies. Instead of changing an independent variable to see how the dependent one responds, scientists with access to large datasets can “feed” historical or synthetic patient data to an A.I. like IBM’s Watson and predict what causes cancer metabolism to overwrite normal cellular metabolism.

A fundamental concept underpins A.I., one that gets tossed around so often that few people really understand it: the algorithm. Many people confuse computer code with algorithms. An algorithm is the logical, abstract process one goes through when solving a problem. You use algorithms daily, yet you do not need anyone to spell them out on paper: before walking to Lowry, you do not sit down and trace an exact path on a campus map. Computer code, whether the most basic binary (1s and 0s) or a programming language (Java, Python, etc.), is merely how a computer interprets and executes the abstract process of the algorithm. The algorithm is the idea of the route; code is the explicit map of exactly how you would walk to Lowry. We differ from A.I. in that we do not have to feed instructions to our brains before doing something; we just do it.
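The distinction can be made concrete with a small sketch (the example and the choice of Python are ours, for illustration only). The algorithm is the abstract rule you already apply without thinking: “keep track of the biggest number seen so far.” The code is that same rule spelled out, step by step, so a machine can execute it:

```python
def largest(numbers):
    """Find the largest number in a list.

    The algorithm is the abstract idea: remember the biggest value
    seen so far and compare each new value against it. The lines
    below are the "exact path on the map" a computer requires.
    """
    biggest = numbers[0]          # start with the first number
    for n in numbers[1:]:         # look at every remaining number
        if n > biggest:           # is this one bigger than our record?
            biggest = n           # if so, it becomes the new record
    return biggest

print(largest([3, 41, 12, 9, 74, 15]))  # prints 74
```

A person scanning that list simply “sees” that 74 is the largest; the computer must be walked through every comparison.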

The similarities between the human mind and an artificial mind are striking. We are right to fear what we cannot envision, but we are wrong to let that fear override curiosity. Equally, it would be unwise to let ignorance and blind optimism justify, in hindsight, a world ruled by weapons of math destruction. Perhaps the deeply human will guide us in a world shared with minds “half as complicated, but twice as elegant” (“Blade Runner 2049”).