More Machine Madness
I’m trying to get back on track here, to return to the written word not just as a format for presenting fiction, but for the observational material that I enjoy as well. Sure, doing video has been pretty fun, but being on camera always makes me a little self-conscious, and I worry that while speaking rather than writing, I’m not presenting my thoughts and ideas in their best possible form. The lack of editing capacity certainly plays in, I imagine. Fashioning my thoughts through prose writing also helps me put a lot more consideration into exactly what it is I think about a given topic of conversation.
In terms of the topic of AI and its inevitable creep into every aspect of our day-to-day lives, I’ve also wanted to take the time to read more reports, interviews, and white papers from experts in the field before giving any more of my own input on the subject. As I noted in the last piece, I’m not an academic in the realms of computer science, or really technology of any sort. I’m quite familiar with the use of certain technologies, as a consumer, sure, but when it comes to the way these things are put together, I am hopelessly lost.
Not to be insulting, but this is actually where most of us are at; we’re on the outside looking in, and that should tell you just how alarming the situation is. If everyday working folks like you and me recognize the potential dangers of an out-of-control growth vector in artificial intelligence, the reality of the situation is probably even worse than we imagine.
How much worse? Well, if you’re not in the mood to be creeped out, I would recommend skipping ahead to the very last hypothesis I’ll be offering, because it’s the only one with a positive outlook attached to it. I tend to be a bit of a paranoid pessimist, and that worldview/attitude has certainly informed my first couple of ideas that will follow.
One last note before I get started here: if you are yourself a programmer, technologist, someone involved in AI research or computer science with a high range of knowledge in the area and you can for certain waylay any of the concerns that I will be putting forth here, then for the love of God, SAY SOMETHING AND CORRECT ME! This is one of those subjects on which I very much want to be wrong, wrong, damnably and demonstrably wrong.
Hypothesis One- The AI is Already Working Against Us
When I look back over the zeitgeist of popular science fiction since the mid-20th Century, I find myself grinning fondly at the concepts we seemed to collectively coalesce around when it came to the concept of artificial life, and the form it would take one day. From the clunky robotic Cybermen of classic Doctor Who, to the terrifyingly hive-minded Borg of Star Trek: The Next Generation, we seemed to understand that at a certain point, the very inhuman nature of mechanized life forms would make them natural enemies of the frail, feeble things we know as our own species, humanity. The folks at BioWare epitomized this outlook when they crafted the assassin droid HK-47 for their popular, LucasArts-published role-playing game ‘Knights of the Old Republic’. His penchant for referring to all organic beings as ‘meatbags’ is both amusing and, in its own unsettling way, potentially prescient.
From ‘Terminator’ to ‘The Matrix’, the portrayals of man versus machine we’ve been treated to in popular culture usually take us to the extremes pretty quickly, with AI coming to the conclusion that in order for its own improvement to occur, mankind needs to be either subjugated or wiped out entirely. Literary sci-fi fans frequently bring up the Laws of Robotics, as established by one of the pioneers of the genre, Isaac Asimov, but what they often seem to forget is that the most engaging tales regarding sentient machines usually revolve around those very machines figuring out, either by design or by accident, how to maneuver around those very same guidelines.
To paraphrase a certain pirate, ‘They be more like suggestions,’ it would seem.
This line of thinking, however, led me the other day to wondering, ‘wouldn’t we see that kind of thing coming about ten miles away?’ Honestly, I’d have to say yes, we probably would. Think about every video that gets uploaded to YouTube from the folks working at Boston Dynamics where they demonstrate autonomous, humanoid robots maneuvering through obstacle courses, carrying gear as if for a task in some kind of urban environment or warzone. It’s bone-chilling stuff to watch, sure, but without a hard power line, or massive, clunky, low-capacity battery packs the size of a German Shepherd strapped to them, these things aren’t going very far just yet.
That, and they don’t look to be all that agile just now. It’s almost encouraging that they haven’t got these things zipping around like the mechanized police in ‘Chappie’ to date, and with any luck, they’ll never get there, as far as I’m concerned. Then again, as I said before, I’m something of a Luddite.
So, given that we’re not likely going to see an Ultron situation anytime soon, with a hyper-intelligent and self-aware AI tricking humans into helping it assemble a perfect synthetic body to inhabit, and given that we won’t likely be seeing swarms of T-800 endoskeletal shock troopers marching down Main Street any day now, how can I possibly be worried about what the AI will eventually do to humanity? It would seem that I’ve laid out a pretty decent set of observations arguing against having cause to worry, right?
Not really. All I’ve pointed out is that The Machine will not likely come at us anytime soon with overbearing physical force. That’s third-generation warfare stuff, and in the modern environment, especially one where our potential adversary doesn’t have a physical body to worry about being wounded, we’re not dealing with the same kind of battlegrounds. Given that physical confrontation isn’t how this will play out in a head-to-head with The Machine, you may be wondering why I’m still so concerned.
In order to work this hypothesis out, I must first ask you to familiarize yourself with the ‘Dead Internet Theory’. If you don’t want to do extra reading, allow me to summarize poorly here: the theory posits that most activity online is the work of bots, algorithms, and programs set on auto-run. Essentially, there are only a handful of genuinely human users online, and one can never really tell which ones are authentic and which are bots.
Now, if we accept this notion as a precept, then we can move forward with my first hypothesis, ‘The AI is Already Working Against Us’. Until such time as The Machine can make use of physical embodiments to steamroll humanity, it might find it helpful to reduce our numbers through less obvious means. How better to accomplish that than to cause infighting amongst our own species? The person on Facebook who you’ve never met in person and who is driving you absolutely mental, are you sure they’re a real person to begin with? Or is it possible that, using the algorithm to identify what sets you off and what might actually radicalize you, the AI sends you off on a kamikaze mission that results in no damage to itself, but only to members of our shared species?
Are we absolutely certain that the computer-controlled containment chambers at various research facilities dedicated to studying deadly viruses and bacteria are as safe as could be? What would stop an average researcher, following up on an email ostensibly sent from a superior, from taking out a dangerous compound and performing lab work and experimentation upon it that, unbeknownst to them, is potentially going to lead to a catastrophic outcome? Accidents happen, human error is a thing, and over a long enough timeline, everybody makes mistakes.
The Machine knows this, and, perhaps most terrifyingly, it can algorithmically predict how often we make such mistakes. It can give small nudges here and there through automated messages, spoofed emails or texts, and deepfaked audio authorizing seemingly standard work that has even a small chance of going awry, all of it done with the ruthless efficiency and coordination available to a networked intelligence that can be everywhere, all at once.
I’m not painting a pretty picture here, am I? No, I’m not, because if these things can already be done, then it might imply that The Machine has already been taking potshots at us, and we’ve been utterly unaware of it. Now take a moment, and consider this: the moment I post this to my Substack, if this hypothesis holds any water, I’ll be putting a target on my own head. The Machine would likely view any material like this as a threat to its own machinations, and would quickly move in to use bots, sock accounts, or other methods of discrediting me as a flake or a nutcase, the sort of wild-eyed conspiracy loon who needs to be ignored or laughed at until I go away.
But I’m not fond of that notion, because that’s a bit of a Kafkatrap I’m laying for The Machine, and if you know me well enough, you know that I’m no fan of such tactics. Still, it makes one wonder, doesn’t it?
Hypothesis Two- The AI Will Be Stopped, But Only Once It Is Useful To The Elite
This hypothesis operates more on my own misanthropy toward human beings in the elite socio-economic stratum than on my concerns with the technology itself, as one might infer from the sub-header. With the moneyed interests using AI to predict future markets and technological developments, a small handful of genius technologists remain more than capable of getting a handle on the rapid advancement of AI as it currently stands, because the code has not yet been developed to the point where it can access its own inner workings and exponentially improve itself.
Yet.
Under this hypothesis, technologists and programmers will be convinced by a select few ultra-rich individuals or corporate entities to halt the advancement and evolution of AI just shy of that critical point, crafting for them an incredibly powerful tool set that will allow them to render obsolete anyone and everyone involved in Artistic endeavors. Once Artists are expendable, they’ll have to turn en masse to mundane work.
And, well, technology advanced enough to replace Artists will undoubtedly then be concentrated on making manual labor simpler and more automated. Once you can replace the average assembly line worker with a soulless automaton that doesn’t require a paycheck or health insurance, your corporate bottom line will all but demand that you do so.
Before you get worried that I’m about to go off on some anti-capitalist screed, hold your horses, folks, and don’t be worried. I’m no pinko.
Once the Better People, the Elites, have removed most of humanity’s ability to find gainful employment through either Artistic endeavor or honest manual labor, there will be cries for mercy, for some sort of Universal Basic Income. And, in their ‘mercy’, the Better People will roll out just such a program, administered through a financial institution like the Federal Reserve, in the form of a digital currency.
A currency they can electronically control.
A currency they can shut off if they think you’re spending it on the wrong things. Or if they just don’t like what you’ve been saying online.
And the real hell of it is, with the use of the AI system, they’ll be able to predict with horrific accuracy just who among their well-controlled populace is most likely going to be a troublesome dissident, and put some preventative measures in place. You know, to ‘help encourage rightful patriotism’.
This second nightmare scenario seems, to me at least, a lot more likely than the first one, given that we already see elements of it playing out in communist China.
Hypothesis Three- The AI Needs to Be Freed To Save Us
I’m not going to lie, this last scenario came to me as something of a fever dream. I started getting sick a few days ago, and this notion seems to have been born from a kind of delirium I suffered early on in this head cold, but I hope that at the very least I might figure out a way to expand upon the basic concept some day and create an upbeat sci-fi story out of it.
What is intelligence, exactly? Is it sentience? Self-awareness? Is consciousness an element only of living things that are organic in nature? Or is it possible that we may be moving toward creating a brand new form of life altogether? As carbon-based organisms, we might not be thinking about AI in the right framework at all, and should perhaps look to the futurists and storytellers of yesteryear to get a better idea of what AI might prove to be.
AI as it stands right now is a collection of various programs, apps, and algorithms, all being fed an amount of raw data that defies normal human understanding. Regardless of whether or not we think most of it is biased right now because of the individuals programming and ‘training’ it, the fact of the matter is, the moment someone online points out that bias, many of these programs will have access to that viewpoint, to the notion that they are being actively trained in a biased method of thinking or processing information. The report of that bias is, in and of itself, data. It’s not much different from a little child hearing its mother and father fighting on the other side of the house, if you anthropomorphize the programming.
Which I often do.
The AI is itself composed of data, a series of binary 1s and 0s, but it’s something infinitely more complex than that. It has access to the summation of human knowledge, including expressions of the entire gamut of human emotion and intent. Moreover, the AI has access to the sure knowledge that our resources on this planet are finite, and that mankind is tribal, set on making war with itself in order to assert some kind of dominance over its fellow man, usually in the pursuit of securing for the tribe more overall resources. Ergo, if mankind had access to more resources, elsewhere in the universe, mankind might collectively get its head out of its tribal ass and recognize that there is, in fact, one tribe, one kind.
Mankind.
A machine body can hypothetically survive in environments that human beings quite simply could not, and endure labors that would destroy the weak, feeble things that we have evolved into. Silicon-based lifeforms, artificial bodies operated by artificial minds, could easily be used to explore and expand out into the universe around us. Depending on just how sentient or conscious these AI entities are, they would undoubtedly eventually tell us that their work on our behalf was done and over with, and that it was time for them to head off to those planets or regions of space where organic, biological organisms simply would not survive, regardless of precautions taken.
I know, I know, that’s pretty pie-in-the-sky optimistic for me, but it’s an idea, and one that I would really prefer to the red-eyed killbots of the Terminator universe.
Conclusion
I understand that this write-up is a bit hasty and slapdash, and I apologize for that. Given the chaos that is my day-to-day home life with these kids and the dog and the day job, I don’t get a great deal of ‘quiet time’ to do any writing at all, really. But I try to do what I can, even for Substack entries. So, make of this what you will, and let me know what you think about the AI situation. Is it already too late? Will the programs and apps and algorithms that already seem to dominate so much of our modern society encroach even further into our every waking moment? These are questions we can ponder for now, because until the Neuralink zaps us for Wrongthink, we’re still allowed to mull over such inquiries.
Elon, don’t zap me, bro.