“There is more to life than increasing its speed.”
-Mahatma Gandhi
A Time For Speed, A Time For Caution
With the advent of AI, it’s never been more important to contemplate the idea above. Absolutely, there are circumstances when speed is vital to a successful outcome: reaching and rescuing a person who is drowning, saving someone from a burning building, administering CPR, stopping the bleeding of a terrible wound… Many life-or-death medical and survival emergencies come to mind. We might even add to the list racing to develop a vaccine, or to cure a disease—although with these two examples, our abilities to focus on what’s important, to study patterns and results, and to pay attention to details are also vital.
Such truly time-sensitive situations aside, we must be honest with ourselves, as a society, about the benefits and risks of adopting a “faster is always better” mentality. After all, many other examples we might come up with only matter because of our collective worship of speed, sometimes for its own sake, and often for profit above all. For example, a manufacturer might argue that making its products faster is vital in order to compete with others who are finding ever-faster ways to make similar products. And this, of course, has truth to it. But at what cost? Soon, AI may tempt many such manufacturers to replace workers with technology, saving both time and money. Should this occur across too many industries, might the human workforce be harmed on a large scale? And ultimately, ironically, might manufacturers find fewer human beings able to afford to buy what they so efficiently make?
The Startling Vertical Line of AI
Many argue that history has proven that new technologies eliminate some jobs, but that the workforce always shifts into the new roles created by a changing world and its ever-advancing technologies. They argue it will happen again with AI, just as it always has in the past. But will it, this time? Is AI different enough to disrupt this previously reliable pattern of advancement, adjustment, more advancement? AI can program itself. AI is not accelerating things along a gradual upward slope but at a pace best represented by a vertical line: straight up, faster than anything that ever came before.
Perhaps the human race will have to ask some difficult questions about preserving human dignity—and valuing that dignity over speed, greed, and profit.
And then there are the even more dire possibilities of AI causing a sudden catastrophe, or catastrophes. AI does not have human feelings, judgment, or values. It lacks the kind of context a human being can bring to a situation. And around the globe, “bad actors” will inevitably have access to AI systems and technology that could affect large populations. What could possibly go wrong?
Could “Slow and Steady” Save The Human Race?
Is it time for a pause? A societal agreement to step back from the seductive, addictive speed that AI promises, at least long enough to stop drooling over the amazing benefits it can bring and put some guardrails in place? Those benefits are real, of course. But wouldn’t we be wise to clear our heads, get brutally honest with ourselves, and seriously discuss the potential downsides? How might we regulate AI? Where do we draw the line on what we do with it?
Or is it too late?
The Time to Affect Our Outcome
To quote one of my inspirational podcast guests, Jeff Salzenstein, “It’s never too late. As long as there is breath in [our bodies], it’s not too late.” He wasn’t talking about AI. But he was talking about human potential, and any individual’s ability to make a change, affect an outcome, make a difference. I believe in that.
Naively or otherwise, I’d like to propose a serious moment of societal analysis of AI, in which we think hard, look deep into our entirely human hearts, and honor and protect something that AI can never replace: our very humanity, and what’s best about being human.
Emphasis on what’s best. Emphasis on being human.