“Dang.” That’s about all I could muster as I watched a two-hour Frontline special on the “promise and perils of artificial intelligence (AI).”
Honestly, it was a real downer of a documentary. I don’t recommend it whatsoever if you aim to feel good about things. If you need a hug, then this documentary delivers a mean yank on a too-tight dog collar instead. It was released on Halloween and maybe that’s why it goes overboard on the creepy background music. But if you want to give serious thought to the magnitude of upcoming technological changes, and risk a few nightmares, it’s a worthwhile watch.
The documentary opens with a look at Google's "AlphaGo" defeat of Lee Sedol, an 18-time world champion in the game of "Go," a board game even more complex than chess. There are apparently (and I find this hard to believe) more possible board positions in a game of Go than atoms in the observable universe. That's what they said. Um, OK. I don't know how to respond to that. But yes, it looks seriously complex.
Millions in Asia watched as Google's self-learning computer beat Sedol in four of five games in 2016. Google fed many Go games into the computer, which took in all of that data and learned from it, eventually creating its own strategies outside of what humans typically comprehend. One particular move startled all observers with its strangeness, and there's footage of Sedol staring perplexed at the board on his way to losing the match.
A "self-learning computer" — doesn't that phrase just make you pause and think of the possibilities? Of course, there are some truly positive real-world applications for self-learning computers. For instance, the documentary showed how breast cancer imagery could be fed to a self-learning computer and analyzed. The computer could examine women's mammograms over a period of time and study how the disease develops. It could identify similarities between women's cases that might not be visible to the human eye, simply because it can process far more data than a person.
The program showed how a self-learning computer mastered a video game after being given no instruction other than to win. The computer devised a strategy of digging a tunnel through the blocks and eliminating them from the upper portion of the screen, something not readily apparent to any novice. Yet the computer adopted the strategy quickly once it taught itself how to win.
Scientists see this technology as a potential way to address big issues, like climate change or curing cancer. It may find solutions that humans don’t, just by thinking “in-the-(metal)box” — I’m picturing a robot head with its robot fingers to its chin in a pondering pose.
But the yikes factor is through the roof on all this. That seems particularly apparent when you look at China, which treated the Google-Sedol "Go" match as a kind of Sputnik moment. China declared that it would be the best at AI and set a target date of 2030 to achieve AI supremacy. The country is rapidly moving toward a technologically sophisticated, authoritarian surveillance state. In some urban areas, Chinese citizens pay with their faces. (How far are we from this replacing credit cards? Not long, I bet.) In the city of Shenzhen, jaywalkers identified through facial recognition have their faces put up on public signs to shame them. (Is jaywalking that shameful?)

The AI age is driven by data, and so China thinks of itself as the "Saudi Arabia of data." Everything is monitored. Loans can be approved through apps in eight seconds. When a person applies for a loan through an app, the system runs through more than 5,000 data points about the applicant. These data points can seem random, but the algorithms show correlations between such things as cell phone battery life and someone's dependability in paying back a loan. So people with a lower average battery life are deemed less trustworthy and less likely to get what they want. Even the confidence with which the applicant strikes the keypad can be one of the many factors in determining a final score.
The Frontline documentary mentioned the potential for a "citizen score" for every individual, developed through the data gathered on that person. A person could be eligible for discounts or targeted for punishment depending on whether the score qualifies him or her as a "good citizen" or a "bad citizen." That seems truly Orwellian, doesn't it? Think about how this could be used politically. It's not pretty.
And obviously, we are also a source of massive data gathering. Companies are already collecting our information to sell to advertisers. Governments — and not just our own — have an interest in holding our data, too. Remember, if you get something for free, then you are the product, not the consumer.
The AI revolution will also eliminate many jobs. And yes, there's always talk that new jobs will be created with any elimination. I'm sure there will be some new jobs we can't imagine now. But at what scale? Will there be enough jobs to offset massive new efficiencies? For instance, what if 300,000 truckers are put out of work due to driverless technology that allows companies to keep freight on the highways 24 hours a day instead of the 11 hours a human driver is allowed? The documentary focused on one company that is already putting driverless tractor trailers on the road to haul freight. How long will it take for humans to be eliminated from this role? And think of all the secondary jobs that vanish when a primary one is gone. Driverless vehicles don't need meals along the interstate like truckers do.
Trucking is just one job, but many more could be on the chopping block as companies find new efficiencies (job cuts) through AI.
Meanwhile, AI advancements are happening as "quantum computing" is being developed, which could increase computing power at an exponential rate. I don't even know how to talk about this. I've read explanations of "quantum computing" and I still don't get it. Look it up if you haven't already. Maybe you can dumb it down for me. But I know the general outlook: computers are going to get stupid fast and powerful in a few years.
We already live in a fantasy world. Hand-held computers connecting everyone seemed sci-fi just 20 years ago. And the implementation of upcoming AI and quantum computing technologies will be just as mind-blowing — or perhaps exponentially higher on the “wow” chart.
It's ironic to me that technology, which is developed to make all endeavors easier for humanity, ultimately carries the curse of being too good at that task. We don't want technology to take away actual work from people who need to work. Yet that's where we're headed, at a massive scale.
I know, I know. Stop! There's too much gloom and doom in looking at this. The Frontline documentary was honestly too much. I cut it off a few minutes before it ended. I just couldn't take it anymore. So, if you've stuck through this rehash of a gloomy documentary, then you deserve a couple more points on your future citizen score.
Or, maybe I’ll just say, “Dang, you’re a lot nicer than a robot!”
Zach Mitcham is editor of The Madison County Journal. He can be reached at email@example.com.