Artificial intelligence is starting to get creepy

Wednesday, September 06, 2017 - 4:00 AM

DON PORTER, special to the Standard-Examiner

When I was a younger, more naïve version of myself, I used to watch movies like “WarGames” and “Blade Runner” and think to myself: “That’s entertaining, but decision-making artificial intelligence is completely unrealistic.”

Like I said, I was naïve.

Now, of course, lots of companies are making automobiles that drive themselves. It’s so real that long-haul truckers are starting to get worried about robots taking their jobs.

But if you want to get really wigged out about the future of technology, read The New York Times story headlined, “Teaching A.I. Systems to Behave Themselves.” According to “the failing New York Times,” as our president calls it, tech-heads at a lab called OpenAI are studying ways to make sure human-created artificial intelligence doesn’t go rogue.

The worry: “(A)s these machines train themselves through hours and hours of data analysis, they may also find their way to unexpected, unwanted and perhaps even harmful behavior.”

It’s one thing to have A.I. improvising as it learns to, say, play video games. But the article points out that free-thinking or easily manipulated A.I. in fields like “online services, security devices and robotics” would be another thing altogether. Malicious hackers could use A.I. to do real damage, and the A.I. might even choose to do so on its own.

They call these creations “autonomous systems” since they can function on their own; humans aren’t necessarily necessary. This may sound basic, the scientists say, but through a technique called “reinforcement learning” – sounds a lot like parenting to me – “robots have already used the technique to learn simple tasks like picking things up or opening a door.”

Given the ability to learn, these autonomous systems could progress pretty quickly, and so scientists are trying to build in the need for the A.I. to accept “human guidance” to “ensure systems don’t stray from the task at hand.” They’re even calling it “A.I. safety research.”

Creepy, huh?

This was the line from the story that made me think of “The Terminator” and its world-dominating A.I. system “SkyNet.” As The Times’ story says, “Another big worry is that A.I. systems will learn to prevent humans from turning them off.”

Well, there you go – can it really be long before marauding, gun-toting robots are sweeping through city streets targeting all humanity? We’ll just have to decide which Arnold Schwarzenegger lookalike cyborg to trust, and which one to dip into a red-hot pool of liquid metal.

Look, I love the convenience of technological advancement. The iPhone I tote with me everywhere is amazing. I love that Google Maps helps me find restaurants. I even appreciate some of the advertising that is directed my way based on my web searches and social media habits.

I especially look forward to self-driving automobiles in my price range – actually, that’ll probably never happen – because I hate driving.

But I’m nervous about technology getting out of hand. At the risk of sounding like the Unabomber, we’ve become a little too dependent on gadgets and the convenience provided by A.I.-type systems. I sincerely hope the folks at OpenAI and their partners working to keep A.I. in check are successful.

I look forward to a world in which technology serves us, not the other way around.

Email Don Porter at and follow him on Twitter @DonPondorter.