Will A.I. Ruin Us?
- BY KRISTA DOSSETTI
- PHOTOGRAPHY BY GARVIN TSO
- September 1, 2016
Professor Matt Johnson, who has been with 麻豆传媒社区入口 since 1999, began studying political science at Xavier University at the age of 15. Though his parents might have preferred him to become a lawyer or a doctor, the teenage prodigy was fortunately distracted by video games and the seminal era of computer programming — “punch cards and Pong,” as he says.
After leaving Xavier with degrees in both political science and computer science, Johnson obtained a master’s in computer engineering at Michigan State — where his first foray into artificial intelligence occurred — followed by a Ph.D. in computer science at William & Mary, making him, he reports, one of the first students to complete a doctoral degree in that discipline from the university. Since then, the professor has worked on AI projects such as in-flight pilot automation for NASA, air-traffic control strategies, and military surveillance, and has published a textbook on computational theory, among other work. Here, amidst fears of artificial intelligence evolving beyond human control, Johnson answers questions about what he believes are the real concerns with AI. Hint: It isn’t robots falling from the sky.
IDEAS ABOUT THE POWER AND CONSEQUENCES OF ARTIFICIAL INTELLIGENCE SPAN THE GAMUT — STEPHEN HAWKING HAS SAID IT COULD BE THE END OF HUMANITY. WHERE DO YOU STAND?
People are afraid of AI, largely because of how it’s portrayed in movies — that it’s going to take over the world and giant robots are going to come out of the sky. At a cocktail party when someone hears I work in AI, the first question is always about Skynet. (Laughs.) No, I’m not afraid of that.
There is a contingent of people, however, well-known people, who believe in what’s called “strong AI” (also known as artificial general intelligence), which is about getting the computer to actually be intelligent, as opposed to merely seeming intelligent because it gets the right answer.
There’s no harm in researching [strong AI] and doing things with it in my opinion, but trying to [get computers to] act human isn’t the right approach. We should be continuing to come up with systems that make the best decisions given the information that they know. That’s not necessarily about being self-aware or conscious or philosophical, it’s just a program solving one particular task.
SO AI ISN’T ABOUT TRYING TO RECREATE OR MIMIC THE WAY THE HUMAN BRAIN WORKS?
No, it’s not about trying to build your thinking, feeling artificial friend. And it’s not even about understanding the human brain. To me, that’s cognitive science. And, it’s also incredibly arrogant — humans [don’t possess] the only kind of intelligence. It is, though, about trying to solve problems that are hard and complex, and that are inefficient for humans to solve by their nature.
YOUR SPECIALTY IS MACHINE LEARNING?
Yes. Machine learning, which is the design and construction of algorithms that can learn from and make predictions on data. Modeling solutions after observable phenomena in nature is what I dig most, though, so my main area of research is genetic algorithms — a problem-solving technique modeled on natural selection and population genetics that teaches a computer program how to create and run other programs. It’s very meta, and I think, very cool.
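For readers who want to see the idea in code, here is a minimal genetic-algorithm sketch in Python. It evolves bit strings toward a trivial target (all ones); the toy fitness function, population size, and mutation rate are illustrative choices for this article, not drawn from Johnson’s research.

```python
import random

# Toy genetic algorithm: evolve bit strings toward all ones ("OneMax").
# The problem, population size, and rates are illustrative choices only.

GENOME_LEN = 20
POP_SIZE = 50
MUTATION_RATE = 0.01
GENERATIONS = 100

def fitness(genome):
    """Count of ones; the 'environment' that selection optimizes for."""
    return sum(genome)

def tournament(pop, k=3):
    """Pick the fittest of k random individuals (selection pressure)."""
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    """Single-point crossover, loosely analogous to genetic recombination."""
    point = random.randint(1, GENOME_LEN - 1)
    return a[:point] + b[point:]

def mutate(genome):
    """Flip each bit with a small probability (mutation)."""
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    # Each generation: select two parents, recombine them, mutate the child.
    population = [mutate(crossover(tournament(population), tournament(population)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print(f"best fitness after {GENERATIONS} generations: {fitness(best)}/{GENOME_LEN}")
```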
I’ve also done work throughout my career in neural networks, which is teaching machines how to do stuff based on the biology of how humans or animals learn to do things, and I’ve done a lot of work in ant colony optimization. When you see ants scurrying around looking for food, they’re actually doing that based on a firm mathematical model and pheromone tracing — they’re following a scent. You can use that information to send out ‘ants’ in virtual time to see how long it takes to get a reply back that something was received, like an email. Eventually, you’re able to create a picture of the network topology and make adjustments to things like a failed server really quickly.
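The pheromone-trail mechanism he describes can likewise be sketched in a few lines. This toy ant colony optimization routine sends virtual ants across a small, made-up graph (think of the edge costs as link latencies) and lets pheromone build up on the cheaper route; the graph, ant count, and evaporation rate are all invented for illustration.

```python
import random

# Toy ant colony optimization: ants walk from "A" to "D" on a small weighted
# graph, and shorter tours deposit more pheromone, so the colony gradually
# favors the cheap route. Graph and parameters are hypothetical.

graph = {            # edge costs, e.g. link latency in ms (made up)
    "A": {"B": 2, "C": 5},
    "B": {"D": 2},
    "C": {"D": 1},
}
pheromone = {(u, v): 1.0 for u in graph for v in graph[u]}
EVAPORATION, DEPOSIT = 0.5, 10.0

def walk(start="A", goal="D"):
    """One ant picks edges with probability proportional to pheromone / cost."""
    node, path, cost = start, [start], 0.0
    while node != goal:
        choices = list(graph[node].items())
        weights = [pheromone[(node, nxt)] / c for nxt, c in choices]
        nxt, c = random.choices(choices, weights=weights)[0]
        path.append(nxt)
        cost += c
        node = nxt
    return path, cost

for _ in range(50):                       # iterations of the colony
    tours = [walk() for _ in range(10)]   # 10 ants per iteration
    for edge in pheromone:                # pheromone evaporates everywhere
        pheromone[edge] *= (1 - EVAPORATION)
    for path, cost in tours:              # cheaper tours deposit more scent
        for u, v in zip(path, path[1:]):
            pheromone[(u, v)] += DEPOSIT / cost

print("strongest trail:", max(pheromone, key=pheromone.get))
```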
PRIVACY ISSUES ARE HUGE THESE DAYS. DOES AI PLAY A ROLE IN THAT?
That’s also a fear for some people, that AI is being used to monitor their activities and knows them. But whatever is being done or looked at is being done in a singular frame of reference — a single webpage, a single phone conversation. To link all those things together is incredibly complicated.
So yes, [AI] has the possibility to watch what I’m doing when I’m shopping online or talking on my iPhone and to respond to that information in a limited context — but AI is never going to be able to create a completely accurate picture of who I am.
What AI can do by itself isn’t the issue, though; it’s what’s done with it that’s the problem. People gathering information about you without your knowledge or ability to control it is ethically disconcerting.
ARTIFICIAL INTELLIGENCE IS BEING HAILED AS THE FOURTH INDUSTRIAL REVOLUTION — A RECENT STUDY BY THE WORLD ECONOMIC FORUM ESTIMATES A NET LOSS OF FIVE MILLION JOBS DUE TO AI IN THE NEXT 20 YEARS.
It’s a real challenge, and I think it will happen a lot in China because they’re investing so much in robotics for manufacturing. When people make decisions about whether to automate something, they’re focusing on a personal bottom line, not what happens to the workers. As a society, the more we automate, the more we have to realize people are getting pushed out of the workforce, and that can’t be the end of the discussion — there needs to be an effort to educate people in the skills that are needed.
AND HOW IS CAL STATE EAST BAY HANDLING THAT EDUCATION?
Well, it’s been pretty well publicized that the number of women and minorities working in Silicon Valley is pathetic. And it’s even more of an issue when you look at the demographics of the region, which are incredibly diverse. These may or may not be people whose jobs could be eliminated by AI in the future, but they’re people who aren’t gaining traction in a career field with huge growth and huge demand, and who can’t even imagine themselves working in technology. There are literally a million new jobs to be had in the next decade.
What I’m proud of is how diverse the students in the 麻豆传媒社区入口 computer science program are, and how we buck enrollment trends nationally. Our undergraduate program has more than three times the number of Latino/Hispanic students compared to the national average, and 45 percent of our graduate students are women — that’s about double the average across the United States. We are serving as a model of success for what computer science programs need to look like in this country for the sake of our economic and national security.
SO WITH ALL THE GROWTH THAT’S EXPECTED, WHAT ARE YOUR PREDICTIONS FOR WHERE AI IS HEADED NEXT?
I think the surge in AI startups will continue as giants like Facebook and Google invest more and more in AI technology, but as far as specifics, definitely drones. The U.S. Federal Aviation Administration has released regulations for registering drones and is testing technology that could help automate air traffic control — we should see that this year.
And the Internet of Things will continue to expand; all the physical objects, devices, buildings, etc. that are embedded with electronics. Devices will coordinate more, and therefore seem smarter. And this plays directly into more affective computing systems. It’s becoming more and more important for users to feel personally connected to their devices, especially cell phones. Many AI students study psychology, philosophy, and cognitive science, and there’s a big thrust in the field to create programs that understand a user’s mood and desires. You can do that through gathering data on things like what emojis are being used, and even the strength of a person’s keystroke. Happy people type differently than angry ones. And then maybe you have a recommender system that correlates a user’s mood with their search history or something — a little thing could pop up on the screen and offer directions, for example. A lot of things are coming together.
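As a deliberately simplified illustration of that affective-computing idea, the sketch below scores a user’s mood from emoji choice and keystroke force and lets a toy recommender adjust its suggestion. Every feature, threshold, and message here is hypothetical, made up for this article rather than taken from any real system.

```python
# Toy affective-computing sketch: infer a rough mood from emoji use and
# keystroke force, then shape a recommendation. All features, weights,
# and thresholds are invented for illustration.

HAPPY_EMOJI = {"😀", "😊", "🎉"}
ANGRY_EMOJI = {"😠", "😡", "💢"}

def mood_score(emojis, avg_keystroke_force):
    """Positive score ~ happy, negative ~ angry (hypothetical heuristic)."""
    score = sum(1 for e in emojis if e in HAPPY_EMOJI)
    score -= sum(1 for e in emojis if e in ANGRY_EMOJI)
    if avg_keystroke_force > 0.7:   # hard typing treated as a frustration cue
        score -= 1
    return score

def recommend(query, emojis, avg_keystroke_force):
    """Correlate the inferred mood with the user's query to pick a response."""
    if mood_score(emojis, avg_keystroke_force) < 0:
        return f"Looks frustrating. Want step-by-step directions for '{query}'?"
    return f"Here are a few popular picks for '{query}'."

# Example: an angry emoji plus heavy typing nudges the system toward help mode.
print(recommend("coffee near me", ["😡"], avg_keystroke_force=0.9))
```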
What excites me the most is work on autonomous agents, but not driverless cars. I think real applications are still years away. Smart homes, maybe? My house is remarkably dumb. Or personal assistants. I could really use that — but not one automatically predicated on a female voice.