Promise and Potential Peril

By Annie Athon Heck

The term “artificial intelligence” is everywhere today. It is written about in news stories and social media, and discussed on TV talk shows and morning radio. And it is becoming more and more prevalent in the daily lives of people around the world. From cell phones and personal assistants to internet searches and photo sharing, artificial intelligence — or “AI,” as it is commonly known — helps power the systems at the foundation of these and many other tools.

Although it is now becoming ubiquitous and well-known, the field of artificial intelligence can be traced to World War II and the code-breaking work of English computer scientist Alan Turing, best known for cracking intercepted messages that helped the Allies defeat Germany. The field of AI research was founded years later at a workshop at Dartmouth College in the summer of 1956.

Artificial intelligence is closely related to the growing realm of robotics as well. The notion of autonomous robots emerged as far back as 1818 with the publication of Mary Shelley’s Frankenstein. Today, artificial intelligence lies at the heart of robotic systems that already help scientists gather and analyze data with implications for agriculture and food supplies, human health, national security and a host of other applications.

Tom Dietterich, professor emeritus in computer science at Oregon State University, is a leading expert on the topic of artificial intelligence. He is a co-founder of the field of machine learning and has published more than 200 scientific papers. Dietterich served as president of the International Machine Learning Society from 2001 to 2008, as president of the Association for the Advancement of Artificial Intelligence from 2014 to 2016 and as executive editor of the journal Machine Learning from 1992 to 1998.

Terra writer Annie Athon Heck recently interviewed Dietterich about artificial intelligence, its relationship to robotics and its future.

Terra: What exactly is artificial intelligence, or "AI," as it is commonly known?

TD: Artificial intelligence is a collection of techniques in computer science for programming computers to do things that only people can currently do. Often, these are things we associate with human intelligence, such as planning an international trip, writing a complex news story or controlling a robot. Right now, only people are able to do these things. The field of artificial intelligence tries to come up with ways to program computers to do them instead. The most common way we do this is with machine learning: gathering data from humans about the things that people do. For example, if we are trying to understand human speech, we collect speech with a tape recorder. We next ask a person to tell us the words that were spoken. Then we try to figure out how to get the computer to take that speech as the input (someone speaking) and produce the words as the output (the text of the spoken words), such as dictating a text message into your phone.

Terra: You raised another question in your response. What is machine learning?

TD: Machine learning is a technique for programming computers using data. In machine learning, the computer learns by studying data rather than by following step-by-step instructions, which is the way computers are usually programmed. Early in my career, I wrote a program to do some accounting tasks. I interviewed an accountant and asked what steps he went through for certain tasks. Then I wrote down those steps and programmed them into the computer. Computers are great at following steps. But if you ask a person to tell you the steps they go through to understand the words that I'm speaking right now, they can't tell you. It's subconscious. You can't tell me the steps you go through. And that's why we need something more like machine learning, where we collect "input/output pairs," as we call them, and then try to develop some sort of computational program that can produce the correct outputs when given the inputs.
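
To make the "input/output pairs" idea concrete, here is a minimal sketch in Python using scikit-learn. It stands in for the speech example with the library's built-in handwritten-digit images: the inputs are small images, the outputs are the digit labels people assigned to them, and the learning algorithm, rather than hand-written rules, produces the mapping. The choice of data set and model is purely illustrative.

```python
# A minimal sketch of learning from input/output pairs with scikit-learn.
# Inputs: 8x8 images of handwritten digits. Outputs: the labels people gave them.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

digits = load_digits()                      # the collected input/output pairs
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=5000)   # no hand-written recognition rules anywhere
model.fit(X_train, y_train)                 # learn the input-to-output mapping from examples

print("accuracy on unseen examples:", model.score(X_test, y_test))
```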

Terra: There are many disciplines that study human thought and reasoning, such as psychology, philosophy, engineering and artificial intelligence. What do these disciplines have in common, and how do they differ?

TD: I think all of these disciplines are interested in understanding how people learn from their experience in the world. I got my start in computer science, coming from the philosophy of science. The philosophers' methodology is primarily to think clearly and carefully about the questions themselves: How can scientists learn about the world? Is science self-correcting? And so on. Psychology directly studies humans and tries to measure and assess how humans think, and where and how our thinking fails in some way: for example, when we draw incorrect conclusions. Artificial intelligence differs because we approach these problems purely as engineering problems. So, we don't study people directly. We collect data from them, but we don't study them. For example, we might have people look at images, tell us what objects they see and draw a box around each one. But we don't look at their brains to try to understand how they're working.

Terra: Looking into the future, what is realistically possible when it comes to AI?

Software for this SandBOX was designed by Behnam Saeedi, an OSU computer science student, and the hardware by Scott Laughlin, an OSU mechanical engineering graduate. Users create terrain maps by physically shaping the sand, which prompts an Intel RealSense depth camera (see image below) above the box to project changing light patterns onto the sand, coloring the resulting elevated or depressed areas and rendering them as topographic-style maps. Artificial intelligence relies on image processing and noise-reduction techniques to polish the final display, and optimized computer vision analysis estimates the profile of the sand. SandBOX was developed in OSU's Create IT Collaboratory as part of the TekBots program. (Photo: Ian Vorster)

TD: I don't know what's possible in the long run, but in the short run there's certainly a lot of discussion about self-driving cars, robotic surgery and more intelligence in military systems. Let's look at the self-driving car case. It's quite controversial, even among AI people, how far we can go with complete autonomy in self-driving cars. Most AI systems work best when they're working as an assistant to a human, such as a navigation system. We are in the driver's seat, literally. Similarly, we ask Google questions. When we get back answers, we decide whether to believe them or not. And we can tell if Google completely misinterpreted our question, or if we need to rephrase it until we figure out the right way to ask it. Back to autonomous cars, we now have some automatic braking and lane-keeping assistance. But it's clear that if we automate too much, then drivers stop paying attention, and we have fatal crashes, like the Tesla crash and the pedestrian who was killed by a self-driving Uber vehicle. The question facing us is: Can we really achieve 100 percent automation so that it is safe for the driver to not pay attention? The jury is still out on that. I don't know. The car is driving in a very complex and open world. It's not a fixed problem like playing chess. These errors are life-and-death errors, not just getting the answer wrong on your query to Google. So, I think it is important to proceed very cautiously.

Terra: Is this where “robust” artificial intelligence, a term that we are beginning to hear and read about more, comes into play?

TD: Yes. To apply AI safely in these kinds of high-risk applications, we need the technology to be much more robust. This means that the system will have the desired behavior even when the assumptions underlying the system are violated: for example, when the training data are noisy or a human operator makes mistakes. A culture of safety in the creation and testing of software must be adopted. That's much more difficult for software that's created with machine learning, because with machine learning we train the software on a collection of training examples, but we ask the machine-learning algorithm to interpolate or generalize from those training data to decide how to act in all situations, even though we haven't shown the computer all possible scenarios. When you program a computer by hand, you think through all possible situations, and you write logic that's supposed to cover all of them. Of course, programmers fail at this. That's why app software crashes on our phones. That's a big challenge. Another challenge is that our AI systems have their own model of the world. For instance, a self-driving car keeps track of where it thinks the road is, just like the blue dot on Google Maps does now. But it also has to track where all the other cars, pedestrians, bicyclists, trees, signs and the edges of the lane are, anticipate what all those things are going to do and choose actions to deal with them. There is a fixed set of objects that those systems know about. What happens when a new object shows up? This is known as the "open category problem," because the set of objects or obstacles is an open set. There are new possibilities all the time. This is one of the things we have studied here at Oregon State: How can we create systems that are guaranteed to be able to detect all of those new objects in addition to the things they know about?
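
As a toy illustration of the open category problem, the sketch below shows one simple way a system could avoid forcing a familiar label onto something it has never seen: it flags any input that sits too far from its training examples as "unknown." The feature values, labels and distance threshold are invented for the example; this is not the method developed at Oregon State.

```python
# A simplified open-category sketch: reject inputs that are far from anything
# seen during training instead of assigning them a known label.
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Training examples for the known categories (2-D features, made up for illustration).
known_examples = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]])
known_labels = ["pedestrian", "pedestrian", "car", "car"]

nn = NearestNeighbors(n_neighbors=1).fit(known_examples)

def classify(x, threshold=1.0):
    """Return a known label, or 'unknown object' if x is too far from the training data."""
    distance, index = nn.kneighbors([x])
    if distance[0][0] > threshold:
        return "unknown object"      # novel category: defer to a human or act cautiously
    return known_labels[index[0][0]]

print(classify([5.1, 5.0]))    # close to the "car" examples
print(classify([20.0, -3.0]))  # far from everything seen in training
```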

Intel RealSense depth camera. (Photo: Ian Vorster)

Terra: That leads to the next logical question. What are the weaknesses of current AI systems? And what are the prospects to make them more robust and high-performing?

TD: One is what we just touched on: the open- versus closed-world problem. The other is that training data are never complete. If you visit Garmin, the navigation technology company, in Salem, it has a group that develops avionics software for the airline industry. That software is incredibly carefully engineered and tested. Garmin verifies with high confidence that it will behave correctly. We don't have a similar set of standards or practices yet for AI software in ground transportation. There's no regulatory body for general AI software like the Federal Aviation Administration that requires extreme rigor in these systems. That's an area where I think we need much more attention to figure out how to test and audit these systems.

Because these systems are part of a larger human and social system, it’s important that the functional organization around the computer system is also reliable.

Terra: You spoke earlier about some major areas where Oregon State is advancing artificial intelligence. What are other key focus points for OSU with this growing technology?

TD: We have quite a large group in artificial intelligence and robotics. Our work really runs the gamut. On the AI side, we’ve had a machine learning group for many decades. We work on computer vision and language understanding. We have experts in automated planning.

We develop AI methods for ecology, bioinformatics and genomics. In our machine learning group, we're looking at problems of fraud detection and cybersecurity using a technique known as "anomaly detection." Anomaly detection algorithms look for unusual images, decisions or outliers in data that might signal fraudulent transactions or failures in a computer system, either because the computer system was compromised by a cyber attack or because some part of it is broken.
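
As a rough illustration of that idea, the sketch below runs scikit-learn's IsolationForest over some made-up transaction records and flags the rows the model considers outliers. Real fraud and intrusion detection systems are far more elaborate; this only shows the basic "flag the unusual cases" pattern, and the numbers are invented.

```python
# A minimal anomaly detection sketch: fit a model to mostly normal data and
# report the records it scores as outliers.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=[50.0, 12.0], scale=[10.0, 2.0], size=(500, 2))  # typical activity
odd = np.array([[400.0, 3.0], [55.0, 40.0]])                             # unusual records
data = np.vstack([normal, odd])

detector = IsolationForest(contamination=0.01, random_state=0).fit(data)
labels = detector.predict(data)          # -1 marks points the model considers anomalous

print("flagged rows:", np.where(labels == -1)[0])
```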

One of my own projects is part of the Trans-African Hydro-Meteorological Observatory (TAHMO) effort led by OSU professor John Selker. The goal of TAHMO is to create and operate a network of 20,000 weather stations in Africa. My role in the project is to detect when weather stations are broken and need a visit from a technician. My AI tools look for unusual behavior in the numbers coming out of a weather station that indicates something is wrong.
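
A hedged sketch of that kind of check appears below: it compares each new sensor reading with the station's recent history and flags values that fall far outside the usual range. The data, window size and threshold are invented for illustration; they are not the actual TAHMO tools.

```python
# A simple fault-detection sketch for a weather-station sensor: flag readings
# that deviate sharply from the recent rolling mean.
import numpy as np

def flag_suspect_readings(readings, window=24, z_threshold=4.0):
    """Return indices of readings far outside the recent rolling average."""
    suspects = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mean, std = np.mean(recent), np.std(recent)
        if std > 0 and abs(readings[i] - mean) / std > z_threshold:
            suspects.append(i)           # reading looks wrong: schedule a technician visit
    return suspects

temps = list(20 + np.random.default_rng(1).normal(0, 0.5, 100))  # simulated hourly temperatures
temps[60] = -40.0                        # a stuck or failed temperature sensor
print(flag_suspect_readings(temps))
```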

Terra: Again, there’s that human element where data are looked at critically with human interpretation, which goes beyond the AI systems.

TD: Yes, exactly. We also do a lot of work at OSU in what's called "human-robot interaction." Typically, robots are kept in a separate cage because it's unsafe for them to be around people directly: if there's an error in the programming, the robot might smack you in the head if you're standing in the wrong place. One of the big goals is to figure out how we could make robots intelligent enough and understandable enough that you can read their body language. Then you would know when it was safe to hand them something, take something from them or interact with them more directly. We're also looking at making robots out of soft materials instead of steel bars. This is known as "soft robotics." In the robotics program at OSU, there is research on walking robots, swimming robots, spider robots, flying robots, wheeled robots, all kinds of things. There are single robots, groups of robots and combined groups of humans and robots working together. So, there is a lot of activity being pursued at Oregon State in both artificial intelligence and robotics.

Terra: There is plenty of speculation about some of the perils of AI. Are these warranted or is Hollywood influencing this thinking?

TD: In Hollywood, there are basically two main stories. There is the Pinocchio story, where Commander Data wants to be a real boy. And there is the Terminator story, where the robots somehow turn on us. I think the most accurate view of the future is from 2001: A Space Odyssey. There, we have the HAL 9000 computer. HAL doesn't "go rogue." HAL was just incorrectly programmed. The programmers set up HAL to put a higher value on the mission's success than on human life. And so, HAL just followed the values it had been given. The most important thing to remember about AI software is that it is software. And we all know it is very hard to program software so it does what we want it to do. We see this every day on our phones and our laptops when we have to kill programs, reinstall apps or reboot our machines.

The big challenge is how we define what is correct and appropriate behavior for these systems and then reliably create software that produces that behavior. That is very difficult. So, the biggest risk is the HAL 9000 risk: that we program the system incorrectly and then give it autonomous control over something. I think one of the most important research areas is precisely this area of building AI systems that are robust and safe, so that we can have a higher degree of confidence that they'll behave correctly.

Illustration by Christiane Beauregard.

The Top Five Successes So Far

From global businesses to driving directions, artificial intelligence is making a significant impact in our daily lives. These are some of the top AI successes so far, according to Tom Dietterich, OSU professor emeritus of computer science:

PLANNING AND LOGISTICS: Package routing that is now common with companies like Amazon, FedEx and others.

COMPUTER VISION: Face detection and face recognition. This is being used now by Apple to unlock phones, by Facebook to tag friends in photos and by several photo sorting applications. Security and law enforcement use it as well.

SEARCH ENGINES: Queries through search engines like Google use many different AI techniques to answer billions of questions a day.

SPEECH RECOGNITION: Personal assistants like Siri and Alexa call people, play music and find recipes, among other things.

LANGUAGE TRANSLATION: Google Translate and Skype Translator enable spoken conversations between people who don’t speak the same language.