AMU Cyber & AI Editor's Pick Original

Artificial Intelligence and the Societal Influence of Machines

American Military University STEM Dean Dr. Ahmed Naumaan continues his discussion of where strong AI is headed in our society. Will machines ever be genuine thinkers? It's all in the programming, according to Dr. Naumaan. Even genetic algorithms, which are susceptible to mutation and can produce new responses, still fall under weak AI in his view. Watch to learn more about the future of AI.

Video Transcript

Artificial intelligence is being used in lots of places: in surveillance, in predicting crime, and in deciding who to give loans to. The problem is that there have been identified cases where people's racial and cultural biases get programmed in. The developers don't think of themselves as biased, and they're not trying to build bias into the system. But if you have a certain perspective, that's how you build whatever is being built. Then it goes into operation, and all of a sudden entire classes of people are being disadvantaged because the system is biased. That's the problem.

And if you look at the set of people who build these systems in this country (let's put it that way, because every country is different), a certain class of people goes into the field. They tend to come from the privileged segment of society, because they have access to the education and so on and so forth. So whatever their biases are, they get built into the system.

Get started on your cybersecurity degree at American Military University.

And so it's important for managers to pay attention to how these systems are being built. First of all, it has to be part of the design process: people from multiple perspectives and backgrounds need to be involved in the design process. They don't have to be coders. They don't have to be mathematicians. They don't have to be computer scientists, but they have to be part of the design process. What are you building? What is it going to do? How is it going to do it?

What rules will be used in making decisions? That doesn't require technical knowledge in the sense of computer science or engineering or what have you, but it does require thinking as a human being within a human society. So that portion is important. Then there is testing those systems, because very often the systems get deployed without really being tested with cases that probe the system with regard to saying: is this the right decision or the wrong decision?

If you've never given the system a case where it comes back with a negative outcome, you haven't really tested it. So you say, well, it works every time I ask the question: every time it gave a loan, it gave a loan to the right person. And then you deploy it, some case arises, and the system discriminates, and people think, oh, it's legitimate. But that's because it wasn't tested, and nobody wants to question the computer, even today.
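The testing gap described here can be sketched in a few lines. The loan rule, thresholds, and the zip-code proxy variable below are entirely hypothetical, invented for illustration; the point is only that a test suite made up solely of approvals never exercises the path where bias shows up.

```python
# Hypothetical, deliberately flawed approval rule: a proxy variable
# (zip code) sneaks a discriminatory criterion into the decision.
def approve_loan(income: int, zip_code: str) -> bool:
    return income > 40000 and not zip_code.startswith("9")

# "Happy path" tests that only ever expect approval all pass,
# so the rule looks fine:
assert approve_loan(50000, "10001")
assert approve_loan(80000, "20002")

# A probing test that compares two applicants who differ only in
# zip code exposes the bias: same income, different outcome.
assert approve_loan(50000, "10001") != approve_loan(50000, "90001")
```

A test suite built only from cases the system is expected to approve would never run that last comparison, which is exactly the blind spot being described.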

I mean, 10, 15, 20 years ago I personally had experiences like this. You call up a company and say, this is wrong. Well, I'm sorry, I can't do anything about it; the computer program works that way. With AI that's only going to get worse, because it's a more complex system. It's going to be harder to change. Its consequences are going to be widespread, and that's a real challenge. And that's a real danger.


So this stuff that Hawking and Musk and others go on about, machines taking over the world: that's nonsense. I don't think that's an issue at all. The issue is very mundane. Have you built a system that discriminates against certain classes of people? There are probably systems out there like that right now.

The question that then arises is: is it something that you have programmed to respond in that fashion? For instance, think about chatbots; there are many chatbots available on the Internet. You say something, the bot responds, and you say something else in response to that. In fact, these chatbots go back to the 1960s.

One of the very first ones, before they were even called chatbots, was ELIZA, and it was supposed to act like a psychologist. People really related to it; they would just tell it their life story, and so on and so forth. But it was just a set of programmed responses to certain keywords. And that is what is present today in the chatbots you encounter. They may be a little more sophisticated in recognizing keywords and putting multiple keywords together, but the responses are essentially pre-programmed.
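The ELIZA mechanism described above can be sketched very compactly. The keywords and canned replies below are illustrative stand-ins, not ELIZA's actual script; the point is that the program selects a pre-written response by scanning for keywords, with no model of meaning anywhere.

```python
# Minimal sketch of an ELIZA-style chatbot: every reply is chosen by
# keyword matching against a fixed table, so nothing the program says
# reflects any understanding of the conversation.

RULES = {
    "mother": "Tell me more about your family.",
    "sad": "I am sorry to hear you are feeling that way.",
    "always": "Can you think of a specific example?",
}
DEFAULT_REPLY = "Please go on."

def respond(user_input: str) -> str:
    text = user_input.lower()
    for keyword, canned_reply in RULES.items():
        if keyword in text:
            return canned_reply  # pre-programmed: no meaning involved
    return DEFAULT_REPLY  # fallback when no keyword matches
```

Calling `respond("My mother is kind")` returns the family prompt, while any input with no matching keyword falls through to the generic "Please go on." Modern keyword chatbots refine the matching, but the stimulus-response structure is the same.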

And that is something called weak AI. In other words, it simulates the behavior of a human being, or the responses a human being would provide, but it's pre-programmed; nothing new arises when it encounters something, because everything is essentially pre-programmed. Now, in my opinion, that can actually change if you use certain types of algorithms, like genetic algorithms, which are susceptible to mutation and can therefore give rise to new responses. But I would still put that under the classification of weak AI. That is distinguished from strong AI, where the computer also responds like a human being, but it's actually thinking: it understands the meaning of what it's saying.
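The mutation idea mentioned here can be illustrated with a toy genetic-algorithm operator. This is a sketch under invented assumptions (a response is just a string of characters, and mutation randomly replaces characters), not a full genetic algorithm; it only shows how mutation can yield outputs no programmer explicitly wrote.

```python
import random

ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def mutate(genome: str, rate: float = 0.1) -> str:
    # With probability `rate`, replace each character with a random
    # one from the alphabet; otherwise keep it unchanged.
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else ch
        for ch in genome
    )

# Start from one hand-written response and generate mutated variants.
seed_response = "that looks delicious"
variants = {mutate(seed_response) for _ in range(5)}
```

Most variants are novel strings nobody pre-programmed, which is the sense in which mutation produces "new" responses. In the view expressed above this is still weak AI: the outputs are new, but no understanding stands behind them.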

So when you show it an apple and it says, oh, that apple looks delicious, it actually understands what delicious means, and the satisfaction one gets from eating something like that. As opposed to weak AI, which just recognizes: oh, this is an apple; somebody is showing it to me and asking, do you like it? I'm going to respond by saying it's delicious. It doesn't really know what's going on. It's just stimulus-response behavior. Whereas if the apple were shown to a human being, they could look at it and appreciate it for what it is, its smell, its taste, its texture, and so on and so forth, and respond in that particular fashion.

So there is the sense of meaning, and the larger question is: can something that is not aware of itself actually understand meaning? This gets into both philosophy and technology, and it's wide open right now. We don't know what will happen. Will there someday be a genuine thinker? Absolutely, I think someday there will be a genuine thinker, but that day is not close. It's probably decades in the future.


Wes O'Donnell

Wes O’Donnell is an Army and Air Force veteran and writer covering military and tech topics. As a sought-after professional speaker, Wes has presented at U.S. Air Force Academy, Fortune 500 companies, and TEDx, covering trending topics from data visualization to leadership and veterans’ advocacy. As a filmmaker, he directed the award-winning short film, “Memorial Day.”
