AI APU Business Cyber & AI Innovations in the Workplace Podcast

Podcast: How AI and Link Management Can Reduce the Spread of Misinformation

Podcast with Dr. Wanda Curlee, Program Director, School of Business and
Dan Prather, CEO of TelPowered Corporation

The internet is ever-evolving and so are the threats. In this episode, APU’s Dr. Wanda Curlee talks to Dan Prather, CEO of TelPowered, about his company’s work building link management strategies for businesses. Learn how replacing URL shorteners with branded links can help businesses maintain credibility with consumers because users can be confident about the source and accuracy of information. Also learn about the importance of ethics, including how ethics must be programmed into artificial intelligence systems, as well as the need to educate consumers about how to protect themselves, and their systems, from bad actors.

Listen to the Episode:

Subscribe to Innovations in the Workplace

Apple Podcasts | Spotify | Google Podcasts

Read the Transcript:

Dr. Wanda Curlee: Welcome to the podcast, Innovations in the Workplace. I’m your host, Wanda Curlee. Today, we are going to be chatting about artificial intelligence and the internet. With the COVID-19 outbreak, we have spent a lot more time at home and probably on our electronic devices. Today, my guest is Mr. Dan Prather, who is the CEO of TelPowered. He also has many years of IT and leadership experience in industry, including NCR and Blue Cross Blue Shield, to name a few. Dan, welcome to Innovations in the Workplace and thank you for joining me.

Dan Prather: Hi, Wanda.

Dr. Wanda Curlee: It’s good to have you here. Many of us spend many hours per day on our devices, either for work or for pleasure. We spend time on them to help us understand what’s going on out there. Can you explain a little how those pesky ads know us?

Dan Prather: Oh, you’re talking about the ads on the internet when you’re doing some search?

Dr. Wanda Curlee: Yes.

Dan Prather: Well, I think this has been an age-old issue with the internet and search engines and just about everything we do now; even your email has pesky ads in it, depending on what service provider you’re using. But I think at the end of the day, the issue we have with them is that everything we do on the internet is putting some kind of cookie or something on our machines.

And those things, unfortunately, are being used, depending on who you are and what you use your machine for, to gain a better understanding of our behaviors in search engines, our behaviors in discussions, our behaviors in just communicating with each other.

Where they become pesky is when there are bad actors out there. They’re ethically challenged, and they use that same information to feed us what we now have coined as fake news, or they can push different types of malicious software to our operating environments, which can include desktops, laptops, notebooks and, in some cases, our mobile footprints.

Dr. Wanda Curlee: Interesting, you mentioned fake news. That brings to mind that now with AI, artificial intelligence, bad actors, so to speak, can create digital photographs that you can’t even tell have been manipulated. How is that done, or can you provide us some feedback on that? And how will we ever combat that, to understand what’s real and what’s not real?

Dan Prather: If we look at industry, there’ve been a lot of great efforts to combat it, and there are a lot of good solutions out there. There are organizations like the IAB [Interactive Advertising Bureau], the 4 A’s [American Association of Advertising Agencies], and others that have taken great strides in putting measures in place to authenticate the advertising that comes through their partner companies.

As for the images that people download via search, of course, there’s always been a copyright issue. Search engines scrape just about anything they can find, and people download images without authorization.

Unfortunately, we’re not all educated to understand what is good in an image and what is bad. With every image, and putting on my educator hat for this podcast’s sake, there’s certain information in just about every file that’s produced, and it’s called metadata, right?

That metadata is going to let us know where something was created, the date and time it was created, and in some cases, the location where that file was created.

And if the author or the person that created the file is extremely savvy and has their stuff together, then they’re going to pre-fill that metadata and lock it into that file, with copyright information as well.

Now, from the human perspective, we typically don’t go check that information. We just want to get what we want to get, and then we keep on moving. But now, if you’re in business, we have some sophisticated tools: when we download information from the internet or from some of these clearinghouses that sell images, our tools, which are AI-specific, will automatically search that metadata and report back to us to let us know if there’s a copyright issue or a trademark issue or anything like that. So yes, there are some great strides being made in these arenas, but ethically the challenge is education, education, education.
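[Editor’s note: The kind of automated metadata check Dan describes might be sketched roughly like this. It is a simplified illustration: the field names and the `check_copyright` helper are assumptions, and a real tool would first extract the EXIF/XMP tags from the image file itself, for example with a library such as Pillow.]

```python
def check_copyright(metadata):
    """Return any rights-related fields found in an image's metadata.

    `metadata` is a dict of already-extracted tag names to values;
    a real tool would pull these from the file's EXIF/XMP blocks.
    """
    rights_fields = ("Copyright", "Artist", "Rights", "Creator")
    return {k: v for k, v in metadata.items() if k in rights_fields}

# Metadata as a tool might extract it from a downloaded image
meta = {
    "DateTimeOriginal": "2020:06:01 10:15:00",
    "GPSLatitude": "35.2271 N",
    "Copyright": "(c) 2020 Example Studio",
}
flags = check_copyright(meta)
if flags:
    print("Possible copyright restriction:", flags)
```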

Dr. Wanda Curlee: That seems to be the truth with just about anything these days. So we use the internet for many things, including research. It is hard to know if it is legitimate or not. So how can AI help us understand whether the research we’re looking at is truly legitimate or can it?

Dan Prather: That’s a good question. The obvious thing is that AI definitely helps us reduce the time to complete tasks, right? There are different approaches to AI. The three best-known applications are automating business processes, connecting customers with important resources (some people would say employees), and analytics, which gives us a better idea of how deep that information goes.

Some of those expected things, to answer your question, would be to seek and collect the data, of course, and to react and act on that data. And then, of course, to influence human decisions and actions. And it’s that influence on human decisions and actions where I really think we become ethically challenged, because even when the machine is doing everything, we take the actions.

We have to now police ourselves and police the stuff that we’re looking at to ensure that we’re making decisions on ethical information. When I say ethical, I’m talking about tried, true, authentic information from an authentic source.

Unfortunately, when we go back over the obvious, which is to reduce time and complete tasks, we often don’t care if it’s authentic or not. We just need the information. If it sounds good, we’re going to run with it. But now we have to police ourselves and challenge ourselves to ensure that we’re getting the correct information.

Dr. Wanda Curlee: You keep on harping on ethics, which I do too. I think it’s a very important aspect of our lives. How do we know that the AI that we’re using, those of us that use AI, has the correct ethics in it or our ethics?

Dan Prather: The AI is really not responsible for the ethics. The AI is nothing more than a program, right? So the program is going to do what it is programmed to do. And based on this programming, it’s going to continue to evolve to do specific things.

For us, when I say ethics, I’m more of an internet ethics person. And when we look at the internet, the internet just has so much information that it returns. We often find ourselves getting in trouble just looking for information, doing research, going to different portals that we thought were credible with the information that they provided, only to find that the links and the things that they were providing were either outdated, someone put different information on them, they weren’t maintained well, or the information didn’t have all of the source materials put together correctly so that, when we went to qualify that information, we could actually point to those resources.

Because that’s not being done heavily, we’re taking approaches now to really look at that stuff, because now with Alexa and Siri, all these things that are scraping this information to report back to us, I think we have a corporate responsibility to ensure that accurate information is being provided to the end users that depend on that information.

Case in point: the internet right now. If we search the internet, a lot of things use URL shorteners. Those became very popular. URL shorteners shorten URLs so that people don’t have to see all the gobbledygook that comes behind them. And it makes it a lot easier for you to just point and click and go to wherever the source says to go.

There’s really nothing to stop anybody from putting anything behind that URL shortener. So one of the things that we’ve been focused on, as you probably know, is working with companies to help people better identify with branded URL links rather than URL shorteners. So now link management becomes a major factor, because not only does it help customers focus in on their credibility from a branding perspective, but it also allows AI to identify with that base URL, to know that I’m getting credible information from a credible source.
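[Editor’s note: A minimal sketch of the distinction Dan draws, using hypothetical domain lists for illustration; a real system would maintain and verify these lists itself.]

```python
from urllib.parse import urlparse

# Hypothetical lists, for illustration only
KNOWN_SHORTENERS = {"bit.ly", "tinyurl.com", "goo.gl", "t.co"}
TRUSTED_BRAND_DOMAINS = {"links.example-brand.com"}

def classify_link(url):
    """Label a URL by what its base domain tells the reader."""
    host = urlparse(url).netloc.lower()
    if host in KNOWN_SHORTENERS:
        return "shortener"   # destination is opaque to the reader
    if host in TRUSTED_BRAND_DOMAINS:
        return "branded"     # the base domain itself signals the source
    return "unknown"
```

The point is that a branded base domain lets both a person and an AI judge the source at a glance, whereas a shortener hides it.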

Dr. Wanda Curlee: So Dan, could you go into a little bit more detail about branded URLs versus the shortened URLs? I think that’s a new concept for some folks on the branded URLs.

Dan Prather: Sure. Nothing is revolutionary here, but when we look at the stuff that systems are pulling from our websites, for instance, most websites have broken links. They have a lot of things.

Ads, for instance: advertising is posted and those ads gain impressions from people, but they ultimately want you to do what? Click on them. And once you click on that ad, it’s going to take you to an endpoint.

Dan Prather: The same thing with images. Images can be ads or they can be pictures, but if they’re on the internet, ultimately you click on them. That’s a link that’s going to take you to an end point.

This is the dilemma that we’re trying to deal with now. Because once again, you might have a bot program, or a process or a system, that goes out and looks for specific information and gathers specific links from that resource. For example, if you post a white paper and you have 40 resources in that white paper, well, my bot is going to go out there, search that site, find that paper, and start making decisions from all of that, so that I can be well informed on how to deal with my customers in a contact center, for instance.

If your information is bogus or your links are broken and I can’t authenticate that information, well, we’re going to have a problem, right? Because now my contact center, with its automation, is pulling all of this information from that source, and it can be incorrect information for the agents that I have on the line who are going to be searching to try to get this information to people.

Or if it’s in our IVR system, the IVR could be sourcing out incorrect information as well. Depending on the type of contact or care center, which we have a care center, depending on what type of system you have and how that’s mapped, giving people the wrong information could be dire in some situations.

So this is why we took up the challenge many years ago to really start looking at URL shorteners and getting people away from them toward branded linking solutions, which we coined as link management, or link choices.

The same methodology is used for ad choices. With ad choices, ads have a little symbol on them. People look at those ads, they know that they’re monitored by specific clearing houses and partners.

Link choices is pretty much working with a similar concept, where the policies and the rules and those types of components are being facilitated by an entity to help build best practices for users on the internet.

And by doing that, it will definitely help people create their content, host content for us to capture, and make sure that that content is available for bots, AI, and the different machines that need a better understanding of how to strengthen their processes. Organizations trying to get a deeper sense of learning, so their systems can mature, benefit from making sure that these best practices are built into our link management strategies across the internet. And not just the internet, but any artifact or endpoint that can be mined by any system or bot.
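[Editor’s note: The white-paper scenario above, a bot verifying the 40 cited links before trusting the content, could be sketched roughly as follows. The `resolve` callable and the `paper_is_trustworthy` helper are assumptions standing in for a real HTTP check and ingestion policy.]

```python
def audit_links(urls, resolve):
    """Split cited URLs into live and broken lists.

    `resolve` is any callable returning True when a URL still
    resolves; a real crawler would issue an HTTP HEAD request here.
    """
    live, broken = [], []
    for url in urls:
        (live if resolve(url) else broken).append(url)
    return live, broken

# A bot might refuse to ingest a paper whose citations don't all resolve
def paper_is_trustworthy(cited_urls, resolve):
    _, broken = audit_links(cited_urls, resolve)
    return not broken
```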

Dr. Wanda Curlee: That’s kind of interesting because with that branded URL, it almost takes out the requirement to triangulate. We’ve always been taught triangulate, triangulate, triangulate, to make sure that the information is correct. So with that branding, that goes by the wayside. I wonder if critical thinking will go by the wayside as well with that, but I hope not.

Dan Prather: I don’t think it will, Wanda. I don’t think critical thinking will go away, because these are machines.

Dr. Wanda Curlee: Right.

Dan Prather: They can do a lot, a lot faster than we can in many situations. However, the human side is going to be an intelligent agent that will always work to make that component even better.

Dr. Wanda Curlee: I totally agree with you, because I know there’s AI in, for example, health care, helping doctors make diagnoses, but it’s still ultimately the doctors’ or the nurses’ responsibility to make sure that the AI is correct. And if the AI is not correct, it learns from that.

So I see a partnership between AI and people, where people provide the value-add and the AI is the dump of data, so to speak.

So let me ask you this. Most people think we don’t have AI on our laptops or computers, but we kind of do these days. But I wonder if, in the future, we’ll need to have a more sophisticated AI on our laptops, or maybe the Dells and the HPs of the world are already building that into laptops, I’m not sure, to help us go out and find the branded links and to find things that are legitimate. What are your thoughts on that?

Dan Prather: I think most people are like you and I sitting here talking today. The companies and the people that work for those companies are always trying to better, not just the companies, but themselves. As a people, we want to be the best we can be. And so that challenge is up to us.

Now, when we look at our computers and our smartphones, as you stated before, we’ve come to a point where AI is necessary. I would arguably say that it’s already there, whether it’s on our smartphones or our computers. If I take my laptop and leave home, or leave my office, and go anywhere with location on and this, that, and the other, the computer’s already reporting information back to Google or whatever entity it is that wants to collect information about my movements. There are some things built into the computer that make suggestions for us already. The smartphones are doing it.

So with those suggestions, I might not be aware that this restaurant is closed or that one lost its license. Especially during this pandemic, we don’t know what we don’t know. We thought we knew a lot, but you just don’t know.

Now my phone, or sometimes the computer, depending on where I go, will say: well, you like to eat here, you’ve been here 13 times in the last six months. Did you know that this is closed? You might want to try this location. Okay. Well, I didn’t know that.

It’s not just looking at the data that I had on my machine or on my device; it’s also looking at other sources of information that might be related to the things that I do. And it’s helping to make a decision that it thinks I might find applicable to the things that I do and the movements that I make.

So I would say, arguably, the computers are AI. We’ve had it around for a while. We input things, it’s learning, it’s capturing that information. It’s trying to help us make better decisions. Maybe at a lower level. I’m not a NASA engineer with AI by any means. But arguably, I would say most things are pretty much AI-driven, because AI is helping us to better ourselves every day and automate our tasks.

Dr. Wanda Curlee: Yep. I totally agree with that. AI is a part of our lives now. It helps us navigate when we’re in our cars, and it starts learning what we prefer: do you prefer the highway or local roads, and things of that sort. That it can even come up with those things is kind of scary. It’s almost like Big Brother is watching. And I guess Google has actually gotten into some trouble because their “Hey Google” is actually listening in the background. So it’s collecting data, of course, but who else is listening in on that?

Dan Prather: Right.

Dr. Wanda Curlee: So let’s go back to AI. And if we have all these AIs going out there and trying to do legitimate things, we also of course have AI that’s doing illegitimate things. Do you ever see an AI that is going to take the opportunity to try to spoof the other AI into doing something that it shouldn’t do?

Dan Prather: I don’t know if AIs have a personality, right?

Dr. Wanda Curlee: Well, programmed to spoof something.

Dan Prather: Then we call those bad actors.

Dr. Wanda Curlee: Right.

Dan Prather: So bad actors are always going to try to get over, which, arguably, with the state of our nation here in the United States, some would say is happening right now: different countries, the election tampering. Who knows how all of this is happening. Arguably, it’s like a bad joke, me playing a joke on you, changing out this or changing out that.

Can it be done? I can’t say no. I’m not a bad actor. So I don’t have time to play games to do malicious things to other people. But the people that have that mindset, then with that kind of mindset, anything is possible.

Dr. Wanda Curlee: Okay. So like kids with too much time on their hands?

Dan Prather: Yeah.

Dr. Wanda Curlee: So when we talk about ethics and AI, the ethics have to be coded into the program. How does a company, or somebody that’s using AI on the internet, make sure that it’s got the ethics that you’re willing to use or not use? Because let’s face it, for the military there’s one style of ethics, and for civilian populations there’s a different style of ethics.

Dan Prather: Well, once again, it comes down to the person that’s actually creating the program, what their experiences are, what they’ve been exposed to. It could be their personality, their mentality on one side or the other; it could be their spirituality. Everything that makes up a person is going to go into the work that they perform.

Now, of course, if you’re working for a company, they’re going to have guidelines and rules and there’s going to be code reviews and everything else before something is set loose. But it still comes down to the people that put it together.

If you’re a good person working for a bad company and they want to do something malicious, it’s up to the person. We just don’t know. So can anything happen? I think anything can happen. It’s still up to the people that are behind it. But when I’m looking at, how do I know, and how do I determine if something is valid, we have to break this thing down into simplified components. We have to keep it simple. And I think sometimes we make these too difficult and too complicated.

If we take a look at the internet and take that into consideration, we know that most of the data comes from some sort of database. We know that most of our internet experiences start with search. We know that search results provide us linkages to endpoints, and the endpoints provide us with content that we consume and take in. It’s just as simple as that.

Well, what’s the AI going to do with that? The AI pretty much automates processes that we would do on a daily basis, or however often, because it’s trying to shorten our time. So if I’m doing the same thing on the internet every day, would it make sense for me to program a bot or some type of artificial intelligence agent to facilitate that process for me, so I wouldn’t have to spend hours doing it every day?

In the domaining industry, for instance, and for those listeners that have never heard that term, the domaining industry is one where there are buyers and sellers. People buy domain names, whether for their portfolio, to strengthen their search capability, meaning how people find them on the internet, or to have stronger brands.

Well, in that industry, there are places you go to look for different names. You want to get those names before anyone else does. I’m old school: I like to go to 20 pages every morning and spend five hours a day searching, searching, searching through tens of thousands of these names, looking for those little gems based on keywords that I identify with, to try to strengthen our brand’s presence.

And then there are other people that take shortcuts and have automated that same process; their agent just runs every day for one or two minutes and they’re done. It’s just, boom. I’m old school. Some things I just like to look at myself. That’s just the way it is. I have to have an emotional attachment to it. Some people would say I’m having a bromance with a brand name, but that’s just me.

So everyone has their different approach. And that’s where, ethically for me, I have to be able to see it. Can I have an AI that could go do this for me? Yes. Can the AIs that others are programming be sophisticated enough to check the trademark clearinghouse to ensure that this thing is not trademarked? Yes. Can they check a lot of different things? Yes.

Ethically, though, can they check to see if I have a spiritual connection to this name, and whether I may or may not do it because I need better information about it and the previous owner? A person could have died, and I don’t want to do it out of respect for that, or whatever. That’s going to be on a whole other level.

So I still think we have a ways to go, because as these things try to automate more of our processes, there’s still an emotional component connected to humans that limits how far AI can really go.

Dr. Wanda Curlee: Oh, that’s quite interesting. And I want to go back to something you said, Dan. You were talking about ethical situations and AI and how it can do different things.

I’m in education at the university level. One of the things professors struggle with when we’re designing courses is, if I send students out to a website, and going back to your branding of URLs, I’m wondering if that’s something that could help education in the future, especially at the university level, where students have to do research or professors are trying to point to different things. Even articles: AI has found that even articles in accepted journals are probably 60% not valid.

Dan Prather: That’s correct. You’re echoing something we spoke of earlier during this session, which is the credibility of content. A lot of educators, a lot of authors, when trying to source things in white papers and use cases, find it quick to just put the link to something in there.

The issue with that is, like you said before, companies dissolve and go away. Then the link is no good, which makes the paper null and void. Or the source may have changed because another company purchased the rights to it. Well, now, although it’s a great paper, the links are no good. All of that is among the challenges we’re dealing with today. And that’s why we’re having this discussion.

Although when we hear AI, we immediately think of the technical components of AI, there are also the ethical components, which, like I said before when we started, echo education, education, education.

And I love the way you segued into the educational component, because that is one of the reasons why we got into this business years ago. People were sending information to each other and the credibility was gone, after I clicked on one or two things here and there and they just didn’t resolve. Or I’ve had customers and clients in the past that have had great businesses, and they’ve had links to specials and discounts and everything else.

Then they didn’t renew their domain name, or they didn’t brand the thing right. And some bad actor, it always happens, somebody who knew you had a good business on it will buy it. Then they’ll have all the links you used to have pointing to sites that you wouldn’t want anyone to see.

Well, now your credibility just went down. The business would tank, because there’s no way they can come back from that or stop it, all because of a little change. It can become a nightmare. Or take URL shorteners: they do have unique keys, so that helps out. But if the person deletes the account or anything else, well, guess what? Null and void. So with brandable links, it’s not just a branded link. The way we’re looking at link management is from a responsibility perspective, because people do read to understand.

For about three seconds when we look at something, we want to know if it is a credible link. We’re going to look at that base. If we can find it and it looks credible, then we’re probably going to click on it. Which is why we took on the challenge of working with link management across the internet: because of educators. I’m big on education. I didn’t say I was great at it, but I love to support education. And I think the best way we can support it is to ensure that the engines and the things that house that information have credible information in them.

And 50 years from now, or 100 years from now, the information in the documents that are produced, the linkages that are in them, will still be credible; they will resolve. And if that can happen, we keep that cohesiveness together. Then what’ll happen is the AIs and things that are digging for that information will know that it is credible and authentic.

Dr. Wanda Curlee: Okay. So with the branding of our link management, as you call it, how do you see that changing the internet? Will the average user see a difference or what will it look like?

Dan Prather: I think the average user will see a difference. This evolution, trying to better identify what’s good and what’s bad, has been going on probably for over 10 years now. It’s just a slow crawl, because we as a people still want to see things done very, very fast. We want things now. We want it now.

Dr. Wanda Curlee: And the millennials and Gen Zers want it even faster.

Dan Prather: Oh yeah. The difference between them and us is I’m ready to eat. When I say I want a hamburger, I go in the refrigerator, take out the meat, form the patty, cook the burger. That’s a 30-minute process. A lot of folks today, if I tell a friend of mine, “Hey man, you want a burger?” He’s out the door, down the street back within four minutes with a burger, right? Because he wants it now.

We’re pretty much the same way. It’s just going to take time. But as with anything else, once something clicks, it clicks, and people will start seeing that. Like I said, it just takes time, like anything else. But the responsibility and the onus are on those that take up that challenge. And our challenge is ethics on the internet.

Dr. Wanda Curlee: So for example, when we see the ad choice that tells us it’s a credible ad, will there be some visual identifier with your link management or brand management?

Dan Prather: Yes, there will be. There will be a visual identifier. Right now, we’re working in beta with a couple of companies that are putting this stuff in documents and doing some of the things that we’re discussing today.

The way we say we have a little bit of skin in the game is that we, as domainers, take the domaining side of our experience into consideration. We spent a lot of time over the last four or five years ensuring that we secured the right domain names, ones that people can identify with, that say: this is the trust.

It’s kind of like you don’t buy a Coca-Cola from somebody that misspelled Coca-Cola or put four dashes in the three words. You buy Coca-Cola from Coca-Cola, right? You don’t go to a misspelling of Google and download something else. You download from Google.

And we expect it to be the same way. People will be able to identify us, probably not by a visual icon, but they’ll identify what is credible. When we say credible, we look at it two ways: things are either approved or they’re confirmed. Right? And that’s what we’re working on right now.

So that people can say, “Hey, boom, this is it. We’re good. If I click on this link, I know I’m going to the right source and it’s a safe link.” That’s the other thing, too: safety. We want to make sure you’re not going through four or five clicks that are loading things onto your device to try to do malicious things, or misleading you to somewhere you don’t want to be.
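[Editor’s note: The multi-click concern can be illustrated by walking a redirect chain and flagging chains that run too long. This is a sketch under assumptions: `next_hop` stands in for a real HTTP client that would issue requests and read `Location` headers.]

```python
MAX_HOPS = 5

def follow_redirects(url, next_hop, max_hops=MAX_HOPS):
    """Walk a redirect chain; flag chains that exceed max_hops.

    `next_hop` returns the redirect target for a URL, or None when
    the URL is the final destination.
    """
    chain = [url]
    while len(chain) <= max_hops:
        target = next_hop(chain[-1])
        if target is None:
            return chain, True    # resolved within the limit
        chain.append(target)
    return chain, False           # too many hops: treat as suspect
```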

Dr. Wanda Curlee: Wow. That’s got to be powerful for companies and education and healthcare, all of them, to know that you’re going to a credible source, because there is so much misinformation out there. As you said, with the customer side, and as I mentioned with doctors: can you imagine, if doctors knew that it came from a credible site, they would feel much more comfortable with that AI or with that website.

Dan Prather: And Wanda, just to make sure I’m clear, it’s not even just our site. For example, we’re working with an agency now that is in the faith-based arena, and they’re just a partner with us. So the stuff can come from their site, but it just runs through our base, where they’re actually managing the dissemination of the information that’s coming out. It’s just being facilitated and managed through our system.

But we have several partners in this. For the most part, I would say that we’re probably the least technical of all the partners in the solution. But from a solution and an architecture perspective, we wanted to ensure that ethics on the internet was first and foremost, that the brand identification was key. And that when Siri, Alexa or any other device looks for information across these sources, they were going to get credible information. Even if it reads down to the metadata level, we’re trying to ensure that these sources are credible and the information is solid.

Dr. Wanda Curlee: Excellent. That’s got to be great for Siri, Google and Alexa too. I don’t have Siri, Google or Alexa at my house except on my phone. But if I were to ask them a question, it sure would be nice that the information that they’re sending back to me, I knew was ethical and branded and had link management. So to me, that’s just amazing.

If you had a crystal ball and I know this is not a good question, but I’m going to ask it anyway. If you had a crystal ball and knowing what you’re doing with brand management, how do you think AI and your link management will change the internet in five, 10 or 15 years, or will it?

Dan Prather: Well, the operative word is “if.” The only thing we can really do is try. Like I said before, I need to believe that people try to do the right things, whatever they may do.

With writing code and going into a deeper learning state, I’ve worked with big data companies and have been across that all throughout my career. So I totally understand how deep this thing can really go from a deep learning, machine learning, analytics perspective. And I think, armed with that information, like I said, I lean toward believing that the sky’s the limit. The sky’s the limit. But that sky can only become the limit if we have proper education.

And knowing what I do know lets me know what I don’t know, because things change every day. So instead of trying to understand everything, I’m going to work with the lowest common denominator, which is ethics on the internet.

And if I can work with some great people to make a little bit happen, then maybe bigger things will happen in the future. And I think that’s where we start. The internet is an ever-evolving thing. I still think it’s young in what it can do. But I think we have an excellent opportunity right now to, I would say, change behavior and change how we look at some things.

Most people will say, “Well, that’s probably not going to make a huge dent in five years or maybe even one year.” But then six months ago, most people or most companies were like, “Nobody can work from home. Nobody will ever do it. We don’t care what you say. You can’t remote work, it’s not possible. Our clients are too important. We can’t give you security. That’s a security risk to work from home.” Then overnight, it was: you can work from home, go and get your computer. All that stuff changed in a day. In one word, it just changed.

Dr. Wanda Curlee: Yep. And even now they’re not going back to their office spaces.

Dan Prather: No. So you have to ask yourself now, you scratch your head and say, “Okay, now all the companies that were super-duper secure, where you couldn’t work from home?” Well, your super-duper-secure environment is now running through some box at home, some $59 internet security package. And you’re just working on your company’s laptop, VPNs are popping up everywhere, the whole nine yards. So we changed that footprint overnight.

I tend to believe that the way we gather information, and the way we do our own due diligence to authenticate the things we bring, not just into our homes but onto our desktops, what we review and consume and accept as correct, is going to start with the lowest common denominators. And to me, right now, no matter what search I do on the internet, everything is linked to something or somebody, and it goes somewhere.

And unless we can build some trust into that link management strategy, we’re going to always be questioning it. If I click on this, what’s it going to do to my digital footprint, my desktop, my mobile footprint, whatever? We’re going to always have that question until we have some best practices that everyone understands across the stage. So I’m going to stick with that for a while.

Dr. Wanda Curlee: So as you said, education, education, education. We have got to train everybody.

Dan Prather: That’s correct.

Dr. Wanda Curlee: And that’s not just the employees. We also have to educate the consumers as well. So education is key to everything, I believe. So, Dan, thank you very much for joining me today for this episode of Innovations in the Workplace. Do you have any final words?

Dan Prather: Of the things we talked about today, with respect to AI and innovation in the workplace, I think the key theme that resounded throughout our discussion was education. I highly encourage everyone to seek out credible education when it comes to understanding what AI is and what it can do for you. Don’t try to take on AI from an industry perspective; see what it can do for you. And if you start with that, that’s the biggest first step anyone can take.

The second thing is to always keep ethics in mind. Ethics play a major role, and what we view as good and what we view as bad also helps us determine what lens we’re going to look through. Right? Our lenses are challenged every day. But we sometimes have to stop, smell the coffee, and say, “All right, which lens am I looking at this through? The AI is doing this, but do I need to go back and double-check sometimes?”

The machine is only as good as how we program it or how it’s programmed to evolve, but we still have that gut feeling sometimes to go back, double-check, and make sure our sources are the ones the information is supposed to be coming from.

With the internet of things, AI, and all the $50 million coined terms that are coming out, we still have to remember that behind everything and every good machine there is going to be that human intelligence factor. And we can’t forget that: no matter how much we want to see all these great things work, we’ve got to remember that there’s still that human factor we have to build in over time.

Dr. Wanda Curlee: Dan, thank you for those wise words and thank you to our listeners for joining us. You can learn more about this topic and similar issues in artificial intelligence by reviewing APU or AMU blogs. Stay well.

About the Speakers:

Dr. Wanda Curlee is a Program Director at American Public University. She has over 30 years of consulting and project management experience and has worked at several Fortune 500 companies. Dr. Curlee has a Doctor of Management in Organizational Leadership from the University of Phoenix, an MBA in Technology Management from the University of Phoenix, and an M.A. and a B.A. in Spanish Studies from the University of Kentucky. She has published numerous articles and several books on project management.

Dan Prather is the CEO of TelPowered Corporation, which specializes in connection identification, link management and elevated endpoint advertising. Whether he’s consulting with small business owners, advising business leaders, or working with community efforts, Dan shares practical strategies and techniques that optimize operations, enhance sales, and increase sustainability for organizations.

He has consulted on, coached and resolved projects for hundreds of businesses across multiple industries. During his 30-plus-year career he’s worked with Arthur Andersen, Teradata Corporation, NCR Corporation and BlueCross BlueShield, as well as restaurants, not-for-profits, small businesses and faith-based organizations.
