
Navigating the Age of AI: Its Impact on Education, Arts, and the Future of Work

Podcast by Dr. Bjorn Mercer, DMA, Department Chair, Communication and World Languages and
James Lendvay, Faculty Member, School of Arts, Humanities, and Education

APU’s Dr. Bjorn Mercer and James Lendvay engage in a compelling dialogue on the impact of AI in education, arts, and employment. They explore academic integrity, the authenticity of AI-created art, and the transformative role of AI in the workforce, shedding light on the complex interface between AI and our everyday lives.

Listen to the Episode:

Subscribe to The Everyday Scholar
Apple Podcasts | Spotify | Google Podcasts

Read the Transcript:

Bjorn Mercer: Hello, my name is Dr. Bjorn Mercer, and today we’re talking with James Lendvay about AI and education, the arts, and the future of work. Welcome, James.

James Lendvay: Hello, Bjorn. Thanks for having me back for another one of these topical discussions.

Bjorn Mercer: Yeah, definitely. No, this is extraordinarily important. There’s a lot of podcasts coming out about AI. I think there’s a lot of confusion, honestly. It’ll take a few years for AI to mature and for people to grasp AI and how it’s going to impact them. But the first question I have for you is, what are some of the challenges that have presented themselves recently in education, in relation to AI?

James Lendvay: I did want to follow up quickly on what you said there to start off and give a general overview, though to my understanding — I’m not an expert in the technology of AI here. What I’ve been seeing in my work in education is some of these tools, like ChatGPT, being used by students either to create seemingly original work or for other functions, like paraphrasing, perhaps, or helping with language. But this is really a new phenomenon.

This stuff just came on the scene recently — it’s been what, maybe six months since ChatGPT was released. And one of the things that got me thinking about this was seeing my friend’s son using this Genie app about six months ago to answer a question that he could not come up with an answer for. He immediately went to the phone and said, “Okay, the genie solved this problem for me.” I was really taken aback by that, and it produced a beautiful answer, and I thought, “Well, this is really going to be transformative.” And now it’s starting to trickle into education, especially the work we do in online education.

Bjorn Mercer: The number one thing that our students — and we ourselves — have to do is learn, and then create information. But then what happens when an AI is creating it for you? So what do you do with that? Is that step one in learning? Because if you just take that information and use it, that’s not your own. So that’s an issue.

James Lendvay: Right. We’ve been talking a lot about how this comes into play in terms of what we call academic dishonesty. It’s really not plagiarism for a student to type a question into ChatGPT and then just copy and paste the answer, because it’s not something that was published by somebody whose words they’re then passing off as their own. But they are taking some “thing’s” words and passing them off as their own. And so, there is that dishonesty element. One of the things that’s come up is that anything generated by something other than the student would seem like academic dishonesty. But what surprised me — and I guess I should’ve seen it coming — is that some students are using programs like Grammarly or QuillBot to help improve their writing, not necessarily to write it for them.

So we use this system called Turnitin, which runs a student’s work against a whole library of work to find out if there’s enough matching content to say, “Okay, this was probably plagiarized,” and it gives a percentage report. And now Turnitin has added an AI-detecting function. I don’t know how well that works, but it has produced several 100% matches. So now we’re going back to students and saying, “Okay, can you maybe try to explain why we have a 100% match here on the AI? We don’t want to make any assumptions.”
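[Editor’s note: Turnitin’s actual matching and AI-detection methods are not public. As a purely illustrative sketch of the kind of percentage similarity report described above, the following toy Python example scores a submission by word n-gram overlap against a single source; the function names are hypothetical and this is not Turnitin’s algorithm.]

```python
def word_ngrams(text, n=3):
    """Return the set of word n-grams in a text (lowercased, whitespace-tokenized)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_percent(submission, source, n=3):
    """Percentage of the submission's n-grams that also appear in the source."""
    sub = word_ngrams(submission, n)
    if not sub:
        return 0.0
    return 100.0 * len(sub & word_ngrams(source, n)) / len(sub)

# An identical submission scores 100%, echoing the "100% match" reports
# mentioned in the conversation; unrelated text scores 0%.
```

A real system compares against a large corpus with far more robust matching, but the output — a percentage of matching content — has the same shape as the report the speakers describe.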

And then some students are saying, “Well, I’m using some of these apps to help because I’m not really confident in my English, or I want to clean it up a little bit.” That seems like a fair use, especially for somebody whose first language isn’t English; I can understand why somebody would want to do that. But we also have to step back and consider: if a lot of the work we’re doing, especially in the humanities, is to teach writing, this really nullifies that whole process. And the same, to some extent, with the critical thinking skills that we want to build as well.

Bjorn Mercer: It’s extraordinarily confusing. When I think of literacy in the US, many Western countries, and other countries, basic literacy rates are near universal. But with adult literacy — more difficult concepts, problem solving, critical thinking — that’s when we start seeing rates go down, because a lot of people have a hard time. Critical thinking is hard. Problem solving is extremely difficult. Information literacy is difficult. And then you have students who start using AI as step one of their research — which, again, is fine if you’re using it as part of a process in which you’re still going to edit it and make it your own. But if students use step one as their only step, then they will never develop their critical thinking — and they won’t develop their writing. So I think AI is adding another complexity to the contemporary world. As humans, it takes us a very long time to develop, but what will happen when that difficult, painful development period is cut out? How many people will develop?

James Lendvay: Yeah, I think there’s two parts to the critical thinking aspect. Now, the writing element is really interesting, because if AI’s going to take over a lot of our writing functions, then the question is, does it need to be taught? And that’s an interesting concern on its own as students are going to say, “Well, look, I’m not going to do much writing in my profession. And any writing there is to do is going to be done by AI. So why do I have to bother learning how to write well?” A lot of people have been making this analogy to other kinds of tools that have come out, especially calculators.

So years ago, it was, students shouldn’t be using calculators, they won’t learn how to do math. Well, okay, calculators are just ubiquitous for our smaller tasks especially. But one question I have is whether this is really a tool or if it’s something different, and if it’s not being used to just supplement or help, but in fact take the place of, and especially that’s what the whole idea of generative AI is, it’s that it’s creating something. And yes, it can, I think be used as a sort of assistant, but I think it also can be used to do much more than that. And that’s where people really seem to be legitimately concerned.

Bjorn Mercer: So what are some questions that people are asking about AI and the arts and entertainment sector?

James Lendvay: That ties, I think, to a question that I’ve had and have been posing to students recently: what does it mean for something to be authentic? If we’re talking about authentic art — if something looks really beautiful, but it’s a fake Picasso, does it matter? It looks nice, it’s beautiful mounted on my wall, so who cares who painted it? And the same thing with music. We’ve been doing this for a long time as well; technologies have existed for a while now that improve a singer’s performance. So is that really the authentic voice that I’m hearing? And does that even matter? Lip-syncing has been used for years on stage — I’m not getting a real performance. So these questions about what we take as real, whether we care about that, what authenticity means, and how that plays into all this are really interesting.

Bjorn Mercer: It is; art is tough because artists have been copying for, well, forever. You can go super nerdy and go way back to the Renaissance and the Middle Ages, where composers would literally lift entire pieces and drop them into their own — and then hide them a bit. But today, with copyright laws, what does it mean to be original? Because AI that creates art is still copying existing art in some way. So then is it original? And is the original person getting credit? Typically, no.

James Lendvay: Yeah, two things come to mind there. There was this recent case with Ed Sheeran, I think, who had that copyright lawsuit. And it was great — he brought his guitar into court to show that there are so many songs that are very similar. What are you going to do? There are only so many options in terms of creating a song, and some of them are going to sound a little bit the same. And I was even reading an article earlier today about what we’re going to do in terms of regulations with AI, and copyright is one of those issues.

Bjorn Mercer: Music is difficult because, in a scale, there are seven notes, if you don’t include the chromatic notes. Go up and down three octaves and you essentially have 21 notes to deal with — maybe 28 if you use four octaves. And we’ve been using those same notes for 500 years. It’s amazing how today Ed Sheeran is using the same notes as Bach, who’s using the same notes as Monteverdi, as Beethoven, as Rachmaninoff — and you go on and on.

All Western music has been using the same structure for such a long time, but then there are also people who will try to copyright little elements, and if somebody uses one, they’re like, “Nope, nope, that’s mine.” And it’s a universal element. So that’s why, with AI, it is difficult. Because with words, there are only so many letters, but there’s an infinite number of combinations you can put those letters into. I could see AI just writing all your basic articles. Who needs a human, when an AI can spit out an article that doesn’t require a lot of detailed thought? An editor just runs through it and you publish it.

James Lendvay: These last couple of weeks I’ve been sharing an article I found, published by Vanity Fair, and it was really interesting, because it set up a scenario where you come home from work and can just speak into Siri or whatever and say, “Look, I want to watch a TV show in the style of Seinfeld, or whatever movie, and I want it to have this kind of soundtrack,” and it’ll build that for you almost on demand. I don’t know if that’s a realistic expectation for AI, but it would be really interesting. And then later in the article it said, “If you don’t think that’s possible, just consider where we are with entertainment these days. What does it take to build one of these Marvel scripts?” So it doesn’t seem like it’s really beyond the capacity of what AI can do now or will do.

Bjorn Mercer: Oh, for sure. Yeah. That right there is something, not today, but in 10 years or even 20 years, I could definitely see that coming up. Today, we’re speaking with James Lendvay about AI in education, in the arts and the future of work. The last question, James, for our conversation about AI is, what are some of the questions going forward about impacts to the workforce?

James Lendvay: It ties, I think, to some of the things we’ve talked about already — whether AI is a tool or something else, whether it’s going to replace people altogether. Decades ago, this was a concern where people said, “Look, robots are going to take over manufacturing jobs, and what are people going to do?” And there was a similar question then: are robots tools?

Well, in that case, yes, I think they were replacing people’s physical actions, but they were tools for automakers: “We have a new tool, so you can go home. We are going to have a robot do this for us now.” And there’s been a lot of talk about this in philosophical circles going back a long time — 20, 30 years. David Chalmers is a famous philosopher who works in philosophy of mind, and he’s talked a lot about this idea of offloading work from our minds to what he calls an extended mind.

And so, we’ve done this with calculators, as we mentioned. He uses a really interesting analogy with just a pen and a piece of paper, or a notebook, or a calendar — we use those to enhance our memory. Those are tools, but they aren’t thinking for us in the way that AI seems to be. So I’m really not sure about the analogies people are making in saying this is a tool just like other tools, and how those tools are then going to be used to supplement what we do in the workforce — or, in a lot of ways, maybe just replace us.

You’ve probably seen in the news there was — or is — a writers’ strike in Hollywood, and I don’t know all the details, but I know one of the sticking points was that they have to build in some protections so that AI would not take over the jobs of the writers. And who knows if writers are even using AI and just saying, “Okay, I just produced an original script” — you would never know. So they’re doing what they say is a month’s worth of work in five minutes. There are a lot of interesting concerns there. And the same thing, actually — I wanted to mention this earlier — a lot of people have somewhat jokingly said, “Well, if students are going to be using AI to produce their work, then as educators and teachers, we’ll just use AI to grade their work.”

And that was a bit of a running joke for a while, but I actually think it’s not just a joke or some kind of whimsical thing. There’s been a lot of talk recently about using AI to help grade work, to automate it. AI can look at a student’s writing and say, “Hey, maybe you ought to shorten some of your sentences, or your thesis could work better this way.” And so, that offloads a lot of that cognitive work. But at some point, what are we going to need to do if AI can perform all of the functions that we do? That’s going to be a real concern.

Bjorn Mercer: It’s even hard to know where to start. It’s like, “Well, let’s do parts one through 50 on our analysis and predictions.” As far as the workforce goes, it makes me think — and this is my own personal opinion, so I have to put that out there first — I tried to watch some of “Lord of the Rings” on Amazon, and I’ve been watching some of “Citadel” on Amazon. The scripts those two shows have used, and the plot points — if an AI wrote them, I’d be like, “Yeah, that totally makes sense.” But I’m assuming a human wrote them.


And especially with “Lord of the Rings,” I shrugged my shoulders: “This isn’t anything original.” Again, I’m being highly critical here, but there’s nothing compelling about the progress of that show that made me really believe and want to be engaged. And with “Citadel,” I like the actors, but all the plot points are just so — well, of course this is what was going to happen, and you know what’s going to happen next, in your typical James Bond type of thing. So there’s a lot of very mediocre art out there that — this sounds terrible — AI can fill the void for, and be cheaper.

James Lendvay: Right, that’s always been a concern: “Okay, your art, your music — it’s formulaic.” And that’s exactly what AI does. It’s following a formula that, to my understanding, it’s collected from all this data, and pushing out whatever you want it to — a script or anything else. But again, that really just mirrors what we do anyway. We take in our influences and our experiences and spit out whatever work we want, so it’s almost fundamentally the same thing.

Bjorn Mercer: I could see it writing every sitcom moving forward. A sitcom doesn’t have to be brilliant stuff — you watch all the sitcoms and they’re like, eh, it’s okay, though there’s something about certain sitcoms that are good. “Two and a Half Men” — to me, there’s very little redeeming about that. Anybody could have written it, and honestly, anybody could have acted in it.

James Lendvay: Well, I really like “Two and a Half Men,” full disclosure. But take something like “Seinfeld” — there is so much in there that just gets at our foibles and our little problems. And I’m not sure AI, at least right now, could even come close to mimicking something like that. So maybe that’s a case of: AI can do okay work if you’re okay with that, but it’s not going to do really great work, at least not right now.

Bjorn Mercer: So as far as humans go, where do humans go from here? Because, in the arts, we’ve reached a point where a blank white canvas is a very famous piece of art, or an all-black canvas, or textured art — whatever — is now art. And well, it is; it’s more performance art, but it’s still art. So then AI is mimicking everything.

James Lendvay: Art has always been something where you need a little context. What is the artist going after? Once you know that, you can say, “Okay, now I see where they came from. Now this makes sense.” But the motivation built into making the art seems like it would be uniquely human — some struggle, some problem that I had that I wanted to express in art. And what does that do? How does that affect what we call good art? Maybe that’s where AI could never really replicate the experience that we have of making art.

Bjorn Mercer: No, it’s true. And I think AI will create a lot of art, and some people will like it and consume it, and others will not — rightfully so, because that’s art criticism. I have an entire podcast on the aesthetics of art, and there is no good art, there is no bad art, there is just art. Absolutely wonderful conversation. Any final words, James?

James Lendvay: The only thing, maybe to tie back to education and the workforce: I think it’ll be really interesting going forward to see how the needs of people staying employed — especially people doing creative work — are going to be met, and what kind of job training we’re going to need or want to provide in higher education. A lot of people are optimistic, saying, “Well, this just presents some challenges, and maybe there’ll be ways that we can make AI, again, a useful tool.”

And then of course, there’s the other side saying, “Well, okay, this is the end of everything we know, or the beginning of the end of everything we know.” I think some of that remains to be seen, but there are very interesting philosophical questions here for sure. So for me, if nothing else, this gives me fodder to talk about in my courses right now. I’m optimistic in that sense, at least for the short term.

Bjorn Mercer: Definitely, and I completely agree. And so, absolutely wonderful conversation. And today we were talking to James Lendvay about AI in education, the arts, and the future of work. My name is Dr. Bjorn Mercer and thanks for listening.

Dr. Bjorn Mercer is a Program Director at American Public University. He holds a bachelor’s degree in music from Missouri State University, a master’s and doctorate in music from the University of Arizona, and an M.B.A. from the University of Phoenix. Dr. Mercer also writes children’s music in his spare time.
