
Should Universities Embrace ChatGPT?

Podcast featuring Dr. William Oliver Hedgepeth, Faculty Member, Transportation and Logistics Management, and
Irv Varkonyi, Faculty Member, Supply Chain Management

The widespread availability of artificial intelligence (AI) tools like ChatGPT presents some pressing challenges for institutions of higher learning. In this episode, APU’s Dr. Oliver Hedgepeth talks to professor Irv Varkonyi about issues of plagiarism and ethics when students use ChatGPT to write term papers and discussion forums. How can faculty members address these complex issues to ensure students are adequately learning the material and submitting original work?

Listen to the Episode:

Subscribe to The Everyday Scholar
Apple Podcasts | Spotify | Google Podcasts

Read the Transcript:

Oliver Hedgepeth: I am Oliver Hedgepeth, I’m a professor at American Public University and today we are interviewing Irv Varkonyi, he’s also a professor and the subject we’re going to talk about today is really chatting about artificial intelligence, AI, and especially ChatGPT. That’s one of the bots, B-O-T-S, that you’ve been reading about in the newspapers for the last year, and especially the last few months as ChatGPT and AI technology has been entering the classroom. And so, we’re going to talk about how AI is being used or not used by students. But first, I’d like everybody to know who Irv Varkonyi is. Irv, would you give us a short bio of who you are and everything? I’d appreciate it.

Irv Varkonyi: Thank you very much, Oliver. It’s a pleasure to be here with you. I’ve known Oliver for a couple of decades. I came to the University from Global Cargo Carriers and moved from there into consulting and working in supply chain, especially supply chain risk.

It’s been about 20 years that I’ve been at the University, and I principally teach supply chain logistics, and especially supply chain risk management. And I’m now essentially retired but still very active in looking at how things are developing. And, Oliver, this field of AI is just fascinating, and I can’t wait to see how the story will turn out.

Oliver Hedgepeth: Well, thank you, Irv. I know you and I have helped students for many, many years, and we’ve seen the technology change from basic standing in front of the classroom to what you and I have done a lot in the last decade, teaching online. And teaching online has its own challenges, different from standing in front of students.

But the world of artificial intelligence, especially ChatGPT, is changing, I think, the learning environment. And I know there are a lot of faculty members that I’ve spoken to who are concerned about AI and ChatGPT and how it may impact the student’s ability to learn, or to cheat. Plagiarism is one of the topics that comes up, but as you and I have seen over the last few decades, there are students who do plagiarize now and then, but most of them don’t. They are really there to learn and write a paper, and be excited about what they’ve found and discovered.

Unfortunately, with ChatGPT, if a student wanted to write a paper, all he has to do is type a question into ChatGPT, and ChatGPT would write a nice two- or three-page essay on it. So, these are the kinds of questions we face: we have to understand how students can or cannot use this technology.

Let me ask you a few questions. First of all, can you describe what you know about AI? And maybe especially ChatGPT or anything like it. What do you think about AI? What do you know about it?

Irv Varkonyi: Well, I have much to learn about AI and ChatGPT. It’s not in my area of expertise, but, just like every other human being, I’m affected by AI. It has great potential to be able to help us, to be able to help students, but it also has great threats as well, to where and how we work, and how we learn today.

Best I can understand, ChatGPT is a natural language processing tool. AI stands for artificial intelligence, and it was developed by OpenAI, a company founded back in 2015 where, in fact, Elon Musk was a part of the founding group. It’s programmed with endless amounts of text and data to understand content, and how to generate human-like responses to questions. And it responds in real time using a neural network, which I’m not sure I can understand. GPT, by the way, means generative pre-trained transformer.

So, in layman’s terms, which I’m more competent in rather than technical terms, it can take information and put it into human-like responses. So, I went and tested GPT when Oliver had asked me to join him on this podcast, and to see what it can actually do.

At APUS, we use discussion forums as a way to interact. It’s a substitute for us: since we don’t work in real time and study in real time, we have asynchronous discussion chats. The professor puts in a question, the student responds with an answer, then other students respond to that, the professor responds. And, in that sense, we are simulating a discussion. So, I put in a discussion question that I have in my current class, which is supply chain risk, and I wanted to see what GPT would do.

Well, guess what? It gave back exactly the answer that I would be grading as an A answer. And I’m asking myself, “Well, this is pretty fantastic.” And then, I went back to see some of the students’ responses. I don’t know how to say this, Oliver, but some of them were eerily similar to ChatGPT. So, either I have wonderful students, and I do have wonderful students, or we are talking about whether or not GPT will affect us, and in fact, Oliver, I think it may already be affecting us, as we speak. So, I don’t know a whole lot about it. I described to you what I do know, and I think it’s a wonderful tool in terms of what it can do.

Oliver Hedgepeth: I hear you loud and clear, and just this week I was on another conference call with some professors; we were all talking about this AI and how it impacts us. It is quite an interesting problem. You bring up a very good point. It seems like you may be broaching how this is impacting student learning. What do you feel about that?

Irv Varkonyi: Let me preface that a little bit with what the purpose is for the students to be at our university. Students at our university are already adults. These are not young people who are just graduating high school, these are adults. How are they going to learn? Well, a great many of them know a lot, but how will they learn and work in the environment that they’re going to be going into? To answer your question: what will they be experiencing in that real world? What will they be expected to do?

I think, to an extent, the university has to come closer to approximating what these students can expect. And if part of that expectation in the real world is that you have a tool like GPT, then let’s use it, whether in a number of areas such as marketing, operations, technology, or risk management. Should we not be encouraging the students to use ChatGPT as a way to respond? And maybe that’s what we’re saying: how should students respond in a way that will help them once they’ve completed their degree?

So, I think we’re going to be asking ourselves, Oliver, to change, or understand whether we’re talking about learning, or we’re talking about a different way for them to achieve their goals.

Oliver Hedgepeth: Most interesting, you raised the point that they are going to school, they’re learning technology, they’re learning risk management, they’re learning about supply chain logistics, as you teach them. And they get out, they get their college degree, and they go get a job at a warehouse, at a Walmart, or an engineering firm, a retail firm of some kind.

But what I’ve read in the newspapers recently is that these companies are already using AI, ChatGPT right now. In fact, I was reading in The Wall Street Journal the last couple of weeks how marketing firms are now using ChatGPT to create a marketing ad for a customer. And the results that I’ve read so far is that those companies are using ChatGPT to create, in two or three seconds, a nice ad for this company’s new product. And then, they modify it a little bit, they tweak it here and there, but instead of spending hours and days creating that first line or that photograph, or picture, or drawing, which ChatGPT and other AI technology can develop as well, they go ahead and do that, and call the customer and say, “Oh by the way, I’m ready right now.” And the customer’s like, “You’re not going to call me next week?” “No, I’m ready right now, today.”

So, business is already using AI, and has been for a lot of years. It has become a common tool. It’s a skillset that the student will be learning to use on his job, so, Irv, I think you’re telling me we need to think about embracing AI and ChatGPT and other similar bots like that to help build their learning skillset. Is that what you’re telling me?

Irv Varkonyi: I appear to be, as I’m learning more and gaining more, and as I said at the outset, this is not an area of expertise for me. I’ve been a professional conveying knowledge, having students absorb that knowledge and be able to apply it, but AI hasn’t been part of that. Now, we should also say that there are some areas of concern in using GPT. It has some biases. It will reflect the biases of the literature, and it may not realize it’s reflecting them. In terms of data privacy, it may or may not be able to honor that. And it doesn’t necessarily have common sense, or know how to apply it.

So, the student has to gain the skill to understand that when they’re creating a response through GPT, they have to apply their own human standards. Such as, does it have common sense to it? Are there biases that need to be explained? Those parts, the student has to do.

Perhaps a role that we have is, as students respond to questions using GPT, and as I mentioned, I think they already are, to see how they’re applying their own common sense to that. In fact, I would challenge faculty at all universities: whatever the student is providing, there should be a way for the professor to interact and determine the student’s understanding of that material, and whether or not they’ve applied common sense to it.

Now, it may be that some things we’ve relied on to demonstrate that students have learned, such as 20-page term papers, may no longer be appropriate. Maybe what we’re talking about is having a discussion about a subject, and seeing how the student can, in a matter of a few minutes, explain what traditionally we have asked them to do with citations in a term paper. Because as they go into the real world, that’s more likely what they’re going to do.

I guess I’m saying then, Oliver, that GPT seems to be more in line with what that real world is, and our obligation as faculty at APUS is to prepare those students for that real world. I think if GPT can help prepare them, then we need to find a way to incorporate more of it as a duty to those students.

Oliver Hedgepeth: That’s most interesting. What you’re describing, Irv, is that the learning environment in college is changing. And if that learning environment is changing, well, the students are already changing. The data is showing that students are very comfortable with using AI. I read a report earlier about AI as a big driver for student motivation, and that 98% of students are comfortable with using AI, 98%, I read that somewhere recently, this week. So much to read about this technology.

Well, what I’m sensing is that the students are comfortable with using ChatGPT and other AI to search out what’s going on in the world, but the faculty are not comfortable with it. You raised a good point: maybe it’s time to change how we teach students, knowing that ChatGPT is out there. I think the majority of students, the ones I know, might use it to search something out.

So, what I think you’re telling me is we need to, maybe, look back at the way we used to teach. Maybe we need to give more oral examinations or something so students can’t cheat. They can use ChatGPT to bring down information on something, then you say, “Okay, stand up, Bobby and Sue, and tell us, what do you think about this?”

We could ask the students to do something different. It seems like, as professors, we might have to get a little more personal, especially in the online world. You and I are working in 100% online world, and you’re not really with them live a lot, once in a while you have a Zoom meeting with them. I think we might have to change how we teach. What do you think about that, Irv?

Irv Varkonyi: I’m very much thinking about that in the same way, Oliver. And I think that universities also have to look at that. Since I’m still a bit old school, I still do research and citations, and just the other month I saw a short document from a professor, Peter Jacobsen of Ottawa University. The title of his essay was “Artificial Intelligence Will Change Higher Ed for the Better,” with the subtitle, “If Professors Respond Correctly, ChatGPT Can Be a Friend, Not a Foe.”

And the essence of what he’s talking about is that universities, for the most part, have tended to be stagnant and slow in terms of moving forward. Whereas GPT is a much more dynamic tool that moves things much more expeditiously than universities. So, in that sense, Professor Jacobsen is stating that if higher education isn’t the fastest moving industry around, then a tool like GPT can in fact help them.

So, GPT can in fact enable universities to move forward faster and improve, and what Professor Jacobsen is also saying is that it can provide a greater value proposition to those students. That is a possibility.

There was also another document that one of our colleagues at APUS had identified, about the roadmap to becoming an AI university. What they’re talking about there is how universities are building AI into their curriculum in many ways, not just focusing on STEM, but focusing on many areas, because there’s a global tech talent shortage. That’s some 85 million jobs that we’re going to need to fill by 2030. We may come up short if we are unable to train adults. Universities have an obligation to embed AI much more within the curriculum, not only in the direct areas like technology and operations, but also in terms of how universities conduct themselves.

One more thing, if I may, on that. I do some mentoring here in Fairfax with high school students in career and technical training, and the data shows us that some 65% of today’s students are going to be entering jobs that don’t exist today. How are we going to prepare those students for jobs that we don’t even know yet? That’s a huge amount.

It is possible that AI and what GPT does offer a way to expedite training, whether you’re in high school or in college. So, I think we should be embracing it. The question will be whether our university and other universities can embrace it productively and accurately.

Oliver Hedgepeth: You’re focusing on changing how we teach as faculty and administrators, how we design courses, and how we design that interaction with students, between students, and with us. And what you bring up is, I think, a critical point, a turning point for universities today. I do believe, here in 2023, we’re on the edge of a transformation like we saw in the ’70s and ’80s.

I remember the ’70s and ’80s. I’m a math major, I have a math degree, and you could not bring a calculator into the classroom. If you brought a calculator into the classroom, the teacher would point at you: “Get out, you get a zero for today’s work.”

Well, it took a few years before the faculty and the universities realized that with these calculators in the classroom, yes, students don’t need to understand that two plus two equals four, but they can analyze complex equations. They can analyze, in the classroom, how to send a rocket ship from Earth to the moon and do calculations that they usually can’t do. So, the school system, all universities, had to change their way of thinking about how to use that technology, those calculators.

I think what you’re bringing up, Irv, is that we’re on the brink of a change that’s happening with this new technology. I think, today, many faculty are not comfortable with that change, but the students have already changed. We may have to go into all our classes at our online university, and at the face-to-face universities, to redesign the written assignments and the oral discussions, maybe have some more oral discussions. I think the entire group of people who design courses, or give us procedures to follow to design courses, those procedures have to change now. I really believe you’ve hit on something, Irv, that we are at a point of change in education.

I really appreciate that, Irv. Right now we’re going to take a break. Today we’re speaking with Irv Varkonyi regarding how AI, and especially ChatGPT, is or will be impacting students and university faculty.

I’d like to ask you a question that’s different than we had before. When students are using this AI technology such as ChatGPT, does this impact plagiarism or the ethics issues in our schools? Do we need to rethink plagiarism just because of ChatGPT?

Irv Varkonyi: Excellent question. Let’s define plagiarism as the university would define it. We have a policy at American Public University: plagiarism is taking material that is not your own and crediting it as your own. Plagiarism also involves not referencing that material. So, essentially, in more modern terms, plagiarism is a copy-and-paste process. You’re copying something, you paste it into your submission, your response, and you claim it to be your own. That is plagiarism.

We have developed tools for catching such plagiarism online, and one tool that has become popular is called Turnitin. Turnitin evaluates every submission that it gets from the students, and then gives us a score. The student can see the score, and so can the faculty member. And as the score rises, to maybe 20% or 30% or more, that tells us, more and more, that the paper was essentially copied and pasted.

That was an excellent tool, and since the students knew that we were watching them through that tool, we felt that gave them an impetus to try and create more original material. Well, what I’ve been noticing, a trend over the past maybe year, two years, not more, maybe less, has been that those Turnitin scores have become lower, and lower, and lower.

I submit that students have figured out, or something has figured out, that they can take material which could be copy and pasted, but now is not showing up as such. So now, I have additional due diligence in terms of determining whether or not that material was or was not original. And I might do it by asking certain questions in a way that will be hard to copy and paste. And then, based on that answer to me, that can tell me that the student actually did create that because they explained it, or I don’t think the student understood what they had said.

So, we are no longer in that comfortable world where we can accurately determine what is plagiarism or not. If we can’t do that, well then, the question comes back, as you just asked me: what is going to be our viewpoint? How are we going to look at this as an ethics issue?

When a question is asked, ChatGPT will come up with an originally created answer. It is not taking something from somewhere else; in other words, ChatGPT does not, essentially, copy and paste, but gives a specific answer to a specific question. Well, is that plagiarism? By our definition at the university, it’s not plagiarism, because it’s different. It is plagiarism in the sense that the student is not doing the work. Is that ethical or not? If that student is going to get a job at a company after earning their degree, and they’re going to use ChatGPT to create things, are they being unethical at the company? That’s what their company wanted.

So, are we at the university to say, “Well, it’s not ethical because you had GPT do it,” to which the student will say, “Yes, but that’s exactly what my employer wants me to do. Why can’t I do it?” Now we are getting into a very complex area of whether it’s ethical or not. I think, as we said before, we need to find ways to test the student’s understanding, and not really get caught up anymore in whether it is or is not plagiarism. And our policies, while I can’t say for sure because I think this needs a lot of discussion, have to find a way to say that the student did something here, that they learned something, even if the words may have been written in some other way, and that if their process of acquiring such knowledge is going to help them in their future work, that should be part of our policy.

I’m sure there are a lot of implications to that, but I think we have to change how we define it, we have to change what we’re going to accept, and we have to change what we think the student will be able to do. So, that’s not exactly a yes or no answer, Oliver, but I think that whatever we have now, we know enough to say it’s not appropriate, and something different is going to have to be put in its place gradually over time.

Oliver Hedgepeth: Yes, Irv, that’s a very good discussion point. And I have to agree with you: whether it is plagiarism, or whether we need to change our policy on plagiarism, is not a yes or no situation. It is something that has to be discussed by all universities, whether we change the words in our faculty handbook that say, “Here’s what plagiarism is.” Personally, I don’t think we need to change those words. What I think, because of this newest addition to search technology, AI technology, is that we just need to figure out a way to be comfortable with it. It is very clear that AI, ChatGPT, and others like it have become a kind of virtual assistant to the student. I’m thinking now that maybe we teachers need to use ChatGPT ourselves to see what’s going on. The students are using it, the data is out there, so we need to understand it, and maybe we need to redesign our courses to use ChatGPT.

I can see that our assignments, like you said, a 20-page paper may no longer be useful to teaching students research and learning how to do research. Maybe we should ask them. Maybe we should place AI and ChatGPT into a learning assignment, “Okay, this is week three, go to the ChatGPT and find out what’s the hot issue in 2023, in February. And let’s discuss it.” And that’s the key, “Let’s discuss it.” So, they would bring down something from ChatGPT that would give them, here are the issues, and you say, “Okay, Bob, Sally, that’s really great. What do you think about what you found?”

It seems like it’s not much different than what we do right now, but it is different because the ChatGPT can give them an answer that, like you said, you tried it, looks pretty darn good. In fact, I’ve been concerned with the policy change for plagiarism at our university, and I had a student, and then I did it too, use ChatGPT to write a new plagiarism policy for our university. And doggone it, in about five seconds, I’m reading on my screen, the seven points of the ChatGPT-created plagiarism policy for our university. And I read it, and I sent it out to other folks, they read it, and they said, “That’s pretty good, Oliver.” And then, I told them I didn’t write it. But it did hit all the key points.

So, ChatGPT is here to stay. It is common for the students, but a lot of the faculty are not comfortable with it. We were raised with the face-to-face classroom before online came along, but we’re in that online world today. So, we need to figure out how to redesign our courses. And I don’t think we need to change the policy on what is plagiarism and what is ethics. It’s pretty clearly defined, and I don’t think it changes.

The students can cheat by going to their neighbor and say, “Write a paper for me.” In my early days of school, the plagiarism ethics issue was those people who would go hire someone to write a paper for them. I remember when I worked around Washington D.C., it was very easy for students to spend $500 and get a nice term paper written, or $200 to get a nice term paper written. There were advertisements out there, it’s in the newspaper, “You want a term paper?” And it would be original. There would be somebody out there writing an original term paper for you.

So, I don’t think it’s changed much in terms of that possibility of cheating, but this technology is similar to those calculators. This now is like those calculators of the ’70s and ’80s. We have to embrace it. So, I think, Irv, we’ve got to embrace this new technology. You did mention Turnitin. This week, I was in an AI session and listened to a programmer who demonstrated a term paper that had been written by a student using ChatGPT for part of it. When you hit the button to check the paper, Turnitin underlined all the parts that came from ChatGPT. Turnitin is working on, right now, an ability to, evidently, take that paper from the student, pull it, and ask ChatGPT to write that paper again, and it comes up with an identical paper. So, if you ask the right question, the title, you can get it twice or three times, evidently. So Turnitin may be able to do what it was supposed to do in the early days. And as you said, the percentages of plagiarized items in papers were getting lower, but I’m thinking Turnitin is going to partner with ChatGPT, or seems to be partnering, to make sure that the cheaters don’t get away with it. But I do think we have to rethink, Irv, how to use it as part of our toolkit in the classroom. What do you think?

Irv Varkonyi: I agree with that, and I think it’s great that Turnitin can find that as a tool. And this discussion has inspired me to think about things that I can do until such time as Turnitin can help us. For instance, if I ask a question in a discussion, and more than one student decides to take that question and put it into GPT, then I should be able to see exactly identical responses from more than one student. When I see them, then I suppose what I can do with those students is say, “Your answer, Bob, is the same one that Sally gave, and the same one that Jesus gave. Please redo it on your own.” So, that’s something that I can do. But I do think that we still need to use it as a learning tool, and I think the issue of plagiarism, as important as it is, is changing, as you said, from what could be done before.

That would be my view on it: embrace it, and show students both sides of it. It’ll be our job to show them the biases it may have. But nonetheless, go ahead and use it to help you respond to a question, but it should not be your only answer. Use it to guide you, and then respond with your own original work, so that you can demonstrate to me, who’s grading you, that you’ve in fact acquired that knowledge. I think that’s how I would now respond, Oliver.

Oliver Hedgepeth: Okay. Irv, I want to thank you very much for joining me today in this really exciting topic of artificial intelligence, and especially ChatGPT. Do you have any last words you’d like to leave for our listeners? Any final words?

Irv Varkonyi: I appreciated the opportunity to engage on this with you. I think we should be embracing it and helping our students, and guiding them how to use AI. But because this is a complex subject, I think it would be appropriate to bring together faculty in a conference and be able to look at how all of us can guide our students and guide each other toward the future. And I would hope that we can do that sooner rather than later. Thank you, Oliver.

Oliver Hedgepeth: And thank you, Irv. And to you listeners out there, thank you for joining us. We have some exciting podcasts coming up, so stay tuned and stay well.

Oliver Hedgepeth

Dr. Oliver Hedgepeth is a full-time professor in the Dr. Wallace E. Boston School of Business. He was program director of three academic programs: Reverse Logistics Management, Transportation and Logistics Management, and Government Contracting. Dr. Hedgepeth was also Chair of the Logistics Department at the University of Alaska, Anchorage, and the founding Director of the Army’s Artificial Intelligence Center for Logistics from 1985 to 1990, Fort Lee, Virginia.
