09/10/2024 | News release
Are today's college students turning in essays generated by ChatGPT? Are professors using AI to grade students? Has artificial intelligence revealed a better way to learn? And is a university education in danger of becoming extraneous?
Julie Schell, UT's assistant vice provost of academic technology and the director of the Office of Academic Technology, has been an expert on the use of technology in education since the 1990s, when the cutting edge was PowerPoint. Since then, she has held positions at Yale, Stanford, Columbia and Harvard. She was recruited from Harvard by then-vice provost Harrison Keller to be the founding director of OnRamps, UT's program to help Texas high school students improve their college readiness through technology-enhanced education. During COVID, as assistant dean of instructional continuity and innovation in the College of Fine Arts and an assistant professor of design and higher education leadership, she helped lead the college's move to online learning.
The State of Play
The easiest of the above questions to answer is the second one, about grading students: "A faculty member should never upload a student's writing, get feedback from AI and then provide the AI's feedback to the student. We have a hard line on that," Schell says.
"AI is very seductive because it's so good. It can save you so much time, and you're surprised by the quality of the response. But it's also full of these paradoxes: It can teach you a lot, but it can also teach you bad information, so you can have negative learning. It can help you be really creative, but it can also diminish your creative voice by making you be more like other people. Faculty should never use it as a proxy for their own feedback."
"It's hard to make sense of this whole world," she admits.
Schell thinks of AI as having two kinds of use, transactional and transformational, and whether its use is good or bad in either case depends on the situation. "There are times it's okay to use it as a transactional tool: 'I need a list of ideas for planning dinner for the week,' or 'I need a list of ideas for a meeting coming up,' or 'Help me brainstorm research ideas.' Those are low-stakes transactions, and we need to help students understand when transactional use is OK."
But in this transactional category, she once experimented with using AI to write letters of recommendation, something that can be a time-consuming task for those in academia. "When I read what it said, it wasn't fair. It wasn't me. It didn't have the flavor of how I really thought about the student, and I didn't think it was fair to my student for me to use that output," she says. "That's a moral bridge I would not cross." She adds, "It takes about 15 hours of experimentation to realize AI is not as good as you. It's good, and it can do things faster than you can, and it has a vaster knowledge base than you do, but it's not as good as you because it doesn't have your voice. It's not you." That's equally true for faculty members and for students. "I want to see my students' voices and identities show up in their work," she says.
Then there is the transformational use of AI. "Let's say I input a prompt for a journalism class that I am teaching: 'Help me write three learning outcomes for burying the lede.' And then it spits out the three learning outcomes. If I just take those, copy them and teach from them, that's transactional use. Transformational use is to take that output, look at it, evaluate it, critique it, see what doesn't fit, edit it, transform it and use it more as a scaffold, so that you transform it to integrate with your perspective." In this example, transactional use is bad; transformational, good.
Ban or Teach?
When it comes to the use of AI by students, the issue is more nuanced. "There are contingents of educators who are very against the use of AI [by students]. There are also some institutions that bar the use of AI." Schell's view and that of her colleagues in the Provost's Office is that "the cost of policing students to never use this incredibly relevant, timely, transformative tool is more than the cost of creating a culture of academic integrity and honesty."
Where AI is concerned, the horse has left the barn. Ignoring AI or banning its use does not prepare students for the world as it is now, much less the world of the future. Students need and expect their higher education institutions to help them engage ethically with AI, Schell says. "Telling them never to use it is not helping them with that."
Instead, she believes in putting that effort into helping people become "the architects of their own ethical frameworks." "When they leave here, there aren't going to be bars on these things, so the critical thinking, the decision making, misinformation, bias - those are all baked into the AI tools. Our students are going to thrive in environments where they're prepared to address that ambiguity."
That said, generating an essay with an AI tool such as ChatGPT and passing it off as one's own is prohibited as an academic integrity violation. "There is a clear prohibition on doing that, but it wouldn't just be for writing, it would be for generating code or preparing a presentation. But I think academic integrity is not a 1 / 0 on something like that - is it cheating or is it not cheating?"
Schell knows the difficulties of AI firsthand because she teaches a class in design pedagogy in which she introduces AI. She does so in phases: "First, I talk to my students about AI, and I make it really clear that if they use AI, they have to cite it, and I show them how. They need to document how they are going to use it."
But she recalls one instance that was instructive. "We were making user personas, and a student turned in one that was a really great graphic. I said, 'You did a great job. I really get the sense of the user by the image, and it feels very connected,' and they said, 'Thanks! I used AI!' I was so surprised because I had made it really clear that that was not how to go about AI use in our class. But in that moment, I realized my students needed more help. It needs to be a conversation. It's not a 1 / 0. It's not 'Follow my rule.' A UT-quality learning experience is about helping them become architects of their own frameworks and engaging with AI effectively."
On the second project, she actively encourages them to use AI but introduces UT's AI-forward framework, which includes six concerns about AI that students should always consider: (1) privacy and security; (2) hallucinations, when AI states falsehoods as fact; (3) misalignment, when a user prompts an AI application to produce a specific output but the application produces an unexpected or undesired result; (4) bias; (5) ethical dilemmas; and (6) cognitive offloading. Of the last item, she explains, "If we're not careful, if we give AI too much, then we can lose cognitive capacities, so we have to be careful and judicious about what we decide to offload to it."
Finally, in the last project she requires her students to use AI. With this phased approach, she hopes to build both skill and savvy about AI's limitations.
The Upside: Introducing Sage
Asked about the upside of AI in education, Schell says, "I'm getting goosebumps talking about this. One of the things I'm most excited about is an AI tutor we're working on named Sage," a custom GPT (generative pretrained transformer), also known as a custom bot.
ChatGPT and other text-based AI tools used on campus, such as Microsoft Copilot, are large language models: when you ask a question, the model finds where the information lives and then answers it. With a custom GPT, you can set your own limits on what it is trained to ask and to draw on. "You can train it to ask the kinds of questions you want to ask," Schell says. "You can train it to have resources that you want to respond with."