Process vs. Product: AI and the Future of Learning
December 18, 2025
If you are a parent preparing to send your high schooler off to college soon, you have likely been reeled in by recent headlines about AI on college campuses. Many institutions, from highly selective Ivies to massive public universities, have announced campus-wide AI initiatives and invested in partnerships with leading AI companies. Meanwhile, a flurry of stories has emerged about student use of generative AI, professors getting called out by their own students, and general public outcry.
If the current media frenzy can tell us anything, it is that the situation is polarizing and still in flux. On one side are the savvy tech moguls and starry-eyed university administrators shouting from the rooftops that the future is here! On the other, the beleaguered educators ringing the funeral bells for higher education as a whole.
Strong feelings are, at base, indicators of what we value. And the polarization here shows that there has been a schism in belief about what the ultimate value of higher education is. Is it about the product or the process? In the former view, higher education demonstrates its worth through its end products and its return on investment: the GPA, the degree in hand, the high-earning job, the financial security. In the latter, the ultimate value of higher education is in the learning process itself, and the self-actualization that comes along with it.
The AI bubble has slipped right into that schism, and as it expands, the rift only deepens.
The AI hype bubble and higher education
While university administrations are falling over themselves to make comprehensive deals with the biggest AI purveyors, faculty are often hung out to dry: not included in decision-making around changing AI policy, and, for the most part, left to fend for themselves in the classroom, where the effects of these policies will actually show themselves. While many of these AI partnerships do come along with introductory seminars and professional development for faculty and staff, they rest on the assumption that everyone at the university will, inevitably and universally, implement AI into every aspect of the educational experience.
This urgent and frenetic push towards the "inevitability" of AI is a recognizable phenomenon. In a recent article, researchers Dr. David Gray Widder and Dr. Mar Hicks point out the ways this pattern of technology hype repeats itself throughout history: "A key strategy for a technology to gain market share and buy-in is to present it as an inevitable and necessary part of future infrastructure, encouraging the development of new, anticipatory infrastructures around it." Additionally, this hype rewards (or claims to reward) those who jump on the runaway train early, before the technology proves itself to be the stable, necessary thing it purports to be, and before it can be thoroughly tested and critiqued.
While the effects of AI hype have been widespread across industries, higher education has ended up as one of AI's ideal marks. It is also one of the riskiest. We risk a lot when we surrender the educational process to a tool that was not built for it. ChatGPT and the other AI tools being rolled out on college campuses were not originally designed or optimized for education. Their effectiveness as pedagogical tools, or as more involved educational collaborators, has not been robustly tested, which means that the current and near-future cohorts of college students are guinea pigs in an experiment they did not necessarily consent to. This is a significant point of criticism from many educators.
As institutions turn further towards AI integration, many faculty are voicing their concerns that student AI use shortcuts, and shortchanges, the learning process in favor of a seemingly optimized "product." In the face of this changing landscape, individual educators are taking matters into their own hands, and finding new (and old) ways to center "process" in their classrooms again.
New (old) methods
How are professors adapting to, and resisting, the AI-saturated university? Here are just a few examples of the practical pedagogical shifts happening in college classrooms today:
Connecting assignments to a clear purpose:
One of the things that consistently comes up in students' justifications for AI usage is that they don't connect with or care about the material or the assignments. Maybe they're taking a required course that they don't feel passionate about, or maybe the structure of the class just isn't drawing them in. Dr. Matt Dinan, a professor in the Great Books Program at St. Thomas University, argues that one of the most important things he can do for his students is to give their work a clear sense of purpose. He approaches this, first, by helping students recognize the significance of their learning process, and then by being very explicit about how each lesson and assignment connects with that greater purpose. This means providing students with clearer guidelines and expectations, lower stakes around grades, and one-on-one support, so that they feel fully accompanied in their intellectual risk-taking.
Creative return to blue books:
While blue books have something of a bad rap among older generations (who may remember using them to take long handwritten final exams), the rise in AI plagiarism has given these quaint assessment tools a second life. Professors Danielle Kane and Claire Mason, who both teach at Purdue University, have brought blue books back into their classrooms, less as test-taking tools and more as semester-long records of the learning process. By creating a grading structure centered around daily reflection and experimentation, Kane and Mason have been able to measure their students' understanding, growth, and progress over the semester, scaffold longer assignments, and reconnect students to the importance of rigorous engagement with their own ideas.
Reading a whole book:
National statistics show that reading among young people has been in decline for years. While educational interruptions at the start of the COVID-19 pandemic contributed to that decline, experts say that it began before those interruptions and hasn't bounced back in the time since. One likely contributing factor is the shift in high school (and middle school) curricula towards "teaching to the test." In a reading and writing context, the incentive then becomes preparing students to master very specific standardized responses to texts (or rather, excerpts from texts), rather than allowing them to explore their own reactions and interpretations in discussion with others. What some college-level educators have noticed is that students are entering the university without ever having read a full book in a classroom context. Dr. Helen Choi, a professor in the School of Engineering at USC, has decided to implement a simple practice for her students: reading an entire, physical book together as a class. Students then have the chance to discuss, debate, and question what they've read together in person.
Project-based learning:
In lieu of formal exams and final papers, many educators across fields are basing their assessment of students on participation, discussion, experimentation, and more in-depth longform projects. A.J. Juliani, an instructor at UPenn's Graduate School of Education, is in favor of integrating AI into some aspects of education, but with this in mind, emphasizes the importance of assessing process over final product. One of the ways he approaches this is through process-based grading, where students are graded on meeting certain process goals, check-ins, and benchmarks, rather than on fulfilling a rubric with their final product. In encouraging students to document (and celebrate) their struggles as well as their successes, Juliani opens students up to more robust creative exploration.
Final thoughts
For students entering college in the next several years, AI is likely to be a presence on campus in one way or another. Right now, policies vary significantly from school to school, and even from class to class, which can be confusing for students to navigate. As with many aspects of the AI bubble, it is too soon to tell how widespread or effective the adoption of AI into higher education will be.
One thing we can know for sure is that true inevitability is incredibly rare. Anyone who is trying to tell you otherwise probably has something to sell you. When we convince ourselves that something is inevitable, we forget our agency, our ability to critique and dissent, to innovate and to imagine otherwise. And these are exactly the things that a great education, and great educators, foster.