molly's guide to cyberpunk gardening

the rochester provocations on AI in education: some thoughts

I've been spending my work-adjacent time reading the Rochester Provocations, a series of eight statements intended to provoke conversation about how schools respond to generative AI:

the Rochester Provocations

I feel provoked to converse.

In conversations on the AASL listserv, I've come to understand that the Provocations treat generative AI as inevitable. It's here, it's not going anywhere, and it's going to change how we do absolutely everything, so might as well prepare students for that. This isn't the first education-related document I've seen to treat generative AI this way. Yet every time I see this attitude, I ask the same question: What is the use case for LLMs, image generators, and similar generative AI things?

Because so far, the only real "use case" for generative AI seems to be to convince executives that it can replace employees. And if the only real use case is to convince executives it can replace employees, then why do our students need to understand it? Is a future where the only career path is "fight this AI for your right to eat" really one we want to accept as inevitable? Is it one we want to teach our students to accept as inevitable?

Anytime any "generative AI in education" discussion treats generative AI as inevitable, I distrust it. To take inevitability as a given seems to doom our students to that inevitability. I don't like it.

Some early thoughts on each of the eight Provocations:

1. There is no AI problem: Education itself is a wicked problem; assuming there is an AI-in-education problem risks us thinking there is an AI solution.

This one strikes me as a non-sequitur. Could not "assuming there is an AI-in-education problem" also lead us to the conclusion that there is "a no-AI solution," or one where we yeet generative AI from the classroom entirely?

I think what they're trying to get at is that generative AI is yet another tool/trend that affects learning, just like PowerPoint slides or filmstrips or slates were. Again, though, I question why we treat every new trend as inevitable, particularly when facts supporting that argument are NOT yet in evidence.

2. We can be agents of validity or victims of cheating: We cannot control cheating and focusing on it will lead to an adversarial relationship with students; where we have agency is in how we engage students with more meaningful tasks that lead to an accurate understanding of student learning.

I agree with this one. I don't see how it's generative AI-specific. I do see why it needs to be included in any conversation about generative AI.

3. Our assessments were broken before AI: We need to rethink how we understand and implement assessment including an emphasis on developing and implementing valid assessments in pre-service teacher programs.

I can't remember a time I didn't agree with this one. My only question here is, if we've known this for decades (and we have), then why *aren't* we "rethinking," "developing," and "implementing"?

I do know some of the reasons for this, including how "sticky" grades and grading can be. But still. This is far easier said than done. And I question the value of talking about it when generative AI is being billed as a way to make our current modes of assessment easier on teachers. "Generative AI can create rubrics and assessments for you!" is a common talking point. We're being sold tech that lets us give broken assessments BUT FASTER. This is not tech that is interested in rethinking assessment. Indeed, to rethink assessment would *lessen* teacher and student use of AI - an approach anathema to corporate interests and thus a non-starter in our current political environment.

We need better assessment. We've needed it for decades. A big reason we don't get it is that corporations would lose money if we did. Generative AI's presence or absence has zero effect on that problem.

4. There is no such thing as AI-proof: Be it assessments, careers, or anything else, the ubiquity of AI and pace of growth means AI-proof is a problematically alluring impossibility.

Objection: Assumes facts not in evidence.

Generative AI is only "ubiquitous" because it's in the early stages of the tech takeover model: Make it cheap and stuff it in everything, so the sheer number of people tripping over it implies "growth."

The moment generative AI models are forced to pull their own weight - to earn back the money they suck up to run - their growth, and probably their existence, will collapse. Full stop.

Also, I run a wholeass website and write wholeass fiction focused on sussing out what is "AI-proof" about humanity. The only "problematically alluring impossibility" here is the belief that we can blithely dismiss the concept of "AI-proof" based on generative AI's "growth."

Finally: How are you going to claim that nothing is AI-proof in Point 4 and then insist we need to "research" what it is about teaching that AI can't do in Point 5 (see below)? If nothing is "AI-proof," then that includes "human ingenuity, creativity, and relationships" - the very things cited as our best offense in the fight against the AI for our right to eat.

5. Most things a human teacher can do, AI can mimic: If we cannot identify and prioritize what human teachers bring to the classroom, we risk replacement. We must research the impact of human ingenuity, creativity, and relationships.

This is "fight the AI for your right to eat" again. The emphasis is wrongly placed. It assumes generative AI replacing workers is inevitable and concludes that the onus is now on workers to "prove" they bring value to the equation. The emphasis *should* be on "mimic," as contrasted with "do." Generative AI can predict the most likely strings of words teachers use, in person and in print. That's it.

I'm not opposed to "research[ing] the impact of human ingenuity, creativity, and relationships" per se. I am opposed to doing it as grovel fodder: "oh please mr legislator administrator man please give us wage!"

I want a more assertive approach: "Here's what you need to understand about what teachers actually do." If we're going to fight the AI for our right to eat, then let's FIGHT it.

6. Teachers must have permission to compromise, diverge, and iterate: Schools must develop a supportive and collaborative culture where teachers are empowered and expected to seek better solutions to the wicked problem we face.

I have been saying this literally since I took my first education class. One nitpick: It's not *schools* that need to develop this culture (though many could improve). *Education* needs to be a self-governing profession.

7. The real AI crisis is how we take advantage of the opportunity: There will not be a perfect solution, but we (students, teachers, institutions, society) have an opportunity to explore innovative changes to what we do.

Again, what opportunity? What is the use case that will make generative AI models sustainable - and I mean concretely sustainable? What is the use case that allows generative AI models to pay for their own chips, buildings, HVAC systems, real estate, water use, electricity, and environmental degradation? WHAT IS THE USE CASE??

8. Avoidance of AI in education is not an option: AI is unavoidable and a failure to address harm reduction feeds into the ongoing public health emergency created by the intentional design of algorithms to extract capital from humans.

I could be fully on board with this if (1) it were clear that "unavoidable" is somehow meaningfully distinct from "inevitable" in context (but I don't think it is) and (2) "the intentional design of algorithms to extract capital from humans" wasn't tacked on to the very end of these eight provocations instead of central to each of them.

If their name is any indication, the Rochester Provocations have done their job. I certainly feel provoked. I find them a mixed bag. They identify some real problems, but the assumptions on which they are based are very much not in evidence. Starting from faulty assumptions is only going to lead us to faulty conclusions.
