June 24, 2025
5 min (est.)
ASCD Blog

How AI Pushed Us to Rethink Assessment

One instructional coach's honest account of how AI disrupted the way her school looked at assessments—and sparked a much-needed shift.
Topics: Assessment, Technology
[Image: A student uses a laptop to complete an in-class assessment while a teacher provides instructions at the front of the room. Credit: Drazen Zigic / Shutterstock]
When generative AI tools first showed up in classrooms, it felt like a technology story—another innovation to understand, manage, or resist. It quickly disrupted routines, from how students drafted essays to how teachers evaluated student work. But educators soon realized: the changes happening in classrooms weren't just about technology. This was an assessment story, one that would force us to reexamine how we define and measure learning. 
As an instructional coach in South Florida, I saw how these tools raised urgent questions about what truly counts as evidence of learning. Teachers in my district approached AI with both curiosity and concern—some appreciated how it could scaffold and differentiate, while others feared it would replace student thinking. They started asking: Can students use AI to revise their essays? Should we accept AI-generated answers as evidence of mastery? Does a perfect-looking product still reflect learning if the student didn't do most of the cognitive work? Instead of answering these questions alone, or pretending we already knew the answers, our school leadership team created space and time to talk through these challenges.  
This process revealed a deeper truth: our traditional assessment practices weren't built for generative AI. We realized we needed to shift our lens. The goal wasn't to catch students misusing AI. The goal was to redesign assessments so that student thinking remained visible, even when AI was in the room.  

Leading With Inquiry

To make sense of these shifts, we formed a monthly learning circle made up of instructional leaders—coaches, department chairs, and assistant principals representing a range of grade levels and subject areas. These weren't formal PD sessions; we simply showed up to reflect on what we were seeing in classrooms and how AI was challenging our thinking about assessment. Each session started with a real classroom dilemma: "A student used AI to write their paragraph. Should I grade it?" or "How do I give feedback on AI-assisted writing?" From there, we asked questions, explored ideas, and left with something to try or observe. School administrators supported this work by protecting time in our schedules for these meetings and, more importantly, by valuing the outcomes. Ideas from our learning circle influenced PD design, faculty meeting discussions, and, eventually, classroom practices.

What emerged wasn't just a tech integration strategy. It was an evolving framework for rethinking how we measure learning in authentic, growth-oriented ways. Several consistent themes surfaced in our discussions: 
  • Assessment as Process: Teachers realized they needed to focus more on students’ processes and thinking, not just a final product or answer. Teachers started asking students to submit drafts, notes, or voice recordings explaining their thinking. 
  • AI as a Learning Partner, Not a Shortcut: When AI was used, students were asked to reflect on what parts they did themselves and what they learned from the tool. This shifted the emphasis from "Is it cheating?" to "What did you learn by comparing your draft to the AI's output?" 
  • Feedback Over Grades: Teachers found that traditional rubrics weren’t always helpful when evaluating AI-supported work. Instead, they began using narrative feedback and conferencing to gauge understanding. 
  • Performance Tasks and Student Voice: We saw a renewed interest in oral presentations, live demonstrations, student-created rubrics, and other formats where students had to defend their thinking. These made learning—and assessment—harder to fake and easier to personalize. 

Scaling the Shift

As our learning circle deepened, we realized these insights couldn't stay siloed. During school-wide PD sessions, we created workshops centered around redesigning assessments for the AI era. We didn't start with tools—we started with student learning goals: What do we want students to think about, apply, or demonstrate? From there, we explored how AI might support, not replace, the cognitive processes we hoped to assess. 
Teachers brought current assessments and explored: 
  • How might this task change if a student uses AI? 
  • How can we redesign it to prioritize thinking, reasoning, or creativity? 
  • What scaffolds might promote equity without doing the work for students? 
The results were transformative. A vocabulary quiz turned into a collaborative concept map. A compare-and-contrast essay became a student-led podcast debate. Instead of restricting AI, teachers were rethinking what evidence of learning looks like. 


Leaders must model the kind of thinking we want to see. In faculty meetings, the leadership team shared our own learning process—what we were trying, where we were uncertain, and how feedback helped us grow. This transparency helped staff feel safe to experiment. When they saw that leadership was grappling with the same big questions—What does mastery look like now? How do we give feedback that builds independence?—they felt more empowered to take risks.  

Turning Assessments into Dialogue

One of our biggest mindset shifts was seeing assessment not as a one-note judgment of what students did or did not know, but as a dialogue. When students used AI, we asked them: What did the tool get right? What would you change? What does that tell you about your own thinking? 
This kind of reflection became the assessment itself. In one class, students wrote reflections comparing their writing to an AI-generated version. In another, students annotated AI responses, evaluating them for accuracy and depth. These practices brought metacognition front and center. They allowed students to become assessors of their own learning and gave teachers richer insights than a rubric score ever could. 
But as we implemented these new approaches across classrooms, we faced some practical challenges. As with any innovation, equity questions quickly emerged. Not all students had the same access to devices or AI tools. For example, in one 7th grade ELA class, students were asked to revise a paragraph using an AI tool, but a handful of students lacked reliable internet access at home and couldn't complete the task. This highlighted how digital access disparities could widen instructional gaps if left unaddressed. Additionally, not all teachers felt equally equipped to evaluate AI-assisted work. One science teacher shared that she wasn't sure whether to grade a lab report that used AI for formatting and summarizing results. She asked herself: Was it the student's thinking or the machine's?
To address these challenges, we emphasized three principles: 
  • Transparency: Students should disclose when and how they use AI. 
  • Intentionality: AI use should support, not replace, thinking. 
  • Reflection: All AI-supported tasks should include a student reflection. 
By focusing on student thinking and clear communication, we made space for all learners to participate meaningfully regardless of their tech access or experience. We also didn’t assume students would know what “disclosing AI use” meant. Teachers explicitly modeled how to acknowledge AI support in their own examples, such as adding a sentence at the bottom of an assignment that said “I used ChatGPT to help brainstorm topic sentences.” In writing tasks, students were encouraged to include a short statement describing how they used AI, and which parts of the work were their own. We also created simple reflection prompts and sentence starters to scaffold this habit, especially for younger students or English learners. Some examples include: 
  • “I used AI to help me with...”  
  • “I revised AI’s response by...”  
  • “The tool suggested ..., but I decided to...”  
Teachers reviewed these disclosures during formative assessments and conferences with students. This conversational approach helped normalize responsible use while making it easier for teachers to evaluate student thinking accurately. It also made expectations clearer for all learners, regardless of their comfort with technology.  

The Mirror We Needed

Within six months, these individual changes—new assessment formats, reflective practices, equitable principles—began to add up to something larger. The biggest win wasn't a new policy or checklist. It was a cultural shift. Teachers started asking: 
  • What am I really assessing? 
  • How do I know students understand? 
  • How do I make feedback part of the learning journey? 
Students, in turn, started asking better questions about their own work. AI wasn't the threat we feared. It was the mirror we needed to reflect on our assessment practices and ask whether they were truly supporting learning.
That said, the process wasn’t without bumps. Some teachers initially struggled with the ambiguity. One common concern was: How do I grade something that was co-created with AI? Others worried that emphasizing reflection over traditional rubrics might lead to inconsistent expectations or confusion for students.  


On the student side, there was also some pushback. A few students saw the reflection prompts as extra work, especially when they didn’t feel confident articulating how they used AI. In one class, when asked to compare their writing to AI-generated output, students rushed through the exercise, treating it like a checklist rather than a thinking task. It became clear that we needed to better scaffold what meaningful reflection looked like. We adjusted by slowing down. Instead of introducing AI and reflection protocols simultaneously across all subjects, we focused on piloting in a few classrooms first. We also created models and examples of high-quality reflections. Teachers began building time into lessons for guided reflection and peer sharing, which helped students internalize the value of the process.  
The biggest lesson? Implementation takes some trial and error. Building a culture of transparency and thoughtful assessment didn’t come from mandates; it came from ongoing dialogue, shared learning, and a willingness to recalibrate when things didn’t work the first time. 

AI in Schools

The innovation-focused February 2025 issue showcases examples of the ways (large and small) that schools and educators are using AI to enhance instruction and transform the nature of their work—and student learning—for the better.

Anna Bernstein is an instructional coach in South Florida, passionate about empowering educators through innovative teaching practices. She holds a doctorate in curriculum and instruction, with a focus on how AI is reshaping the K-8 classroom. Anna is committed to helping teachers harness the potential of emerging technologies to create more engaging, equitable, and effective learning environments.
