Document Type

Presentation

Creative Commons License

This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.

Publication Date

11-16-2017

Abstract

All assessment practitioners know the ultimate goal is closing the loop. But we also live with the reality that assessment is beholden to the Gods of Accountability. As assessment professionals, we dutifully attend to the cycle of improving student learning, but at each step in the cycle we must continue to hone our skills. Results reporting may seem like the least sexy of all the steps in our practice, but we argue that it is singularly important and represents a high art of the assessment craft. Results must be engaging, actionable, meet accountability mandates, and, importantly, be read by those who can either effect change and improvements or endorse (read: resource) the recommendations. How can we improve reporting?

Perhaps we are better off first diagnosing the symptoms of reporting “illnesses.” Reports are generated either too late (end of semester) or reach back too far (“that is so last year”) for any recommendations to be enacted. Alternatively, your audience may be reading for accountability when you need them to read for deploying or resourcing change. It is also possible that readers may not know they are in a position to act on changes based on assessment findings. Sadly, it is just as possible that they simply may not want to. Worse still is the report that, for whatever reason, doesn’t get read at all. Why it doesn’t get read (bad timing, lack of attention to audience, impenetrable complexity) is the real key to understanding how to get readers to engage with your reports such that improvement or change is obvious. We maintain that your assessment reports should be met with excitement and anticipation.

How much do we, and should we, think about improving our assessment reporting? Who can help us turn our reports into actionable efforts that focus on improving student learning? Who should we write these reports for, and for what reason? Do our reports get read the way we think they should be? Why or why not? These questions can drive rich conversations for assessment professionals to continue having with each other as we work to get past simply doing assessment and move toward using results.

We can work together to learn from each other which reporting strategies have worked well and which have missed the intended mark or downright failed. What other factors contribute to successful reporting that leads to acting on results?
