General Card #1723
Six EM Classroom Assessment Best Practices
Six guidelines that support robust EM classroom assessment.
At the Fulton Schools of Engineering (FSE) at ASU, EM is implemented and assessed like ABET. EM objectives are defined across all 17 ABET-accredited programs as EM@FSE. The EM@FSE framework has 17 EM indicators, each mapped to ABET Student Outcomes (see the EM@FSE Framework resources, below).
As with ABET, most assessment of EM objectives takes place in the classroom. FSE has undertaken extensive faculty development (i.e., workshops and coaching) to support faculty in integrating EM@FSE indicators into their assignments, instruction, and assessment. The open-ended, rubric-based responses that typify EM assessment differ markedly from the exact single-word, number-, or formula-based answers on traditional engineering assignments and exams. Process-oriented outcomes are more difficult, and more time-consuming, to assess than product-oriented outcomes. Consequently, although faculty strongly support integrating EM into engineering curricula, assessing EM objectives effectively has been an ongoing challenge.
As the director of assessment for the EM@FSE initiative, I have worked with faculty over several months and gathered many examples of effective and ineffective classroom assessments. Lessons learned from these experiences have been distilled into the EM@FSE Assessment Best Practices, or "Robust EM Assessment," which we now share with faculty, who have found them helpful. The six guidelines of Robust EM Assessment are:
1. EM assessment requires that students demonstrate curiosity, make connections, and/or create value.
2. At FSE, assessing EM means assessing the EM@FSE (a-q) indicators.
3. Each EM@FSE indicator is assessed explicitly.
4. Each EM@FSE indicator is assessed individually.
5. The assessment determines a level of performance at, above, or below proficient.
6. EM indicators are assessed multiple times during the term.
These guidelines, along with examples, are explained in the Robust Assessment resources, below.
I would appreciate hearing from anyone who would suggest additions or modifications to these guidelines.
I find it an effective starting place to focus on a faculty member's specific course assignment or module. I then ask, "What's the EM-ness of this?" They might answer in terms of the 3Cs or some other language. After clarifying the specific objectives (which faculty often aren't clear about initially), we turn to the 17 a-q indicators, and I ask which best capture their EM objectives. Of the 17, there are usually a few that fit, but even one is sufficient.
Next, focusing on a single objective or indicator (often they are one and the same, but not always), I ask, "What is it in the assignment deliverable(s) that will demonstrate proficiency on this objective/indicator?" Often, faculty cannot articulate this to me, which means they cannot articulate to students what EM competencies they want students to know and be able to do. So we work on this until we have 2-3 criteria that are specific, relevant, and teachable. In our definition of EM assessment at FSE, faculty have to link student performance to proficiency. So our last step is creating categories for each level of proficiency on the rubric, which are, at minimum, at, above, or below proficient.
Faculty have been surprisingly patient stepping through this process. In fact, most seem eager to learn these basic tenets of effective instruction and assessment.