Part one of our two-part interview with Mahnaz Moallem, Professor of Instructional Technology at UNCW.

When examining the future of learning and assessment, the nuances take the form of a digital matryoshka doll – one piece unfolding into another, then another, and on and on. While some pieces resemble tradition – multiple-choice questions and factory (or cemetery) models of education – others take the shape of disruption: AI, creativity, and critical thinking.

To rethink assessment is to rethink every single nuance, all the way down to the core of learning.

One pioneer in the field of STEM education and assessment research is Prof. Mahnaz Moallem – Professor of Instructional Technology and Research and Grant Coordinator at the University of North Carolina Wilmington (UNCW) College of Education, where she has been a faculty member since 1993. Prof. Moallem has received numerous awards, including the Annual Achievement Award from the Association for Educational Communications and Technology and a place among INSIGHT Into Diversity magazine’s “100 Inspiring Women in STEM.” She has also served for two years as a rotating scientist (IPA) at the National Science Foundation and as an MIT program coordinator for nearly two decades.

I contacted Prof. Moallem to hear her thoughts regarding assessment and education in the twenty-first century. We delve into everything from technology and robots to girls in STEM and changing paradigms.

Assessment in the Digital Age vs. the Industrial Age

In a keynote you gave for OpenLearning’s Social Learning Conference, you explained that while the education system has changed over the years, we are still using the Industrial Age’s “factory model” of education. In your research, you emphasize that we have outgrown this model of impersonal, passive learning.

Learnosity

If this model of learning is no longer working, what can we adopt in its place?

Prof. MM

By rethinking assessment, I mean assessment that focuses on students demonstrating the application of knowledge, skills, and other attributes – such as motivation and engagement – when doing real-world tasks.

Also, by rethinking assessment, I mean thinking of assessment as being built into the process of learning and focusing on a progression toward student mastery, rather than representing it as the end of learning. This way of thinking about assessment is very challenging and requires not only conceptualizing the teaching and learning process differently but also seeing assessment as “educating and improving student performance, not merely auditing it” (Wiggins, 1998, p. 7).

I also think the design of assessment should focus on developing a sequence of progressively more difficult real-world tasks to which students respond, demonstrating step by step that they can apply knowledge, skills, and other harder-to-measure outcomes of learning (creativity, critical thinking, collaboration, communication, etc.). These performance tasks must be carefully designed to provide evidence that is linked to the cognitive model of learning and that supports the kinds of inferences and decisions that will be made on the basis of the assessment results.

Learnosity

Do you think traditional assessment tools, such as the multiple-choice question, will be displaced by something more apt and dynamic?

Prof. MM

I think before we can claim whether an assessment tool is appropriate or properly measures what we intend to measure, we need to pay attention to several issues.

Assessment may serve multiple purposes in the education system. For example, an assessment may be designed for large-scale and often high-stakes uses (a broader concept associated with various accountability policies). It could also serve formative, summative, and evaluative functions in typical domains of classroom instruction – each with its own purpose and each requiring a specific design and validation process.

Thus, because assessments are developed for specific purposes, the nature of their design is very much constrained by their intended use. In other words, the purposes of assessment should define the process of designing an assessment that can make a claim about student knowledge.

I think a well-designed assessment system – one aligned with the purpose and function it serves – offers different forms of empirical evidence to determine how well it addresses important forms of domain-specific knowledge and skills (content-specific) and domain-general knowledge and skills (e.g., 21st-century knowledge and skills; Partnership for 21st Century Learning, 2016).

Thus, an effective assessment relies on multiple strands of valid and reliable evidence to permit proper triangulation of inferences, supporting claims about both domain-specific and domain-general knowledge and skills. However, an effective assessment should also emphasize the design of complex and/or challenging tasks (including open-ended and/or ill-structured ones) that measure deeper learning, employ meaningful or authentic real-world problem contexts, make student thinking and reasoning visible, and explore innovative approaches that utilize new technology and psychometric models.

A learner-centered approach

British education expert Anthony Seldon once made the bold claim that robots will replace teachers by 2027. While education is vital in preparing future generations for a world that is increasingly digital, equating education to machine learning disregards the real nature of learning and assessment, which you maintain is both personal and dynamic, catering to a learner’s unique skills, needs, and drives.

Learnosity

What are the benefits of this kind of personalized, learner-centered approach to education and assessment?

Prof. MM

I foresee that we will continue to make progress in using analytic tools to identify gaps in learners’ knowledge and skills – particularly within immersive learning environments (e.g., virtual reality, augmented reality, and digital games) – to personalize learning and address learners’ needs more effectively, as well as to accomplish the wide range of responsibilities that teachers must complete and, in many cases, cannot fulfill.

I also agree that further development in the area of AI agents in computer-supported learning environments will help us provide much of the cognitive and metacognitive guidance and scaffolding needed to assist learners in their learning processes, which again could enable teachers to provide individualized support within a group environment.

However, I don’t think advancement in learning analytics or artificial agents or robots will take the place of the teacher. Learning is a social and emotional endeavor as much as it is a cognitive task. Thus, the role of the teacher, mentor, or facilitator is critical in engaging learners in the types of activities that allow them to develop empathy, compassion, ethical values, and emotion, which are associated with cognition and creativity.

Learners also need to be assisted in re-examining their assumptions and perspectives and reflecting on who they are and how they can utilize their knowledge and skills to help their communities and humanity in general. Thus, the presence of a human teacher is critical and necessary and cannot be replaced by machine learning.

Learnosity

How do you believe that we as researchers, teachers, entrepreneurs, and academics can usher in this new paradigm?

Prof. MM

I think we can initiate and guide this new paradigm.

At institutions of higher education, we should challenge ourselves to define what we should do to facilitate and enhance student learning and growth. This effort will help us design new learning environments and assessment systems that emphasize the effective use of assessment results.

Since we as faculty – both teachers and researchers – are closest to these issues, the main challenges we will face are deciding whether to use outcome-based assessment and determining useful interventions and remedies for change. Faculty members increasingly appear to believe that deep learning and outcome-based assessment are worth pursuing, even apart from the requirements of accreditation. However, they are usually not directly involved in setting institution-level strategies for assessing student learning and using the results to improve teaching and learning practices. Faculty cannot spontaneously reform the teaching and learning process and administer a campus-wide assessment plan on their own; institutions of higher education must facilitate faculty involvement through information, guidance, and mechanisms of support.

This effort is very challenging; it requires careful planning and depends on many factors, such as the size of the institution and faculty commitment, including faculty recognition that the process is connected to their own interest in improving teaching and learning.

Another challenge relates to the personal and social aspects of students’ development. Although these are seldom explicit teaching goals of courses in the academic disciplines, all faculty members attach importance to affective outcomes. However, I should admit that because such outcomes are implicit and lack a specific place within the total college experience, personal and social development goals are hard to define and are often not included in assessment practices.

Read Part Two of our interview with Professor Moallem.