This is my 500-word reflection on carrying out the PgCert group project on “e-Assessment and pedagogical practices”. I am supposed to write down:
a) how the changes we recommend to technology assessment practices will impact on what we actually do; and
b) how our experience of group work on this project will or will not influence our use of group work in our own teaching.
The group work has no doubt deepened my understanding of the power of assessment, and of its different forms (group assessment, self-assessment, peer assessment, formative and summative assessment, learning portfolios and feedback approaches). Done properly, assessment and feedback can improve student learning outcomes. The readings also helped me to conceptualise the problems I face in teaching in higher education (Heywood, 2000; Biggs, 2003).
My participation in this project has also allowed me to develop some thoughts on how to assess new kinds of outcomes, especially assignments produced on social media (blogs, wikis, interactive websites). Social media technologies can help with both formative and summative assessment, though their most prominent contribution is still to the former. Inspired by Harry G. Tuttle, whose book “Formative Assessment: Responding to Your Students” is said to “offer a wealth of tools for ensuring that a good feedback system is set up in classrooms”, I am especially intrigued by how social media technologies can facilitate:
* the generation of an activity that requires a student response through technology;
* the collection of student responses through technology;
* the interpretation or the diagnosis of the gap between where the student is and the desired goal through technology;
* the provision of meaningful suggestions to help the student close the learning gap through technology.
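The last three steps of this cycle can be made concrete with a minimal sketch. All function and variable names here are my own illustration, not Tuttle's, and the keyword-matching “diagnosis” is a deliberately crude stand-in for a tutor's judgement:

```python
def diagnose(response, goal_keywords):
    """Diagnose the gap: which goal elements are missing from a student response.

    A crude keyword check stands in for real interpretation by a tutor.
    """
    return [k for k in goal_keywords if k.lower() not in response.lower()]

def suggest(gaps):
    """Turn the diagnosed gaps into a meaningful suggestion for the student."""
    if not gaps:
        return "Goal met - well done."
    return "Consider addressing: " + ", ".join(gaps)

# A collected student response, checked against the desired learning goal.
feedback = suggest(diagnose(
    "My blog post discusses formative assessment.",
    ["formative assessment", "peer feedback"],
))
```

In this toy run the response covers one goal element but not the other, so the feedback points the student towards “peer feedback” as the gap to close.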
I am quite pleased with these audacious words I put down in the group work (yes, I admit this is a little complacent):
How to measure and evaluate performance at this level of seamlessness in the digital world is challenging. As of now there exist no assessment technologies that allow tutors to move swiftly between these sites and monitor students’ performance in a streamlined fashion. Tutors have to be told by students what they have done (where they posted a message or uploaded material), then visit those individual sites to see for themselves. This way of assessing and monitoring student work is piecemeal and unsystematic.
Apart from the absence of an assessment technology that can monitor and evaluate student work in the realm of social media, another issue is whether students’ activities on social media should be evaluated at all, and if so, how. Should their performance be measured by volume (posting frequency, page views, numbers of fans/followers), engagement (e.g., number of comments, sentiment in comments, size of the social network), conversion (e.g., leads generated, link analysis, web analytics), return on influence, or simply by taking their motivations and ideas at face value? In short, we need to ponder what criteria and indicators we would use to measure performance on social media.
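To make the “criteria and indicators” question tangible, here is a minimal sketch of a weighted rubric that combines several of the indicators above into a single score. The metric names, weights and normalisation against the cohort maximum are all illustrative assumptions of mine, not a proposal from the group work:

```python
# Hypothetical weights over volume and engagement indicators (must sum to 1.0).
WEIGHTS = {"posts": 0.2, "page_views": 0.1, "comments_received": 0.4, "followers": 0.3}

def activity_score(metrics, cohort_max):
    """Combine raw social-media indicators into a 0-100 score.

    Each metric is normalised against the cohort maximum so that
    volume-heavy indicators do not swamp engagement indicators.
    """
    score = 0.0
    for name, weight in WEIGHTS.items():
        if cohort_max.get(name):  # skip metrics the cohort has no data for
            score += weight * min(metrics.get(name, 0) / cohort_max[name], 1.0)
    return round(100 * score, 1)

score = activity_score(
    {"posts": 12, "page_views": 300, "comments_received": 8, "followers": 15},
    {"posts": 20, "page_views": 600, "comments_received": 10, "followers": 30},
)
```

The real difficulty, of course, is not the arithmetic but choosing defensible weights, which is exactly the pedagogical question at issue.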
The concept of interaction embedded in social media technologies also implies a degree of egalitarianism. Given that content produced by students will be openly available (unless deliberately made private), tutors will no longer be the sole assessors of students’ intellectual products. It would be interesting to explore peer assessment, and not just peer assessment among the students themselves but also comments from Internet users who come across the student work. The former is being explored by some e-assessment demonstrators (e.g., WebPA); the latter has not yet been widely discussed.
My experience of the group work has also prompted me to rethink the dynamics of group work. Leadership is inevitably needed in group work, but it takes different forms. In terms of Kurt Lewin’s leadership styles, what I observed in our group was something between a “participative” (democratic) style and a “laissez-faire” (delegative, hands-off) one, rather than a strong “authoritarian” style. Tasks were discussed, defined, scheduled, assigned and carried out by group members. We all have responsibilities and commitments (foreseen or unforeseen) beyond the PgCert assignments, but group members were generally motivated and helpful towards one another. It has been a grown-up style of collaboration, and a good opportunity to learn to negotiate, communicate and take action.
Technologies played an important role in our collaboration. We used mainly a wiki and email to update the document (and, of course, a word processor to write it). It was interesting to see how differently the wiki was used by people from different schools. Contrary to the common stereotype, those outside the School of Computing showed as good an understanding of the wiki, and as much skill in operating it, as those within, if not better.
Not directly arising from this group work, but related to it, I have also changed my assessment and marking criteria for students’ group work. To recognise individual contributions to assessed group work, I introduced a marking mechanism based on peer assessment, which differentiates each student’s individual mark from the team mark. Each group receives the same team mark (for the blog, presentation and written essay), but each group member receives an adjusted mark based on peer assessment of their individual performance. The result of rolling out this procedure was quite positive.
To conclude, the findings from our group work deepened my understanding of the constructive alignment of teaching and learning activities, curriculum objectives and assessment tasks. My confidence in group work has also increased, and I will encourage group work and group assessment in my own teaching. I have embarked on a journey of exploring what social media technologies can contribute to assessment, and how, and I have no doubt that I will continue to pursue it for the rest of my academic career.
Biggs, J. (2003). Teaching for Quality Learning at University. Buckingham: The Society for Research into Higher Education and Open University Press.
Heywood, J. (2000). Assessment in Higher Education: Student Learning, Teaching, Programmes and Institutions. London: Jessica Kingsley Publishers.