Recent conversations with colleagues at AdelaideX have us wondering what assessment will look like in the future. When I think about that, the questions that come to mind are:
- What isn’t working about how we currently assess learners? and
- What might assessment look like if it leveraged the potential of technology to assess more efficiently and effectively at scale?
The latter is the more intriguing and attractive question, isn’t it? The first few ideas that came to my mind were:
- Intelligent text analysis for short-answer or essay questions (notoriously difficult to design well in any LMS quiz tool). It would support multiple permutations of answers, drawing on suggested answers from educators and scanning external resources, while also safeguarding against plagiarism by checking previous submissions and plagiarism databases. It would build a database of answers and raise flags for educators about questions that are commonly answered ‘incorrectly’ or along a tangential theme, prompting them to review the question, the model answer or the instructional content. This would allow online learners to paraphrase or apply new concepts in different situations, and be assessed on their judgements or ability to propose opinions, rather than on simple factual recall (or the ability to have the quiz and the instructional video open on two different screens).
- AI video conferencing for oral/video presentations. It would perform basic biometric recognition and verification of the learner, in support of online/distance proctored assessment. It would support intelligent conversation and example questions that could encourage a learner to extrapolate a theme, steer them away from tangents, raise alerts for plagiarism or ‘incorrect’ content or skills, and prepare a summary report for educators about the key points presented. Ideally, it could prepare a summary grade against a rubric of elements, with data points collected throughout a presentation (e.g. timestamps where an unsuitable source was referenced). This would allow for the assessment of deeper reflection and higher-order thinking skills, with limited interaction from the educator in the first instance.
- Intelligent text, personality and psychosocial analysis tools to dynamically form collaborative assessment groups. I see it as a combination and extension of Moodle’s Workshop, Groups and Team-Builder tools (the latter being a UNSW custom plugin), where the creation of teams/groups uses data about and from learners to automatically form groups according to different grouping approaches – e.g. groups where learners have opposing views on topics or are a mixture of cultures (to encourage empathy and diverse discussion), or groups where learners are all part-time and active in the evenings, or share a career goal (to encourage shared motivation). It would use a mixture of qualitative and quantitative data (learner profiles, specific questionnaires, enrolment information, LMS activity) and support educators in considering the group dynamic best suited to an activity. It would also provide groups with spaces for collaboration (discussion forums, hooks into a Facebook group or Twitter hashtag, sync with Google Drive/Dropbox/Box/Office365/FlipGrid) and a way to disseminate conversations or artefacts. Like Workshop, it would support group submissions and evaluations from self, peers and educator, at formative and summative stages. It’s important that it can sync with wherever learners already are in terms of social media and online collaboration tools.
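The first idea above can be sketched in miniature: a toy short-answer scorer that compares a learner’s response against educator-supplied model answers using token overlap (Jaccard similarity) and flags low-scoring responses for educator review. Everything here – the function names, the threshold, the sample model answer – is a hypothetical illustration, a long way from real intelligent text analysis.

```python
# Toy short-answer scorer: token-overlap (Jaccard) similarity against
# educator-supplied model answers, with a flag for educator review.
# All names, thresholds and data are hypothetical illustrations.

def tokenize(text):
    """Lowercase the text and return its set of word tokens."""
    return set(text.lower().replace(",", " ").replace(".", " ").split())

def jaccard(a, b):
    """Intersection-over-union of two token sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def score_answer(response, model_answers, review_threshold=0.2):
    """Return the best similarity to any model answer, and whether the
    response should be flagged for an educator to review."""
    tokens = tokenize(response)
    best = max(jaccard(tokens, tokenize(m)) for m in model_answers)
    return best, best < review_threshold

model_answers = ["Photosynthesis converts light energy into chemical energy"]
score, flagged = score_answer(
    "Plants use photosynthesis to turn light energy into chemical energy",
    model_answers,
)  # a reasonable paraphrase scores well and is not flagged
```

A real system would use semantic models rather than raw token overlap, but the flag-for-review loop is the part that matters: tangential answers go back to the educator rather than being silently marked wrong.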
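The group-formation idea can likewise be sketched for one grouping approach – opposing views. Assuming each learner record carries a stance score (say, -1 to 1 from a questionnaire; both the field name and the scale are hypothetical), sorting by stance and interleaving from the two extremes yields groups that each mix strongly opposing views.

```python
# Toy group former: mixes opposing views by sorting learners on a
# stance score and interleaving from the two extremes of the ordering.
# The 'stance' field and its -1..1 scale are hypothetical.

def diverse_groups(learners, group_size):
    """Chunk learners into groups that each mix opposing stances."""
    ordered = sorted(learners, key=lambda l: l["stance"])
    # Interleave: most-negative, most-positive, next-negative, ...
    interleaved = []
    lo, hi = 0, len(ordered) - 1
    while lo <= hi:
        interleaved.append(ordered[lo])
        lo += 1
        if lo <= hi:
            interleaved.append(ordered[hi])
            hi -= 1
    # Consecutive chunks of the interleaved list each span both extremes
    # (any leftover learners form a smaller final group).
    return [interleaved[i:i + group_size]
            for i in range(0, len(interleaved), group_size)]
```

Other grouping approaches (shared study hours, shared career goals) would simply swap the sort key and the chunking rule; the real tool would blend several such signals rather than one score.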
When I consider the former, I’m often reminded that the work I currently do is ‘atypical’ and not all that relevant to ‘real’ higher education.
My current team, AdelaideX, works specifically on designing MOOCs, as opposed to ‘traditional’ on-campus face-to-face or blended learning experiences, and, at times, it can be difficult to remain optimistic about the impact and potential for global open learning at scale – just see any post from the likes of Audrey Watters and others with very real and valid points against the notion that MOOCs are disruptors.
Personally, I believe that MOOCs are simply another circle in the Venn diagram of technology-enhanced learning – a wholly-online, often solitary (without direct contact with an educator/facilitator), generally content-transfer-driven, short/medium-term commitment type of learning experience. One of the disadvantages of, and indeed criticisms levelled at, xMOOCs has been the reliance on simplistic and, arguably, ineffective assessments, such as multiple-choice quizzes.
The argument I’ve heard in favour of multiple-choice quizzes is that they are easy to set up to automatically grade a learner’s response. We can configure feedback based on responses, to give some semblance of a personalised experience. Most LMSs track responses and attempts, so we can monitor learner engagement and success (or lack thereof). Most importantly, I think, they allow for all of this without any interaction on the part of the educator – which is hugely important with a learner base of 5,000, 50,000 or 100,000 enrolled learners.
The reason we stick with them is that they’re easy and familiar for educators and learners, they require little ‘live’ effort from educators, they provide instant feedback, and they allow for tracking of completion and success. But they’re not perfect. In fact, they’re often written so poorly that they don’t actually assess the knowledge or skills supposedly learnt – and it can be hard to write quiz questions that go beyond basic recall of facts and ask learners to apply, or demonstrate deeper understanding of, the concepts covered.
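Those mechanics are trivially simple, which is exactly why they scale: auto-grading, per-option feedback and attempt tracking fit in a few lines. A toy sketch (the question, feedback strings and log structure are illustrative, not any particular LMS’s actual format):

```python
# Toy MCQ engine: auto-grading, per-option feedback and an attempt log,
# with no educator in the loop. All data structures are illustrative.

question = {
    "stem": "Which gas do plants absorb during photosynthesis?",
    "options": {
        "A": ("Oxygen", "Not quite - oxygen is released, not absorbed."),
        "B": ("Carbon dioxide", "Correct - CO2 is fixed into sugars."),
        "C": ("Nitrogen", "No - nitrogen is taken up via the roots."),
    },
    "answer": "B",
}

attempts = []  # the log behind 'tracking of completion and success'

def grade(question, choice, attempts):
    """Auto-grade one response; return (correct, feedback string)."""
    correct = choice == question["answer"]
    attempts.append({"choice": choice, "correct": correct})
    return correct, question["options"][choice][1]
```

Everything the educator contributes happens once, up front; thereafter the quiz runs itself for 5,000 learners as cheaply as for five – which is both the appeal and, as above, the limitation.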
This is where technology like AI could be useful, providing a personalised and customised experience for each learner. Interactions could be based on information provided by educators and on analysis of set texts. It cannot, and should not, replace real human contact with educators, though: even if computers can process far more, far faster, than we can, we can still think creatively and innovatively (for now, anyway). It could, however, be a way of increasing connection and providing more learner-centred experiences without increasing the workload or the number of educators required.
It’s important to note that I don’t see AI – or quantitative click-based learning analytics, for that matter – as an unambiguously positive development. My principal concern is the biases consciously and subconsciously infused within any program: the fact that the AI companions in our current phones and devices are all female by default (Siri, Alexa, Cortana) points to a potentially pervasive, subconscious mindset of the servant as female, of women as subservient to the typically male programmer, as one example. There are already reports of predictive technologies reinforcing outdated gender stereotypes. Who decides the ethical and moral decision criteria for these systems, if the vast majority of software programmers are male and (I admit, I assume, given what I’ve read about many US-based software companies) cis, straight, white, abled, middle/upper class, English-first-language, and so on? I’m dubious that from this rather un-diverse pool of privilege there can be much hope for an equity-focused AI, but I look forward to being wrong.
There are many who, like me, will say AI is simply another edtech trend that will do everything but “disrupt”. But AI and predictive analytics seem to be where many technology companies are heading, and education technology won’t be far behind. Should we be worried? Luciano Floridi lists some positive challenges to overcome that could help. I can only hope that those designing and building the AI tools have read the same sci-fi novels that I have…