Student competitions involving coding and technology are proliferating. They are a fine, fine thing and a great way to incentivise kids to get involved in coding and electronics. But the number of competitions, taken together with the burgeoning number of tools to work with, seems to me to be raising some issues with the judging process.
Over the course of the last couple of years, teams I’ve mentored have won Young ICT Explorers competitions, Robocup Junior, the iAwards, the STEM Video Game Challenge, and more. I offer that simply to make clear that this is not sour grapes talking; it’s a genuine attempt to address what’s becoming a more visible issue.
Robotics competitions tend to be fairly straightforward – the robots have an assigned set of tasks to complete and it’s clear whether they do that or not. But the other competitions are much more open-ended and much more open to interpretation and confusion.
I have one pivotal point: ‘student’ is not a knowledge domain. If you were running a competition for medical technology you’d presumably choose judges who know something about medical technology. If you were setting up a category for environmental innovation you’d find people who knew about the current state of the art in environmental technology. In other words you’d have the judging done by domain experts. ‘Student’ or ‘young’ does not qualify as a domain; and if you’ve seen the quality of what many students are producing in high school, you’ll understand you do need people who know what they are looking at to judge them. If that is too difficult for organisers to achieve, and I get that it’s not a simple request, then they need to give some thought to the structure of their competition.
Some further observations:
1. The environment does not equal STEM – the ‘E’ is ‘engineering’, not ‘environment’ – so having a cute crab or squirrel on your project should not equate to science or technology. There’s a real tendency amongst competitions aimed at students to rate anything involving the environment unreasonably highly compared with something practical or commercial.
2. In the same vein, social good should not be a criterion unless it’s stated to be one. Judging criteria should always be stated clearly, and they should be adhered to. If the competition doesn’t say that having some socially worthy idea at its core is part of the judging process, then that shouldn’t determine success. So, for example, an automated water pistol should be judged on the same basis as an automated watering system unless social good is an explicit judging criterion.
3. Don’t get taken in by a flashy result. A program created by sticking together parts of a kit – whether in the real world or online – is entirely different from something someone has sat down and built from scratch. There increasingly needs to be some way to differentiate something made in Gamemaker or using a Raspberry Pi kit from something programmed from a blank screen in C++ or made using an Arduino, some wires and sticky tape. The core problem here is that a kit will provide a flashy end-product, but that’s not indicative of innovation or effort.
4. Look at the code. Competitions need to look not just at the finished product but at the underlying code, to get a sense of how much work went into a project.
5. Give feedback. Entering the competitions should not be an end in itself. It should be part of an ongoing process. Thus ideally, at least for shortlisted entries, there should always be feedback that would allow the teams to grow and improve. As an added benefit, the need to give feedback also helps ensure a transparent and objective process.
6. Break up the years. High school covers an enormous range and to pit a 12-year-old from year 7 against a 17-year-old from year 12 is not always going to be reasonable.
7. Recognise the clubs. With CodeClub and other groups increasingly providing the coding experience in schools – especially in Years 7-9 – it’s important to recognise the difference between something done on the student’s own time and something done as their HSC project.
Now, none of this is necessarily easy. It makes it harder to find judges and to do the judging. But one of the persistent problems I see amongst students today is that they look for the easy way to do something; they are seduced by the instant gratification they see on YouTube and TV. As adults we should not be playing to that alone; we should be recognising the huge effort that creates the less flashy but truly original project, because that reflects the reality of success in the outside world.
While the competitions often contain a core of similarity, they are all different and so there’s no one answer to how to improve things. The starting point is a clearly articulated guiding principle backed up by transparent judging guidelines. So, purely for example, if you are the STEM Video Game Challenge you should be immediately rejecting any game that does not meet the stated aim of teaching science, technology, engineering and maths. If you are Young ICT Explorers you should be marking up the clever contraption made with wood, duct tape and inspiration and marking down the neat device made from a kit.
All student competitions should have finalists. This not only gives a bit of glory to a wider group, but it also narrows down the group that needs specialist judging. And there should be specialist judging if the competition is open-ended enough to attract a wide range of entries.
Student competitions in IT are a fabulous way of recognising the kids’ efforts and encouraging greater involvement, but to achieve those ends it’s essential that they are seen as clear, fair and effective. Most of the competitions are new enough that there is room for improvement, and I know the various organisers I’ve spoken to have great enthusiasm for making the competitions the best they can be. Perhaps we need a competition judging the student IT competitions…