Better Apps: Assessing educational apps
Let’s face it, selling educational things to parents is a no-brainer: We’re a sucker market for anything that might instill a bit more knowledge into our little ones. The enormous proliferation of educational apps in the iTunes store is anything but mute testimony to this. But sifting through the dross to find apps that are actually useful is no easy task – and the star rating system is a particularly dodgy starting point for educational apps.
A straight game is generally easy to assess from the star system and reviews: Most players want broadly the same things out of a game. An educational app is a different kettle of fish: There’s the delivery of the educational content, there’s the content itself, there’s the fit, if any, with the local curriculum; and all that’s before even getting to the design and structure of the app.
Daniel Donahoo is a Victorian consultant who has done a lot of research and thinking about what makes a good educational app. Now he’s launched Better Apps in an effort to add another layer of assessment to the educational app development process. The idea is that the site allows developers to self-assess using Donahoo’s PAC21 methodology, which looks at the quality of process and design. What it does not do, however, is shed light on the quality of the underlying content and curriculum. The PAC21 methodology covers:
- Process helps “understand how well the design and content of the app work together to lead a learner through an experience.”
- Academic categorises apps into domains such as literacy, maths or science (and one assumes provides a handy vowel for the acronym).
- Creativity “assesses how well the app supports a learner to create something new and develop their creative skills”.
- 21 refers to the 21st Century skills of critical thinking, media literacy and network literacy.
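To make those four dimensions a little more concrete, here’s a purely illustrative sketch of what a PAC21-style self-assessment record might look like. Better Apps publishes no schema or scoring API, so the field names, the 1–5 scale and the naive averaging below are all invented for illustration – this is not how the site actually scores apps.

```python
# Hypothetical sketch only: Better Apps publishes no schema or scoring API.
# Field names, the 1-5 scale and the averaging are invented for illustration.
from dataclasses import dataclass


@dataclass
class PAC21SelfAssessment:
    process: int     # 1-5: how well design and content lead a learner through an experience
    academic: str    # domain the app is categorised into, e.g. "literacy", "maths", "science"
    creativity: int  # 1-5: how well the app supports a learner to create something new
    skills_21c: int  # 1-5: critical thinking, media literacy, network literacy

    def overall(self) -> float:
        """Naive average of the three scored dimensions (again, invented here)."""
        return (self.process + self.creativity + self.skills_21c) / 3


# An honest developer scoring their own counting app might produce:
counting_app = PAC21SelfAssessment(process=4, academic="maths", creativity=2, skills_21c=3)
print(f"{counting_app.academic}: {counting_app.overall():.1f}/5")  # maths: 3.0/5
```

The point of the sketch is simply that three of the four dimensions are scored judgments made by the developer themselves – which is exactly where the honesty problem discussed below comes in.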
I really like the thinking behind Donahoo’s methodology, and a thorough assessment using it would certainly illuminate the quality of an app’s design. If it were applied externally – which, at the commercial tiers beyond the site’s self-assessment, it can be – I think it would have enormous value. I’m not so certain that self-assessment will provide the same value unless the developers using it are astonishingly honest and open-eyed about the weaknesses of what they have built.
That level of honesty might be more likely were it not for the other element of Better Apps: a list of the top five apps in each of the academic areas. The site says “Our top 5 lists celebrate the best of the best”, but of course they appear to be the best of those who have paid for self-assessment, ranked on their own self-assessed answers. There would be real power to this idea if it were backed by customer reviews or, better still, panel reviews from domain experts. Better Apps does say that the top five lists are reviewed by “a Better Apps assessor”, but the process remains a bit self-referential given the underlying selling of consulting services.
As a tool for developers to improve the quality of what they provide, Better Apps – and more particularly the consulting behind it – seems to be an extremely positive step. An approach that encourages developers to properly think through the educational underpinnings of their work is laudable, and Donahoo has the background and depth of thinking to add real value. An app that can get objectively positive responses under Better Apps’ PAC21 methodology is likely to be better than one that cannot. Certainly, if I were developing an educational app I’d see spending $25 on a self-assessment as worthwhile.
Developers who go through the Better Apps process will likely end up with a better app – and more good educational apps is a fine achievement. This, however, won’t stop the proliferation of dubious educational apps, improve underlying content, or even really help parents sift through the never-ending lists. Those are challenges for another day.