I recently read an essay written by my sister. She’s on her second or third degree, training to become a teacher. She’s worked in education for over a decade, but until I proofread that essay, I had no worldly clue how much I didn’t know about learning.
I recently wrote about Learning about Learning - how I manage the list of things I want to learn, and how I structure that time. I had no idea how massive the body of knowledge on the science of learning and teaching is, or how much of it you need to know before becoming a teacher.
Her essay describes a recent handwriting intervention with some 8-year-olds. They’ve fallen behind, and the handwriting work they’re doing in class isn’t helping because they haven’t completed the foundational learning the classwork builds on. In this session, the children were first engaged in a fine-motor function competition, then reminded how to form letters. They then went off to practise writing lists of letters (something I remember my kids doing in early years, writing long strings of the same letter to practise form and muscle memory), then were given the criteria for well-formed letters again and asked to judge which of their own letters best met them.
It made me wonder whether I knew what teachers were doing with 8-year-olds, and whether I could apply any of it in a work context.
The first stage is practising related skills, which is something we do in testing. We love testing games: ones about problem solving, pattern recognition and odd ones out, and playing established games with twisted rules. The first stage also includes competition. I’ve seen great talks on gamification within testing, and how to do it right and wrong, and I remember how competing with other kids in the class pushed me to improve.
The next stage is practice. We’re not short of that in testing. Maybe we’re specialists and we get lots of practice in our specialism. And maybe we occasionally get to step outside our box and do other things because that’s what our team needs from us right now. Maybe we’re generalists and we spend a bunch of time on functional and behavioural testing, but switch into performance, security, accessibility and a half-dozen other things as they come up. We’re never short of practice, and we’re never short of opportunities to challenge ourselves to extend our skills here.
(If the previous paragraph doesn’t apply to you, consider if you’re happy in your current role.)
The novel bit of this intervention was the self-assessment. I definitely don’t do that. I’ve led a lot of teams where I’ve promoted peer review of test planning to get the best collaborative plan, avoid obvious gaps, and promote a bit of knowledge sharing about work and features. What I haven’t done is review my own testing after execution to see how well I achieved the value I set out to deliver. If I started with a concrete plan, I don’t check how closely I followed it. Because I’m a sensible, rational person, and if I made a decision at a point in time then I must have made it for good reasons. Right? I don’t ask whether, if I were doing the testing again, I would do anything differently (ignoring the obvious “skip straight to where I now know the bugs are”).
Thinking directly about the intervention, I never consider, for a given day, week or project, what the best testing I did was. I’m going to be keeping a much better eye on this.
There’s also a meta-level to this.
- Who sets the success criteria for testing? If it’s only me, then shouldn’t I always succeed?
- Are success criteria permanent, like “delivers risk information to the team”? Or are they deeper and more transient, like “validate that user goals can be achieved for a proof of concept, because we’ll go deeper for a Beta later”?
- Am I the best judge of my best testing? If my team judged it, what would they say? But if they aren’t testers, how should their opinion be weighed?
What I do know is that I’m going to spend more time looking back at my own testing, and marking my own work, rather than sending plans and results off to others and waiting for feedback.