London #TesterGathering with Michael Bolton

Earlier this year I was invited to go down to London to attend the monthly tester’s gathering organised by Tony Bruce. It was the first one I attended, and unfortunately I haven’t had the chance to go to any others, mainly due to lack of time; the start time of the meet up is also not ideal for someone who has to travel from Cambridge, where I am based at the moment. Since we also have testing meet ups happening regularly in Cambridge, my plan is to start attending those instead.

But this one, back in February, I couldn’t say no to, as it was my first opportunity to hear Michael Bolton speak about testing, so I thought I’d give it a go.

The meet up took place in the downstairs area of a London pub, which quickly became quite overcrowded as people gathered to hear Peter Marshall speak about overhauling a legacy web app that had only a handful of automated tests as a safety net.

From what I could gather, the audience was a bit of a mix: some test leads and managers, some testers (including people who had never attended a community event before), and even recruiters. The last group was a little bit annoying, but some of them were sponsoring the meet up, so I suppose it’s only fair they were “allowed” to be there doing their job.

It almost felt like the whole thing was staged, in that the first talk of the evening was partly about “automated tests”, which gave Michael Bolton an opening to have his say on the topic, one that I found fascinating and resonate with a lot.

If you haven’t heard of Michael Bolton before, you should head to his website and read the many posts he has written on the topic of test automation (or checking vs. testing, like this one).

Some of the main points Michael raised were that testing is different from checking, and that because the two are different (by his definitions, of course, which I agree with), automated testing is impossible. You can automate checking but you can’t automate testing, as machines are still not capable of the learning and evaluation that testing requires. It’s important to note, however, that checking is part of testing too.

Machines are very good at what they can do, and automated checking is certainly a technique worth having in your “testing toolkit”, but you must be careful when deciding when to use it: the right tool for the right job applies here.
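To make the distinction concrete, here is a minimal sketch of what an automated check looks like (the `discount` function and its expected value are hypothetical, purely for illustration). The machine applies a pre-programmed decision rule and reports pass or fail; it cannot notice anything it wasn’t told to look for, which is where the human testing part comes in.

```python
def discount(price: float, percent: float) -> float:
    """Hypothetical function under check: apply a percentage discount."""
    return round(price * (1 - percent / 100), 2)

def check_discount() -> bool:
    # An automated check encodes one explicit, machine-decidable
    # expectation and nothing else. Whether the behaviour is actually
    # *good* is a human judgement, i.e. testing.
    return discount(100.0, 20.0) == 80.0

print(check_discount())  # True
```

The check will happily pass forever while missing, say, a confusing error message or a negative-price edge case nobody thought to encode, which is exactly why checking alone isn’t testing.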

Michael also said he uses tools for a lot of things these days, such as data generation, logging, probing and test execution, and that tools can also serve as oracles for automated checks.

He mentioned a blog post he made on the motive for distinctions which you can access here.

The “Deep Blue” chess computer having a bug that allegedly led to it beating chess champion Kasparov was also referenced, though in all honesty I can’t remember the exact context.

Unfortunately I couldn’t stay for the whole of his talk, but the last piece of information I gathered before I went on my way was about using feelings as oracles when testing.

I’m looking forward to attending more community meet ups and will be sharing my takeaways, notes and/or comments here afterwards.
