Thursday 26 September 2013

What is a bug?

Sounds like a simple question, doesn't it? But is it?
Roll back a few years, when I was only a small way into my testing career, and I would have confidently answered the question very simply: a bug is anything that isn't quite right with the software. My job as a tester was to squeeze out every last obscure, difficult-to-find issue with the product - in fact, we used to have a competition to see who could find the most way-out, corner-case bug. Much kudos was obtained by being able to find the unfindable.
In those days, when asked at interviews what made me want to test, I used to give, with some pride, the response that I have come to fear and hate: "I love breaking things!" This was fairly indicative of the software industry at the time - teams of testers would take receipt of a dev-complete product and then try their damnedest to break it.
But thinking evolves, and thinking around test has evolved a fair way from what it used to be.
Agile integrates test alongside development, widens the responsibility for test from just the tester to the whole team, and focuses the priority towards features and delivery. What Agile also does is restrict the time we have with a feature to an iteration (or part of an iteration, depending on how quickly it can be coded). This dramatically reduces the time one can spend attempting to 'break' a feature. We need to apply some smart thinking to our testing: who tests what and where, more unit testing, less automated UI testing, targeted manual testing - all things that help to make sure we are not only testing, but testing smartly. Even with better approaches and distributed testing streams, we still have to start looking at our scope.
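To make the 'more unit testing, less automated UI testing' point a little more concrete, here is a minimal sketch of the kind of check I have in mind. The DiscountCalculator class and its rules are entirely made up for illustration; the point is simply that a check like this runs in milliseconds at the unit level with JUnit, leaving the slower UI automation and the targeted manual effort for the behaviour that genuinely needs them.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class DiscountCalculatorTest {

    // Hypothetical production class, inlined here to keep the sketch self-contained.
    static class DiscountCalculator {
        double apply(double price, int percent) {
            if (percent < 0 || percent > 100) {
                throw new IllegalArgumentException("percent must be between 0 and 100");
            }
            return price * (100 - percent) / 100.0;
        }
    }

    @Test
    public void typicalDiscountIsApplied() {
        // The common, everyday scenario customers actually hit.
        assertEquals(90.0, new DiscountCalculator().apply(100.0, 10), 0.001);
    }

    @Test(expected = IllegalArgumentException.class)
    public void outOfRangeDiscountIsRejected() {
        // A cheap boundary check that never needs a UI to exercise it.
        new DiscountCalculator().apply(100.0, -5);
    }
}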
Which brings me back to my original question - what is a bug? I am reminded of the old saying: if a tree falls in a forest but no one is around to hear it, does it make any noise? Without getting too philosophical, our bug exploration should be a bit like that. We can spend a great deal of time trying to find those hard-to-find, elusive bugs, but what is the point? They are hard to find and elusive (on the whole) for a reason: they are rare, unlikely scenarios that a customer is extremely unlikely to come across. We have taken the focus away from the typical, common usage scenarios in order to find the killer bug. Our focus has potentially sacrificed the many for the sake of the few (or none). So if a bug is unlikely to ever be found by a customer, is it really a bug?
Agile, in combination with the 'skilling up' of our test engineers, has provoked a shift in the way we approach ensuring a product is tested. No longer is 'play with it for a week and try to break it' a good enough plan for testing - it's unreproducible (meaning a cumulative investment in effort release after release to achieve the same testing), it's ineffective and it promotes the obscure-bug mentality. We now have to engage with our customers to learn how the product is used, we have to know how it is developed to identify areas of risk, we have to understand what tests are being written by others so we can test a bit smarter, and we have to put all these things together to ensure that our customers are getting a product that will work really well for them in the vast majority of their use cases.
Is it a bit risky? After all, a bug is a bug is a bug! Well, yes, it is a risk - but Testing 101 teaches that you will never find all the bugs, and all shipped software has a number of bugs waiting to be uncovered by customers. QA is all about managing the risks appropriately - so is it really worth chasing the elusive?
What then is a bug? A bug, to me, is something that causes (or will cause) our users to have issues using our software.

Wednesday 6 March 2013

The Evolution of Search


I recently watched a great video by the one and only James Whitaker. In it he talks about how computing is evolving, particularly the way our use of data has evolved as more and more of it has become available to us.
Whilst nothing he says is particularly controversial, he does manage to spot trends early (he has been on this particular bandwagon for a while now) and articulate those trends well.
Well worth a watch.
Whilst I agree with much of what he says (it is interesting how, when I look at my own habits, I am gravitating towards some of the usage patterns he describes), I think search and find will always have its use. But I will let you come to your own conclusions.
Enjoy.

Wednesday 30 January 2013

Agile Conference

The Cambridge Agile Conference is an annual event attracting Agile practitioners from across the region. It is two days of keynotes, tutorials, reports and workshops delivered on topics ranging from complexity theories and their application in development processes to the use of physical or virtual scrum boards. The conference attracts some keynote speakers of international acclaim; this year's were Dave Snowden from Cognitive Edge and Dan North, an independent consultant previously of ThoughtWorks. Also guest appearing at this year's event was Dave Hart of IBM fame... oh, that's me! Well, OK, I wasn't quite a keynote speaker, but I had volunteered, and been accepted, to give a talk about the role of developer in test - I will give an overview of that in a bit. This was my first time at the Agile conference, so I wasn't entirely sure what to expect, but the programme looked interesting, and we have been using Agile development methodologies for a number of years now in this little corner of IBM, so I was quite comfortable with the content.

Overall, the event was excellent and well worth attending. Other than the two keynotes, I attended sessions on:

  • Agile Innovation Practices - a talk more about how different styles of product can be innovative (new product, feature release and maintenance) than how to fit innovation into Agile teams.
  • Rapid Product Design In The Wild - a report on how one company took a product idea to a trade show, set up their stand like a dev team and developed product prototypes live during the trade fair.
  • iPlayer for XBox - a report on how Agile was applied in the teams working on implementing the iPlayer for the XBox platform.
  • Scrum, Physical or Virtual Walls - what the title says really.
  • Testing the Way to Faster Releases - a report about how a website development team reduced time to delivery by sharing the testing effort and making more frequent deliveries. 
  • DevOps - an introduction to some of the tools and practices utilised in the field of DevOps, bringing server deployment into the development realm.
  • Making Cross-Shore Teams Work - a report on the pains and discoveries a company made in making offshoring work.
All the talks were interesting, and I learnt a great deal - my top three are below - but I also found the event quite affirming. Many of the processes and practices we employ here in the Cambridge Lab were mentioned as good practice during the talks.

Anyway, my top three:

  1. Our estimating isn't as depressing as we might think! We are currently working in the complex phase of our product. Everything is new, lots is unknown, we are innovating all the time and our code base is still in its infancy. All these factors lead to a situation where estimating is very difficult, and often wrong. The advice given was to: a) try to reduce some of the unknowns beforehand - technical spikes etc.; b) embrace the complexity - there will be things you don't know, so factor that into your estimates as best you can; c) don't beat yourself up when you are wrong - you will be. The nature of this phase of product development lends itself to estimation failure; learn from it and improve the bits that can be improved.
  2. Cross-site teams take effort to make them work! This might seem really obvious, but so many people mess this up. It isn't enough to throw work at people and expect them to know what they are doing. Collaboration and coaching are key to making teams on multiple sites work.
  3. A good test strategy is key to quality deliveries. Again, this may seem obvious, but many teams simply throw untested code over to the QA function. The talk on testing for faster delivery was from a QA lead who is the only QA team member in her company! She essentially defines a strategy and polices that strategy (getting her hands dirty as well), but most of the testing is performed by developers in the teams. It was a great example of coming up with a process that a) ensured a quality delivery, and b) enabled the testing to be carried out at the point of source. This led to a much more streamlined and timely delivery process. The key thoughts here were: don't bottleneck your process by putting all the testing in one place; use the resources you have effectively; accept that developers are actually pretty good at testing their own code; and fit your strategy in with your company, team and product structure.
So, onto my talk. I presented on the role of the Developer in Test. The presentation took the form of an experience report and essentially detailed how our Agile practices and product roadmap had led us to the point of requiring a more technical testing role, a little about what the role involves, and some of the benefits it gives us. The talk was well received (no boos at least), with some good questions asked and a follow-up discussion over lunch. I believe passionately in this role, and it was good to be able to share that with other Agile practitioners. In fact, it was good to hear others thinking about introducing the role, and even better to hear about those already doing so (even if they didn't realise they were!).

 Overall, it was a good couple of days. I learnt much, and think I had something good to share with others. We are proud of our Agile practices here, and I am proud of the progress we have made in bringing test into the development teams and to the forefront of our processes.

James Whitaker Interview

James Whitaker is one of the best-known names in the software testing industry. He has been an engineering director at the likes of Google and, currently, Microsoft. Below is a link to a podcast in which he is interviewed. I recommend you have a listen, whatever your function. He makes some excellent points about the trends in the direction of software testing.

http://media.podcastingmanager.com/2/1/2/6/9/304245-296212/Media/softwaretestpodcast_ep041_121119.mp3

Feel free to start a conversation on any of the points raised in the comment threads below.

Dave

Agile Cambridge Keynote

Dave Snowden is a founder of the Cognitive Edge network, which specialises in creating frameworks and theories around processes and offering training on applying them. The link below is for the keynote he gave at the 2012 Agile Cambridge conference. I particularly like the modelling he does around complexity; I think there is much we can apply to the complexity and innovativeness of our current project. Anyway, enjoy.

http://www.infoq.com/presentations/Agile-Theory

We are Testers

I love the software industry! I love its energy, I love a challenge, I love learning new things, I love the fast pace of change. The one thing that has always irritated me, though, and the one thing that causes me despair, is the perception of the test role in our software life-cycle. Whilst our practices and processes have moved away from the old waterfall methodology, the way we view test hasn't! Let me try and explain what I mean.

In the waterfall world, development managers managed teams of developers who wrote code which turned into a product. Once that was done, it was thrown over to the test team, managed by a test manager. The product was undoubtedly late coming from development, which meant testing was squeezed and more test resource (predominantly manual) was thrown at it in the flawed reasoning that more resource = less time. What's wrong with that, you may ask? Sounds perfectly reasonable! No. The assumption is ultimately flawed: more resource doesn't necessarily mean less time, and it certainly doesn't mean better quality.

But those flawed assumptions are not my biggest problem with this approach; it is the de-humanisation of the tester and the devaluation of the test role that really bothers me here. By randomly tossing around test resource, they are basically saying it is a generic role where anyone can do any task, reducing the role to mindless task execution rather than the skilled role it can be! I think this not only devalues the role, but ultimately decreases the quality of the end product. What you don't see a great deal of in the industry is the 'loaning' of developer resource to random teams that might be falling behind on their development (it does happen, but it tends to be localised and to play to the skill-set of the developer in question), so why should test be any different?

The tidal wave of movement towards the Agile process has created integrated development teams where a team has representation from all disciplines. Yet we still see the siloing of test (when companies or teams say they are doing Agile, they usually mean they are doing a form of Agile - the thing that usually gets left in the old world is test). I have seen this my whole testing career. It happens in IBM, but IBM isn't the odd one out - interestingly, the companies that are seen to produce good, quality software (Google, Netflix, and yes, Microsoft) are the companies that treat test with the respect it deserves.

Testers are well trained, qualified and often specialist individuals, not keyboard monkeys! I long for the day in which this is fully recognised in our industry. When it is, and when we finally resource the test role correctly in our teams and companies, we will start to see a vast improvement in the quality of the products we produce.

Innovation, Agile and Test

It's a funny thing: we adopt new life-cycle methodologies (such as Agile) to be current and to have a process that offers the most benefit, but we then go and implement it in a way that stifles our engineering practices in exactly the same old manner! What, you may ask, am I talking about? Innovation! One of the things that tends to disappear when dealing with Agile, deadlines and tough deliveries (at least in my experience) is innovation - the freedom to try new things in new ways with new technologies - little explorations into the unknown.

This phenomenon is mostly associated with development - with not being able to take risks that might fail with new technologies, designs etc. - but I think to end the association of innovation at the developer level is wrong, and it undersells the importance of testing. That's right, I said testing and innovation in the same sentence! Things have come a long way since the days of fully manual teams of testers waiting to get delivery of a product so they can test it. We now have test occurring at all stages of the product life-cycle: unit testing, integration testing, acceptance testing, manual testing, automated UI testing, automated under-the-UI testing (there is a small sketch of what I mean by that at the end of this post) - the list goes on. The point, though, is that none of this testing would happen if we weren't innovating in the testing of our products as well as in their development.

Which brings me to the point already made: we tend to stifle our processes, pushing innovation out in favour of predictable and functional features, and as many of them as we can fit into a release cycle. I look around at some of the testing and technology blogs produced by some of the big names in the software business - Google, Amazon, Microsoft etc. - and find myself getting innovation envy. I see them implementing cool and useful new things and find myself wondering not how it works, but how they found the time to invest. So if a) I have any readers and b) they have some ideas on idea creation and implementation, I would be very pleased to hear them.
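And, as promised, because 'automated under the UI testing' can sound a bit abstract, here is a minimal sketch of what I mean: a check that exercises a service's HTTP interface directly rather than driving a browser. The endpoint and port are hypothetical, stand-ins for whatever your own product actually exposes.

import java.net.HttpURLConnection;
import java.net.URL;

public class UnderTheUiCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical status endpoint - substitute whatever your product actually exposes.
        URL url = new URL("http://localhost:8080/api/status");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");

        int code = conn.getResponseCode();
        conn.disconnect();

        // The check itself: the service should answer 200 OK, no browser involved.
        if (code != 200) {
            throw new AssertionError("Expected HTTP 200 from /api/status but got " + code);
        }
        System.out.println("Under-the-UI check passed: service responded with 200 OK");
    }
}

Checks like this sit between unit tests and full UI automation: they exercise real, deployed behaviour, but they stay fast and stable because no browser is involved.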