Testing a Legacy Product – Get Busy Creating a TEST PLAN First!

Have you ever been hired to test a legacy product? Every organization, from start-ups to Fortune 500s, has its own definition of a legacy product. From a QA perspective, I would define a legacy product as a product with NO formal written requirements. Whatever product documentation exists is treated as the requirements document.

So, where do you start when it comes to testing a product with no requirements?

What Are We Testing?

I don’t believe there is any one right way to approach testing a product like that. However, all the different methodologies ultimately point us toward the same starting point – a game plan. A plan that defines what success looks like before we start testing the product. We need to document the testing requirements and call the result a Test Plan.

What makes a good test plan, and how do you build one?

A good test plan should state the testing goals along with pass or fail conditions for every product requirement. It should also include a lot of additional information – schedules, roles of team members, tools to be used, etc. But how do you start building a test plan when you have no defined goals and no product requirements?

The test plan could be broken up into two pieces:

1)    What do the end users of the application actually do? (What are our use cases?)

2)    What are the testing benchmark requirements? 

Identify Use Cases

With an application you have never seen before in your hands, how can you determine what a user is supposed to do with it? There are several approaches:

1)    The best approach is usually to ask someone in product marketing or product management – someone who has done the work to create the user scenarios that the development team has implemented.

2)    Ask for a Marketing (or Product) Requirements Document that has this information documented.

3)    Check the product log files captured in the application’s infrastructure. Ask your web or application server administration team whether this is information they capture. For example, they may use tools like Omniture, Coremetrics, WebTrends, or Google Analytics that can help identify the most common use cases (a log-analysis sketch follows the 80/20 note below).

4)    If there are no logs, it’s usually possible to turn on some sort of logging for a period of time (a day or two, perhaps) so that user activity can be captured and traced.

5)    You could look into getting access to actual end users by talking to internal resources – customer or technical support teams, sales representatives, or (again) marketing folks.

6)    If everything else fails, use your common sense. Take a look at the application itself and decide what it is that YOU might do if you were an end user. If you’re testing an online-store type of application, it’s a pretty good bet that users are going to browse the product catalog, add items to a shopping cart, and make a purchase or two. Online banking customers are probably checking their balances and paying bills… you get the idea.

Generally speaking, you should not waste time trying to identify every single use case in the application, since the bulk of user traffic will be captured by only a few transactions. Keep in mind the famous “80/20 rule”: roughly 20% of the transactions cover 80% of the application’s core functionality.
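
To make that concrete, here is a minimal sketch of mining a server access log for the few transactions that carry most of the traffic. It assumes an Apache/Nginx-style log in Common Log Format, and the file name is a hypothetical placeholder:

    # Minimal sketch: count requests per endpoint in an access log to find
    # the handful of transactions that dominate traffic (the "80/20 rule").
    # Assumes Common Log Format; the file name below is hypothetical.
    import re
    from collections import Counter

    LOG_FILE = "access.log"  # hypothetical path to the server's access log
    # Matches the request line, e.g.: "GET /cart/add?id=42 HTTP/1.1"
    request_re = re.compile(r'"(?:GET|POST|PUT|DELETE) (\S+)')

    counts = Counter()
    with open(LOG_FILE) as f:
        for line in f:
            match = request_re.search(line)
            if match:
                # Strip query strings so /cart?id=1 and /cart?id=2 group together.
                counts[match.group(1).split("?")[0]] += 1

    total = sum(counts.values())
    cumulative = 0
    for path, n in counts.most_common(10):
        cumulative += n
        print(f"{path:40s} {n:8d}  ({100 * cumulative / total:5.1f}% cumulative)")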

Establish Pass/Fail Requirements

Now that we have established the use cases/product requirements, it is important to list the detailed pass or fail criteria. For example, you might want to establish benchmark requirements for performance testing: are you measuring response time or page load time? How fast should a page load for any given number of concurrent users? (A sketch of such a check follows the list below.) So how do you validate your benchmark requirements?

1)    Involve various stakeholders such as product management, marketing, business analysts, etc.

2)    Cross-check with any contractual Service Level Agreements in place between your company and a customer, or between teams within your organization.
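
As an illustration, here is a minimal pass/fail sketch of one such benchmark in Python. The target URL, concurrency level, and 95th-percentile threshold are all hypothetical stand-ins for whatever your stakeholders or SLAs actually specify:

    # Minimal sketch: fetch a page from N concurrent "users" and check the
    # 95th-percentile response time against an agreed threshold.
    # The URL, concurrency, and threshold below are all hypothetical.
    import statistics
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    TARGET_URL = "http://example.com/"  # hypothetical page under test
    CONCURRENT_USERS = 10               # assumed concurrency level
    MAX_P95_SECONDS = 2.0               # assumed SLA threshold

    def timed_request(_):
        """Fetch the page once and return the elapsed wall-clock seconds."""
        start = time.perf_counter()
        with urllib.request.urlopen(TARGET_URL, timeout=30) as response:
            response.read()
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        timings = list(pool.map(timed_request, range(CONCURRENT_USERS * 5)))

    p95 = statistics.quantiles(timings, n=20)[-1]  # 95th percentile
    print(f"p95 response time: {p95:.2f}s")
    print("PASS" if p95 <= MAX_P95_SECONDS else "FAIL")

A real load test would use a dedicated tool, but even a crude check like this turns a vague goal (“pages should load fast”) into a pass/fail criterion you can write into the test plan.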

Now it’s time to execute your game plan and start reporting those bugs. Remember, a good test plan goes a long way! Who knows, one day your test plan may become “the only trusted document” for the legacy product!

Author: Madhu Jain (An explorer)

Similarities between software testing and wine tasting

An amusing post, which would be even more amusing after a glass of wine (of course, applicable to software testers only)!

This post shared seven similarities between testing software and tasting wine:

   1. Both need a staged approach to be successful
   2. The better skills you have the better the results are
   3. Knowing more will give you different findings
   4. Each product is different
   5. A lot of parameters influence the outcome
   6. The price doesn’t say anything about the quality
   7. The outcome depends on the testers/tasters

Software bashing – an effective way to flush out post-development bugs!

Software bashing has proven to be an effective method for flushing out post-development bugs. It is usually performed on finished products via exploratory testing, where product users are encouraged to “do their own thing”.

As per Wikipedia:

In software development, a bug bash is a procedure where all the developers, testers, program managers, usability researchers, designers, documentation folks, and even sometimes marketing people, put aside their regular day-to-day duties and pound on the product to get as many eyes on the product as possible.

A bug bash is a tool used as part of the test management approach. It is usually announced to the team in advance: the test management team sends out the scope and assigns testers as resources to assist with setup and to collect the bugs. Test management might pair this with small token prizes for good bugs found and/or small socials (drinks) at the end of the bug bash. One memorable bug bash prize was the chance to pie members of the test management team.

Companies like Microsoft organize these internal bug bash activities frequently to encourage their employees to use the products and find the bugs before customers do!

For example: http://blogs.msdn.com/b/windowsmobile/archive/2004/04/28/122435.aspx

There is a widely followed guide to running a bug bash put together by Scott Berkun, an author of books on software development and project management:

http://www.scottberkun.com/blog/2008/how-to-run-a-bug-bash/

I have personally reviewed two books on software testing where the central theme of effective testing was software bashing:

1. The Practical Guide to Defect Prevention by Marc McDonald and Ross Smith, who were responsible for testing and delivering operating systems at Microsoft

2. Changing the Game: How Video Games Are Transforming the Future of Business by David Edery, a researcher at MIT

Secrets for breaking a product in five minutes!

  1. Try to do what it is supposed to do
  2. Verify that Help matches functionality
  3. Let it run for quite some time, and look at resource utilization (see the sketch after this list)
  4. Play around with core functionality for five minutes
  5. Ask someone else for their first impression
  6. Perform any simple scenario as fast as possible
  7. Use keyboard-only for a while
  8. Have a look at all installed files
  9. Get rid of all traces of software (Uninstall)
  10. Run on a radically different platform, preferably an Error-Prone Machine
  11. Use credibly dirty data
  12. Provoke a bunch of failures
  13. Be a user who wants to destroy things for other users
  14. Compare with common behavior for the environment
  15. Review About dialog information
  16. Is it at least as good as a comparable product?
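
As a small illustration of item 3, here is a minimal sketch that samples a product’s CPU and memory use over a long run. It assumes the third-party psutil package is installed (pip install psutil), and the process name is a hypothetical placeholder:

    # Minimal sketch: watch a process's resource utilization over time.
    # Assumes psutil is installed (pip install psutil); the process name
    # below is hypothetical.
    import time
    import psutil

    PROCESS_NAME = "myproduct.exe"  # hypothetical name of the product under test

    def find_process(name):
        """Return the first running process with a matching name, or None."""
        for proc in psutil.process_iter(["name"]):
            if proc.info["name"] == name:
                return proc
        return None

    proc = find_process(PROCESS_NAME)
    if proc is None:
        raise SystemExit(f"{PROCESS_NAME} is not running")

    # Sample every 10 seconds; a steady upward memory trend over a long
    # run is a classic sign of a leak.
    while proc.is_running():
        cpu = proc.cpu_percent(interval=None)
        rss_mb = proc.memory_info().rss / (1024 * 1024)
        print(f"{time.strftime('%H:%M:%S')}  cpu={cpu:5.1f}%  rss={rss_mb:7.1f} MB")
        time.sleep(10)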

See the original post.