Different Facets of Testing Web Applications (and Facebook bugs)

Web UI bug # 1: An empty space for suggested friends.

With the exponential growth of web applications, combined with the ease of access via smart devices, it is important to maintain rigorous quality standards. A trivial bug on your website can take away the “wow” experience from your users and divert them to your competitors. Any team responsible for driving the testing of web products should be able to differentiate the different facets of web app testing:

  1. Web User Interface Testing
  2. Web Usability Testing
  3. Web Application Testing

Though all three may end up sharing a subset of test cases, each needs its own release checklist. For example, a Web User Interface Testing checklist could include:

  1. Colors – hyperlink color standards; field backgrounds and page background colors should be distraction-free.
  2. Contents – uniform font size, consistent content when switching between previous and new pages, text alignment, text cases.
  3. Images – graphics properly aligned, graphics optimized for fast loading, text wrapping around images.
  4. Instructions – tool tips, activity progress messages.
  5. Navigation – scroll bars, pop-up message buttons (Cancel/OK), a link to the home page on every page, keyboard accessibility.
  6. Look and Feel – consistent across all pages.
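Parts of such a checklist can be automated. Below is a minimal sketch, in Python using only the standard library, of a check for item 1: it scans a page's hyperlinks for inline color styles and flags pages where links use more than one color. The function names are illustrative, and the inline-style approach is a simplifying assumption; a real check would inspect computed CSS.

```python
from html.parser import HTMLParser

class LinkColorChecker(HTMLParser):
    """Collects the inline 'color' style of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.link_colors = set()

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        style = dict(attrs).get("style", "")
        for rule in style.split(";"):
            if ":" in rule:
                prop, value = rule.split(":", 1)
                if prop.strip().lower() == "color":
                    self.link_colors.add(value.strip().lower())

def links_use_one_color(html):
    """True when all hyperlinks on the page share a single color."""
    checker = LinkColorChecker()
    checker.feed(html)
    return len(checker.link_colors) <= 1

page = '<a style="color: #00f" href="/">Home</a><a style="color: red" href="/x">X</a>'
print(links_use_one_color(page))  # False: two different link colors
```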

Here is another example of a Web UI bug from my recent browsing of Facebook:

Web UI bug # 2: SORT text misaligned

Similarities between software testing and wine tasting

An amusing post, which would be even more amusing after a glass of wine (of course, applicable to software testers only)!

This post shared seven similarities between testing software and tasting wine:

   1. Both need a staged approach to be successful
   2. The better skills you have, the better the results are
   3. Knowing more will give you different findings
   4. Each product is different
   5. A lot of parameters influence the outcome
   6. The price doesn’t say anything about the quality
   7. The outcome depends on the testers/tasters

From the Agile Manifesto – Direct Communication is the key!

Every team operates differently when it comes to software development using an Agile process. However, these two sentences taken directly from the Agile Manifesto can help teams work together effectively and enable QAs to shorten the product feedback loop:

1. “Business people, developers and QAs must work together daily throughout the project.”
2. “The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.”

Test Case Management Comparison

Every software testing project needs to be organised, and if you really want to be efficient, some kind of system is needed to manage all the requirements and data. Once an Excel spreadsheet becomes too much for your test team to manage, you may want to start looking into a test management solution.

What is a complete Test Case? 

A test case is a set of conditions or variables under which a tester determines whether an application or software system is working correctly. The mechanism for determining whether the system has passed or failed such a test is validation against a requirement or use case. Test cases are often referred to as test scripts, particularly when written, and written test cases are usually collected into test suites.
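As a rough illustration, a written test case and its pass/fail validation can be modeled in a few lines of Python. The field names and case IDs below are hypothetical, chosen only to mirror the definition above.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """A minimal written test case: conditions, steps, and expected outcome."""
    case_id: str
    preconditions: list
    steps: list
    expected_result: str
    actual_result: str = ""

    def passed(self):
        # Validation: the case passes when the actual result matches the expected one.
        return self.actual_result == self.expected_result

# A test suite is simply a collection of written test cases.
suite = [
    TestCase("TC-001", ["user logged in"], ["open profile page"],
             expected_result="profile loads", actual_result="profile loads"),
    TestCase("TC-002", [], ["click Sort"],
             expected_result="list sorted", actual_result="SORT text misaligned"),
]
print([tc.passed() for tc in suite])  # [True, False]
```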

What is Test Case Management?

Test Case Management means maintaining test cases through a defined process in a central repository. This can involve version control of individual test cases, combining test cases into test scripts, and tracking test results against the test cases that have been executed.

Here is a brief comparison of various popular test case management tools:


How do you choose your test automation tool?

Things to look for in an automation tool:

1. Usability:

    • Object recognition – How does the tool recognize elements in your application under test? Does it simply rely on image-based testing, or can it identify each element individually and get/set values? If so, does it recognize all of your controls, even custom ones?
    • Object identification – How does the tool identify elements? Can you uniquely identify several similar elements? How fine-grained is the search method? Can you search using XPath, regex, etc.?
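To make the object-identification questions concrete, here is a small Python sketch using only the standard library: it uniquely identifies one button among several similar ones via an XPath attribute predicate, and filters elements with a regex. The toy markup is invented for illustration; a real automation tool would run such queries against the live DOM of the application under test.

```python
import re
import xml.etree.ElementTree as ET

# A toy page with several similar elements.
page = ET.fromstring("""
<form>
  <button id="ok-btn">OK</button>
  <button id="cancel-btn">Cancel</button>
  <button id="retry-btn">Retry</button>
</form>
""")

# Fine-grained XPath search: pick out exactly one of the similar buttons.
ok = page.find(".//button[@id='ok-btn']")
print(ok.text)  # OK

# Regex search over attributes, for tools that support pattern matching.
buttons = [b for b in page.iter("button") if re.match(r".*-btn$", b.get("id"))]
print(len(buttons))  # 3
```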

2. Performance and Reliability:

    • Performance – How long does the tool take to identify elements?
    • Reliability – How easy is it to create reliable scripts that run repeatedly without failing?
    • Stability – How long has the tool been around? Is it still being developed and kept up to date with the latest technologies, for example Firefox’s rapid release cycle?

3. Test Development:

    • Platforms – Which operating systems and device types does the tool support?
    • Language – Which languages can tests be written in? Does the language support threading, file manipulation, database access, XML manipulation, service interaction, registry interaction, etc.?
    • Test control – Can you control test execution from the tool? Can you create test suites? Can you run from the command line? Can you execute tests remotely? Can you use other test frameworks such as NUnit?
    • Screenshots – Can the tool capture screenshots of your application under test? Of the whole desktop? Of individual elements?
    • Test Data – Does the tool support binding test data to test scripts?
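Several of these test-development questions (test suites, command-line execution, binding test data to scripts) can be illustrated with Python's built-in unittest framework. The title_case function below is a hypothetical stand-in for the behaviour under test.

```python
import unittest

def title_case(text):
    """Stand-in for the application behaviour under test."""
    return text.title()

class TitleCaseTests(unittest.TestCase):
    # Test data bound to a single test script: (input, expected) rows.
    DATA = [("hello web", "Hello Web"), ("qa team", "Qa Team")]

    def test_title_case(self):
        for raw, expected in self.DATA:
            with self.subTest(raw=raw):  # one sub-result per data row
                self.assertEqual(title_case(raw), expected)

# Test control: build a suite and run it programmatically; the same tests
# can also be run from the command line with `python -m unittest`.
suite = unittest.TestLoader().loadTestsFromTestCase(TitleCaseTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```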

4. Test Code integration:

    • Version control – Does the tool provide script versioning, or can you use industry versioning systems such as Subversion?
    • Continuous Integration – Can you set the tool up with a CI system such as CruiseControl, Hudson or Jenkins?
    • Re-Use – Can you reuse parts of scripts in other test projects for other products you are testing?
    • Runtime – What are the requirements to run tests on different machines?
    • Reporting – Does the tool generate reports or do you have to add your own custom reporting library?

5. Cost and Support:

    • Cost – Is the tool free or paid? If paid, what is the licensing model?
    • Learning curve – What help/tutorials does the tool provide? How easy is the tool to use? Does it use a mainstream language such as .NET or Java, in which case there is plenty of official documentation, plus forums and other help online? Developers in your office may also be able to help.
    • Support – How comprehensive is the support for the tool? How quick do support get back to you? How quick will bugs in the tool be fixed (if any)?  Is there a voting system to decide which bugs get fixed?
    • Web support – Can the tool test websites?  Which browsers does it support?
    • Native Support – Can the tool test applications running natively on your host?  Make sure you plan for the future.
    • Known Issues – Look around forums, are there any common known issues?

See original post

Software bashing – an effective way to flush out post-development bugs!

Software bashing has proven to be an effective method of flushing out post-development bugs. It is usually performed on finished products via exploratory testing, where product users are encouraged to “do their own thing”.

As per Wikipedia:

In software development, a bug bash is a procedure where all the developers, testers, program managers, usability researchers, designers, documentation folks, and even sometimes marketing people, put aside their regular day-to-day duties and pound on the product to get as many eyes on the product as possible.

A bug bash is a tool used as part of a test management approach, and it is usually announced to the team in advance. The test management team sends out the scope and assigns testers as resources to assist with setup and to collect bugs. Test management might pair this with small token prizes for good bugs found and/or small socials (drinks) at the end of the bug bash. Another interesting bug bash prize was pieing members of the test management team.

Companies like Microsoft frequently organize these internal bug bash activities to encourage their employees to use the products and find the bugs before customers do!

For example: http://blogs.msdn.com/b/windowsmobile/archive/2004/04/28/122435.aspx

There is a widely followed guide to running a bug bash, put together by Scott Berkun, an author of books on software testing and development:


I have personally reviewed two books on software testing in which software bashing was the central theme of effective testing:

1. The Practical Guide to Defect Prevention by Marc McDonald and Ross Smith – responsible for testing and delivering operating systems at Microsoft

2. Changing the Game: How Video Games Are Transforming the Future of Business by David Edery – researcher at MIT

Does continuous testing feedback slow the software development progress?

I guess it depends on whom you ask. Here is a QA’s take on the subject:

  • When testing provides continuous feedback, developers understand what is good, what is unreliable, and also what is important.
  • Continuous corrections towards the goal speed up the time to completion.
  • Testing isn’t a role with its own goal; QAs are there to help. The most important help is speed, and the second is quality (acceptable quality can be reached in many ways; with testing, it is reached faster).

Someone might object and say, “on the contrary, with all their phases and complaints, testing makes things much slower!” That may hold true for some teams. Still:

“Good testing makes software projects go faster”

See original post