Spotted another Facebook UI bug since one of my previous posts. UI bugs are not generally considered showstoppers. However, if Usability and User Interface are your strongest product assets, these bugs should be considered release blockers. I wonder whether Facebook's QA team ever reproduced this issue in-house!
Test Automation and Continuous Integration (CI) go hand in hand. In the Agile world, quick feedback is critical. Test Automation provides feedback about the quality of the developed code, and CI accelerates that feedback!
What is Continuous Integration?
"Continuous Integration is a software development practice where members of a team integrate their work frequently; usually each person integrates at least daily, leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible. Many teams find that this approach leads to significantly reduced integration problems and allows a team to develop cohesive software more rapidly." – Martin Fowler
Continuous Integration is an ongoing and rapid process. This means that teams need to be able to react quickly when a build breaks or new tests need to be written. In order to integrate quality code into existing features, it is important to validate it against existing tests. If the tests fail, the code (build) should be rejected until either the bugs are fixed or the test cases are updated.
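This rejection gate can be sketched in a few lines of Python. This is a minimal illustration, not a real CI tool; `validate_build` and the inline test command are hypothetical stand-ins for your project's actual test runner:

```python
import subprocess
import sys

def validate_build(test_command):
    """Run the automated test suite; accept the build only if every test passes."""
    result = subprocess.run(test_command, capture_output=True, text=True)
    if result.returncode != 0:
        print("Build REJECTED: tests failed")
        return False
    print("Build ACCEPTED: all tests passed")
    return True

# Hypothetical test command: a trivially passing inline assertion.
validate_build([sys.executable, "-c", "assert 1 + 1 == 2"])
```

A real pipeline would wire this gate into the server so a failing exit code blocks the merge automatically.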
Below are the steps to deploy quality code using automated tests and CI:
1. Use a source code repository (Version Control System) – This is the first and required step toward continuous integration, e.g. GitHub, CVS, Perforce, SVN.
- Introduce a check-in policy – A check-in policy may be introduced after establishing the source code repository; the details depend on the team and the length of the sprints. First, it involves storing all necessary files in the repository. Second, each developer should check in completed pieces of work as often as possible. Frequent integration exposes problems early, so either the fix is easy or only a small part of the work needs to be rolled back.
2. Automate the build – The next step is to perform the build automatically; no more than one action should be required to start it. Either a developer starts it manually or the build is triggered automatically after each code check-in (commit). The build process can be scripted using tools such as Ant or CruiseControl.
3. Create an auto-deployable test environment and use an automated test suite – After each build, a new version of the binaries should be automatically deployed to a test server. After each deployment, kick off automated tests to validate the committed code against existing features. The automated tests act as a gatekeeper for new code before it is integrated. Running this validation takes time, but it saves time during manual/regression testing and improves overall code quality. A solid automated test suite acts as a safety net and filters out buggy code to some degree.
4. Report and integrate – Report the status of the automated testing. Having a pass/fail label helps in rejecting buggy code.
- Introduce a re-submission policy – Developers need to investigate the failures with the highest priority. They should either fix the bugs or update the test cases and go back to step 1 for re-submission. Once all the tests pass, the code should be integrated into the master repository and deployed to staging for further testing.
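The four steps above can be strung together as a single gatekeeping loop. The sketch below is illustrative only; `build`, `deploy_to_test`, and `run_tests` are hypothetical placeholders for the real build script, deployment step, and test suite:

```python
def run_pipeline(changeset, build, deploy_to_test, run_tests):
    """One CI iteration: build, deploy, test, then report and gate."""
    binaries = build(changeset)           # step 2: automated build
    deploy_to_test(binaries)              # step 3: auto-deploy to the test server
    passed, report = run_tests()          # step 3: run the automated test suite
    print("Test report:", report)         # step 4: report pass/fail status
    return "integrated" if passed else "rejected"  # step 4: gatekeeper

# Example run with stubbed-out steps: all tests pass, so the code is integrated.
status = run_pipeline(
    "feature-branch",
    build=lambda changeset: changeset + ".bin",
    deploy_to_test=lambda binaries: None,
    run_tests=lambda: (True, "12 passed, 0 failed"),
)
print(status)  # integrated
```

If `run_tests` reported a failure, the changeset would come back as "rejected" and, per the re-submission policy, the developer would fix the bugs or update the tests and start again from step 1.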
Do you wait for a feature to be delivered before testing, or do you test the product while it is being developed?
If a Quality Assurance Analyst wants to contribute to quality, they can do more than test a product after it is delivered. QAs have many other duties during software development. They know the product very well from regression testing. They are aware of the various test input data sets. They understand the changing business needs and requirements. They facilitate environment and data preparation. They challenge the architecture and analysis, coach developers, and define acceptance criteria to check the delivered product effectively against standards. A tester understands and contributes to the release and deployment mechanisms. Overall, more than anyone, testers are aware of a product's risks.
So, DO NOT underestimate your position in flagging risky feature development! Contribute to delivering high-quality products by participating right from the beginning.
With the exponential growth of web applications, combined with the ease of access via smart devices, it is important to maintain rigorous quality standards. A trivial bug on your website can take away the "wow" experience from your users and divert them to competitors. Any team responsible for driving the testing of web products should be able to distinguish the different facets of web app testing:
- Web User Interface Testing
- Web Usability Testing
- Web application Testing
Though all three might end up sharing a subset of test cases, each needs its own release checklist. For example, a Web User Interface Testing checklist could include:
- Colors – hyperlink color standards; field backgrounds and page background colors that are distraction-free.
- Contents – uniform font size, consistent content when switching between old and new pages, text alignment, text case.
- Images – graphics properly aligned, graphics optimized for loading, text wrapping around images.
- Instructions – tooltips, activity progress messages.
- Navigation – scroll bars, pop-up message buttons (Cancel/OK), a link to the home page on every page, keyboard accessibility.
- Look and Feel – consistent across all pages.
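Some of these checklist items can even be smoke-tested automatically. The toy check below uses only Python's standard-library HTML parser (a real suite would drive a browser with a tool such as Selenium) and verifies two items from the list: every page links back to the home page, and every image carries alt text. The page markup is invented for the example:

```python
from html.parser import HTMLParser

class UIChecklistParser(HTMLParser):
    """Scan a page for two checklist items: home link and image alt text."""
    def __init__(self):
        super().__init__()
        self.has_home_link = False
        self.images_missing_alt = 0

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and attrs.get("href") == "/":
            self.has_home_link = True          # Navigation: link to home page
        if tag == "img" and not attrs.get("alt"):
            self.images_missing_alt += 1       # Images: alt text present

def check_page(html):
    parser = UIChecklistParser()
    parser.feed(html)
    return {"home_link": parser.has_home_link,
            "images_missing_alt": parser.images_missing_alt}

# Invented sample page: has a home link, but one image lacks alt text.
page = '<a href="/">Home</a><img src="logo.png" alt="Logo"><img src="ad.gif">'
print(check_page(page))
```

Checks like this will never replace a human eye for look-and-feel issues, but they catch the mechanical regressions cheaply on every build.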
Here is an example of Web UI bugs from my recent browsing experience on Facebook: