When we talk about UI automation for browsers, the default tool that comes to mind is Selenium. There are different wrappers around Selenium like Protractor, Nightwatch, Selenium WebDriver etc. All of them are built on top of Selenium and inherit its advantages and disadvantages. All of them control the browser by executing remote commands over the network. We will most probably need additional libraries and frameworks to make full use of Selenium.

Cypress.io is an open source UI automation tool which can be used for UI testing. Unlike the others, it is not built on top of Selenium. Instead, it has a completely new architecture and runs in the same run loop as the browser. Because it runs inside the browser, it has access to almost everything happening inside and outside the browser. It is a complete set of tools that you will need to create and run E2E UI automation test cases. The team who developed Cypress made a few design trade-offs which give Cypress some disadvantages. There is no single right tool for automation; the choice will depend on multiple factors.

## Installing Cypress

We can install Cypress using npm. Run the below command inside the project folder to install Cypress and all its dependencies.
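
A minimal sketch of the install command, assuming npm is already available and the project has a package.json:

```
npm install cypress --save-dev
```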

Another way of using Cypress is to download the zip file from here. Just extract the file and start using it.

## Opening Cypress

Cypress can be opened by running the node_modules/.bin/cypress open command in a terminal from the project folder.

If you have downloaded the zip file, you can open cypress by double clicking on the cypress executable.

## Write your first test

Cypress already comes with a predefined example based on the KitchenSink application, which will help you identify the various commands that can be used. It can be found under cypress\integration\example_spec.js.

Let us look at how to write a new test.

Create a new test script file called demotest.js under {project_location}\cypress\integration. Open up the file and add the below code to it.
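
Below is a minimal sketch of what demotest.js could look like. The Google selectors used here (input[name="q"] and #search a) are assumptions and may need adjusting if the page markup differs.

```
describe('Search for cypress.io', () => {
  it('opens the first search result', () => {
    // load Google
    cy.visit('https://www.google.com');
    // type the search term and submit with Enter
    cy.get('input[name="q"]').type('cypress.io{enter}');
    // click the first result link
    cy.get('#search a').first().click();
  });
});
```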

This code opens the browser, loads Google, searches for cypress.io and opens the first link.

Note: If a cross-origin policy error is shown, follow the workarounds mentioned.

## How does cypress.io compare with Selenium

As mentioned earlier, there is no right or wrong tool for automation. It all depends on suitability for the task at hand. Let us compare a few features where cypress.io and Selenium differ.

• Cross-browser support - At this point Selenium has more cross-browser support than Cypress. Cypress supports only Chrome variants. You can read about them here
• Debugging capability - This is strong in Cypress. I found that error messages are more detailed and in fact provide hints about how to fix the problem. You also have full access to Chrome dev tools.
• Keypress - As of now, Cypress does not support pressing the Tab key. You can read about it here.
• Since Cypress is built on Node.js, we can chain commands together
• Cypress.io has built-in support for test frameworks and assertion libraries like Mocha, Chai etc
• Cypress.io test cases can be written in JavaScript
• Cypress.io handles wait times better than Selenium
• Cypress.io has hot reloading of test cases. When we make changes to test cases and save them, the tests rerun by themselves. This is very effective in reducing the time spent on building and rerunning Selenium-based test cases.
• Cypress.io has built-in time travel and screenshots which help us go back to failure points and debug. It also captures the before and after state for all actions

I will keep adding to this as I play more with Cypress.

In a previous blog post, we saw how to use the BDD format for writing test cases in Postman. The most important part of writing tests in Postman is understanding the various features available, so let us explore the options. The examples specified in the Postman documentation have a lot of information about how to set up Postman BDD, use Chai-HTTP assertions, create custom assertions and use before and after hooks. Please import them into Postman and try them yourself to familiarise yourself with Postman BDD. Below are only a few examples from them.

Postman BDD makes use of the Chai Assertion Library and Chai-HTTP. We have access to both libraries and to the Postman scripting environment for writing test cases. Chai has two styles of assertions.

• Expect/should for BDD
• Assert for TDD

Both styles support a chainable language to construct assertions. We can use either of them to write Postman test assertions. If you need details of all chainable constructs, please refer to the Chai documentation. The major ones we may use in Postman tests are:

• Chains - to, be, been, is, that, which, and, has, have, with, at, of, same, but, does

• not - negates all conditions that follow

• any
• all
• include
• ok
• true
• false
• null
• undefined
• exist
• empty
• match(re[, msg])

The Chai-HTTP module provides various assertions. Read through its documentation here for details. Below are the main commands at our disposal for validation.

• .status(code)
• .header(key[, value])
• .headers
• .ip
• .json / .text / .html
• .redirect
• .param
• .cookie

Postman BDD provides a response object on which we do most of our assertions. It has all the information we need, like response.text, response.body, response.status, response.ok and response.error. Postman BDD automatically parses JSON and XML responses, so there is no need to call JSON.parse() or xml2json(); response.text holds the unparsed content. It also has automatic error handling, which allows the remaining tests to continue even if one fails.

Examples of various assertions done on the response object are below.
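
A few illustrative assertions, as a sketch based on the Chai-HTTP style used by Postman BDD; the specific properties checked (args, the /postman/ pattern) are assumptions for demonstration only:

```
// status code and ok flag
response.should.have.status(200);
response.ok.should.be.true;

// content type and headers
response.should.be.json;
response.should.have.header('content-type');

// parsed body (Postman BDD parses JSON/XML automatically)
response.body.should.have.property('args');

// raw, unparsed body text
response.text.should.match(/postman/i);
```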

If we use the above assertions in proper BDD format, it will look like below.
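
A sketch of the same assertions wrapped in Mocha-style describe/it blocks, which is how Postman BDD structures tests in the Tests tab:

```
describe('GET request', () => {
  it('should return a successful JSON response', () => {
    response.should.have.status(200);
    response.should.be.json;
  });

  it('should contain the expected fields', () => {
    response.body.should.have.property('args');
  });
});
```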

In a previous blog post, we discussed how to use Postman and how to run collections using Newman and a data file. If you haven’t read that, please have a read through it first.

In the previous examples, we discussed writing tests/assertions in Postman. We followed normal JavaScript syntax for writing test cases, including asserting various aspects of the response (like content, status code etc). Even though this is a straightforward way of writing tests, many people would like to use an existing JavaScript test library like Mocha. They can use the Postman BDD library.

Let us take a deep dive into how to set up Postman BDD.

Note: It is assumed that the user already has Postman and Newman installed on their machine along with their dependencies.

## Installing Postman BDD

Installation is done by triggering a GET request and setting the response as a global environment variable.

• Create a GET request to http://bigstickcarpet.com/postman-bdd/dist/postman-bdd.js
• Set a global environment variable by adding the below command in the Tests tab: postman.setGlobalVariable('postmanBDD', responseBody);

Once we trigger the above GET request, Postman BDD will be available for use. We can make use of Postman BDD features in a request by running the command eval(globals.postmanBDD);

## Writing Tests

The Postman BDD library provides us with the flexibility to write tests and assertions using fluent asserts, combining the best features of Chai and Mocha. In order to demonstrate this, I am using the sample tutorial that comes with the Postman client.

Open up the sample request in the Postman Tutorial folder under Collections. It will already have some tests predefined in the Tests tab. Remove them and add the below test to it.
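
A sketch of what the replacement test could look like; the assertions here (status 200, JSON body) are assumptions about the tutorial request's response:

```
// load Postman BDD from the global variable set up earlier
eval(globals.postmanBDD);

describe('Postman tutorial request', () => {
  it('should respond successfully', () => {
    response.should.have.status(200);
  });

  it('should return a JSON body', () => {
    response.should.be.json;
  });
});
```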

Note: You can find more details of the various types of asserts at http://www.chaijs.com/api/bdd/

Once this is done, trigger the request.


Gulp is a toolkit for automating painful or time-consuming tasks in your development workflow, so you can stop messing around and build something. Gulp can be used to create a simple task to run automated test cases.

First, we will create a package.json file for this project. This can be done with the below command from the project folder. It will prompt you to enter the information required for creating the package.json file.

npm init


Once this is done, install gulp. It can be done with the below command, which will add gulp as a dev dependency.

npm install --save-dev gulp


In order to run acceptance test cases, we will need to install the NUnit/xUnit test runner plugins. This can be done with one of the below commands from the root folder.

npm install --save-dev gulp-nunit-runner
OR
npm install --save-dev gulp-xunit-runner


Detailed usage of the above test runners is available here.

Once the above are installed, we need to create gulpfile.js inside the root folder. This file will have the details of the various gulp tasks.

Sample usage of the test runner is below. Insert this code into gulpfile.js.
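
A minimal sketch of such a gulpfile, following the gulp-nunit-runner usage; the executable path and the dll glob are placeholders for your local setup:

```
var gulp = require('gulp');
var nunit = require('gulp-nunit-runner');

gulp.task('unit-test', function () {
  // pass only file names (read: false), not file contents, to the runner
  return gulp.src(['./**/*.acceptancetest.dll'], { read: false })
    .pipe(nunit({
      // path to the NUnit console runner on the machine
      executable: 'C:/nunit/bin/nunit-console.exe'
    }));
});
```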

• {read: false} means gulp will pass only the file names, not the entire file contents.
• executable is the path to the NUnit console runner, which should already be available on the machine.
• gulp.src is the path to the acceptance test solution dll. Since we use wildcard characters, we may have to modify this path to reflect the exact path of the dll (something like ./**/Debug/Project.acceptancetest.dll).

Once we have the above in gulpfile.js, it can be run with the below command.

gulp unit-test


The output of the above command will be something like

C:/nunit/bin/nunit-console.exe "C:\full\path\to\Database.Test.dll" "C:\full\path\to\Services.Test.dll"


Note: If it complains about a missing assembly, the path to the acceptance test solution is incorrect. Retry after fixing the path.

gulp-nunit-runner provides a lot of options to configure the test run, like selecting test cases based on category, creating output files etc. Detailed options can be found here.

Below is an example with a few options.
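
A sketch of the same task with a few NUnit options added; the category name, output folder, result file name and config value below are placeholders (see the gulp-nunit-runner documentation for the exact option names):

```
gulp.task('unit-test', function () {
  return gulp.src(['./**/*.acceptancetest.dll'], { read: false })
    .pipe(nunit({
      executable: 'C:/nunit/bin/nunit-console.exe',
      options: {
        where: 'cat == test',          // run only tests with category "test"
        work: 'TestResultsFolder',     // folder for output files
        result: 'TestResult.xml',      // write an XML report of the run
        config: 'Release'              // configuration to run
      }
    }));
});
```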

• where - selects the category which needs to be run
• work - creates a folder with the specified path/name for output files
• result - creates the test results in XML
• config - selects the config which needs to be run

If we run gulp unit-test now, it will execute only the test cases having the category test. It will create a folder named TestResultsFolder with an XML report of the test run inside it. The folder will be created in the root, where we have gulpfile.js.

TeamCity is a Java-based build management and continuous integration server from JetBrains. Very often, we will have to extract various metrics from TeamCity for tracking and trend analysis. TeamCity provides a versatile API for extracting various metrics, which can then be manipulated or interpreted as we need.

Below are basic API calls which can be used for extracting metrics. Please note that the TeamCity API is powerful enough to do much more than extract data; for this blog post, however, I am focussing on the metrics extraction part alone. All of these are GET requests to the TeamCity API with valid user credentials (use any id/password which can access TeamCity).

1. Get List of Projects - http://teamcityURL:9999/app/rest/projects
2. Get details of a project - http://teamcityURL:9999/app/rest/projects/(projectlocator) Project locator can be either “id:projectID” or “name:projectName”
3. Get List of Build configurations - http://teamcityURL:9999/app/rest/buildTypes
4. Get List of Build configurations for a project - http://teamcityURL:9999/app/rest/projects/(projectLocator)/buildTypes
5. Get List of Builds - http://teamcityURL:9999/app/rest/builds/?locator=(buildLocator)
6. Get details of a specific Build - http://teamcityURL:9999/app/rest/builds/(buildLocator) Build locator can be “id:BuildId” or “number:buildNumber” or a combination of these like “id:BuildId,number:buildNumber,dimension3:dimensionvalue”. We can use various different values for these dimensions. Details can be found in the TeamCity documentation.
7. Get List of tests in a build - http://teamcityURL:9999/app/rest/testOccurrences?locator=build:(buildLocator)
8. Get individual test history - http://teamcityURL:9999/app/rest/testOccurrences?locator=test:(testLocator)
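
As an illustration, below is a minimal Node.js sketch of calling one of these endpoints; the host, port, credentials and build locator are placeholders, and the Accept header asks TeamCity to return JSON instead of the default XML:

```
const http = require('http');

const options = {
  hostname: 'teamcityURL',
  port: 9999,
  path: '/app/rest/builds/?locator=buildType:MyBuildConfiguration',
  headers: {
    // basic authentication with any id/password that can access TeamCity
    Authorization: 'Basic ' + Buffer.from('user:password').toString('base64'),
    Accept: 'application/json'
  }
};

http.get(options, (res) => {
  let body = '';
  res.on('data', (chunk) => (body += chunk));
  res.on('end', () => console.log(JSON.parse(body)));
});
```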

Recently I created a Node.js program to extract the below metrics by chaining some of the above API calls.

• Number of builds between any two given dates and their status
• Details of the number of test cases and their status, pass percentage, fail percentage etc for each build
• The same details for the entire period
• Trend of test progress, build failures etc between those dates
• An output JSON with cumulative counts of passed/failed/ignored builds, passed/failed/ignored test cases, percentage of successful builds, frequency of pull requests and their success rates etc.

### What is Accessibility testing?

It is a kind of testing performed to ensure the application under test is usable by people with disabilities. One of the most common accessibility tests for web applications is to ensure it is easily usable by people with vision impairment. They normally use screen readers to read the screen and use the keyboard to navigate.

The Web Content Accessibility Guidelines (WCAG) list the guidelines and rules for creating accessible websites. There are various browser extensions and developer tools available for scanning web pages to find obvious accessibility issues. aXe is one of the widely used extensions; details of aXe can be found here. Once the browser extension is installed, you can analyze any web page to find accessibility issues. There is also a JavaScript API for aXe core.

I recently came across axe-selenium-csharp, which is a .NET wrapper around aXe. It is relatively easy to set up and use. Below are the steps.

1. Install the Globant.Selenium.Axe NuGet package for the solution. This will add a reference to the dll.
2. Import the namespace using Globant.Selenium.Axe.
3. Call the “Analyze” method to run an accessibility check on the current page.

Automated accessibility testing is NOT a completely foolproof solution. We will still require someone to scan the page using screen reader software later. But it helps shift accessibility testing to the left, enables more frequent runs and reduces the need for regression.

In previous blog posts, I explained how to create a JSON response in Mountebank. You can read about that here and here. Recently, I had to test what happens to an application if a downstream API response is delayed for some time. Let us have a look at how we can use Mountebank to simulate this scenario.

Mountebank supports adding latency to a response by adding a behaviour. You can read about that here. Let us try to implement the wait behaviour in one of the previous examples. This is a slight modification of the files used in the examples mentioned here and here. You can clone my GitHub repo and look at “ExamplesForWaitBehaviour” for the files.

The only change we need is to add a behavior to the response. This is added in the “CustomerFound.json” file. After injecting the file, we need to add a behavior that waits 5000 milliseconds.
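
A sketch of what the relevant part of CustomerFound.json could look like with the wait behavior added; the response body shown here is only a placeholder for the customer payload used in the earlier examples:

```
{
  "is": {
    "statusCode": 200,
    "headers": { "Content-Type": "application/json" },
    "body": { "customerId": "12345", "name": "Sample Customer" }
  },
  "_behaviors": {
    "wait": 5000
  }
}
```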

Now run Mountebank. If you are using the GitHub repo, you can do this by running the RunMounteBankStubsWithExampleForWait.bat file. Otherwise, run the below command inside the directory where Mountebank is available. If needed, modify the path to Imposter.ejs as required.
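
A sketch of the command, assuming mb is installed; depending on how the imposter file is defined, extra flags (for example --allowInjection) may also be needed:

```
mb --configfile Imposter.ejs
```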

When we trigger a request via Postman, we will get a response after the specified delay plus the normal time for getting a response. Have a look at the response time in the below screenshot; the response time is more than 5000 ms.

I was not active on this blog over the past month due to multiple reasons, both personal and professional. Following are some highlights from the past month.

### ISTQB Test Automation Engineer

As mentioned in a previous post, I had a chance to attend the ANZTB SIGiST conference in August 2017. There were some good talks about various certifications offered by ISTQB/ANZTB, and ISTQB Test Automation Engineer was one among them. I decided to give that a try, spent some time over the past month preparing based on the syllabus and gave it a shot. Fortunately, I passed the exam with good marks.

### Traffic to blog

I had an interesting observation when I looked into Google Analytics for the blog. The traffic to this blog has grown over ten times compared to previous months. I analyzed the data and found that my blog and GitHub repo for Mountebank examples are mentioned on the Mountebank website, which resulted in more traffic to the blog. I hope others also benefit from my experience with Mountebank and the tutorials I have on this blog. Hopefully, this will give me the motivation to write more.

An agile retrospective is a meeting held at the end of a sprint to analyse the team's ways of working over the past sprint, identify how to become more effective and then adjust accordingly. This is a ritual which belongs to the team, and criticism is directed at facts/output and not at people. A retrospective creates an environment where the team feels safe and comfortable, which allows them to talk freely about their thoughts and let go of their frustration.

As an agile team, our team was pretty mature. Everyone knew what to do and what not to do, and they came up with solutions for most of the impediments faced during the sprint. Even then, there were a few issues which were still not resolved. Often retrospective meetings end up being a place for the team to let out their frustration rather than focusing on identifying what worked well and what could have been done better. Also, action items coming out of the discussion may have already been tried during the sprint and not worked as expected.

Last week, I had a chance to run the retrospective for the team. I wanted to focus more on the proactive actions taken by the team while dealing with impediments, so I decided to run a different retro in which I tried to keep emotions out and focus more on facts.

### Goals of this retrospective

I hope the exercise below will help to achieve the following:

• Take emotions out of the discussion and focus on facts
• Review the actions taken by the team during the sprint and assess their effectiveness
• Identify improvements which were not tried during the sprint

### How to do it

###### Sprint Goals
• The first step is to identify the sprint goals and write them down on a board for everyone to see. This reminds the team of what their everyday work is meant to achieve.
###### Negatives
• The next step is to identify the risks, issues and blockers which prevented the team from achieving them. It can be anything which the team found to be an impediment.
• Each team member has to write down a unique impediment on a card. Hence, for a team of 10 members, you will have 10 unique impediments.
• The team then rates the impediments on a scale of 1 - 10, where 1 is minor and 10 is a complete blocker.
• Once that is done, the cards are exchanged between team members.
###### Positives
• The next person then has to think about all the good things that happened around the impediment on their card (written by someone else). It can be anything the team tried in order to overcome the blocker, any innovative ideas tried out, use of the extra time for learning and development etc. Team members are free to discuss this with others to find all the positives for that issue.
• Once it is written, the team member rates it on a scale of 1 - 10. In practice, the rating for the positives will be less than the rating for the negatives; otherwise, it would not have been an impediment to start with.
###### Actions
• Now the facilitator collects back all the cards and looks for the three cards with the maximum difference between negative and positive ratings.
• By the end of this, the team will have the three most pressing impediments which they could not overcome in the sprint. This takes into account all the proactive actions taken by the team while dealing with those specific impediments, and it is based on facts and collective feedback.
• Now it is time to discuss and come up with action items. Obviously, action items coming out of the discussion should be new and not something already tried.

Today I had a discussion with a project manager about stakeholder expectations of the value delivered from regression test automation and how to manage those expectations. The discussion soon moved on to the challenges in automating manual test cases and the candidates for automation testing.

### Expectation Vs Reality

Management stakeholders always visualize test automation as a silver bullet for fixing all pain points. They envision automated tests to be quicker, cheaper and effective in identifying all defects. Automated test cases are expected to run at the click of a button and with a 100% pass rate (except for valid bugs). Needless to say, the expectation is to have complete test coverage in the automation scripts. Thinking is always geared towards reducing manual testers based on automation progress rather than focusing on improved quality of the final product, faster time to market etc.

The ground reality is different from the above expectation. Automated test cases are only as good as you script them to be. Automated checks will alert the tester about problems that the checks have been programmed to detect and ignore all other problems outside of them. Cost, speed and ROI will depend on the tool used and the complexity of the tests implemented. Having automated tests is not a replacement for doing exploratory testing manually. We need to cater for manual exploratory testing, since automated scripts can only verify already known check points (for which the coding is done) and miss check points which are not automated. In other words, test automation frees up the tester’s time to focus more on exploratory testing, which adds value.

The challenge in this specific case is to automate E2E manual regression test cases which do not yet exist. The testers are supposed to identify the regression test cases first by going through the existing application and then automate them. The expectation is that testers will identify all possible error scenarios and incorporate the corresponding checks in the automated scripts. This is going to be time consuming and expensive, and it depends on the domain knowledge of the person who creates the automation test cases. There is a chance that existing bugs will be treated as expected behaviour. Moreover, end-to-end test cases at the UI level are generally time consuming to develop, slow to execute and heavily dependent on the UI, which makes them brittle.

### Testing Pyramid

The solution to improving the quality of a product is to follow the testing pyramid and try to automate more at the lower levels instead of focussing on the E2E level. This also has to be done while the product/software is being developed.

Below is a modified version of the testing pyramid.

As you can see above, more emphasis is given to having automated tests at the unit test level, followed by the component level, the integration test level, and finally the E2E level through the UI. It is relatively cheap to implement automated tests at the base of the pyramid, and it gets more expensive as we go up. Similarly, unit tests are faster to run, can isolate issues immediately and are more stable. These characteristics change adversely as we go up the test pyramid.

Obviously, it is not feasible to achieve this for an already existing system without a significant investment in people, time and tools, which will impact ROI. Depending on the situation, there is no right or wrong way to do test automation; having something is always better than nothing. Hence, when there is a need to automate regression test cases, it normally starts from the top. Significant investment is needed upfront to identify all critical regression test cases and the corresponding validations that should be performed by automated tests. It is not feasible to automate all tests or to have 100% coverage. The success rate of automation script runs will depend on various factors like test data, environment stability etc. Everyone should understand that we are automating check point verifications, and hence the scripts trigger alerts only for the checks they are programmed to do. E2E regression through the UI should be only a minimal subset of what is covered through the other levels. We should be ready to invest in maintaining the automation assets over a period of time.

In this case, stakeholder expectations need to be carefully managed. It is important to set the right expectations about the benefits offered by test automation for a successful project delivery. Automation testing can deliver benefits over a long period of time, provided proper planning was done upfront to automate at different levels of testing. Instead of considering it a solution for all pain points, we need to clearly articulate and set expectations about its limitations and long-term benefits.