Cypress is not just a UI automation tool. It can be used for testing APIs as well. Even though we have other tools like Postman, Newman, REST Assured, SoapUI etc. for testing APIs, I believe Cypress is a good alternative for API testing. It lets us use the same tool for both UI and API test automation.

Demo

Let us look at a sample API test case. In the example below, we trigger an API call to http://services.groupkt.com/country/get/iso2code/AU and validate the following in the response.

  • The status code of the response is 200.
  • The content-type header includes ‘application/json’.
  • The body contains “Country found matching code [AU].”

We can then extend this to do any further checks if needed.

Create a new file inside the integration folder of Cypress and copy the code below into it.

describe('API Testing with Cypress', () => {
    const url = 'http://services.groupkt.com/country/get/iso2code/AU';

    it('Validate the header', () => {
        // content-type header should indicate a JSON response
        cy.request(url)
          .its('headers')
          .its('content-type')
          .should('include', 'application/json');
    });

    it('Validate the status', () => {
        // the request should succeed with HTTP 200
        cy.request(url)
          .its('status')
          .should('equal', 200);
    });

    it('Validate the body', () => {
        // the messages array in the body should confirm the country lookup
        cy.request(url)
          .its('body')
          .its('RestResponse.messages')
          .should('include', 'Country found matching code [AU].');
    });
});
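The three tests above repeat the same request once per check. If you prefer a single round trip, the assertions can also be combined on one cy.request() call; below is a minimal sketch of that variant, using the same URL and the same response fields already validated above.

describe('API Testing with Cypress - combined checks', () => {
    it('Validates status, header and body in one request', () => {
        cy.request('http://services.groupkt.com/country/get/iso2code/AU')
          .then((response) => {
              // status code of the response
              expect(response.status).to.equal(200);
              // content-type header of the response
              expect(response.headers['content-type']).to.include('application/json');
              // message inside the response body
              expect(response.body.RestResponse.messages)
                  .to.include('Country found matching code [AU].');
          });
    });
});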

Open Cypress by running node_modules/.bin/cypress open from the project root folder. This will launch the Cypress test runner.

Run the newly created test.

[APITestingWithCypress]

The results of the test execution will look like the screenshot below.

[APITestingWithCypress]

Expand each of them, right click on the asserts and inspect the element. This will open the Chrome developer tools. Select the Console tab, which lists the details of the requests made, the responses received and the assertions performed. This helps when writing additional assertions, investigating failures etc.

[APITestingWithCypress]

When we talk about UI automation for browsers, the default tool which comes to mind is Selenium. There are different wrappers around Selenium like Protractor, Nightwatch, Selenium WebDriver etc. All of them are built on top of Selenium and share its advantages and disadvantages. All of them control the browser by executing remote commands over the network. We will most probably need additional libraries, frameworks etc. to make full use of Selenium.

Cypress.io is an open source UI automation tool which can be used for UI testing. Unlike the others, it is not built on top of Selenium. Instead, it is a completely new architecture that runs in the same run loop as the browser. Since it runs inside the browser, it has access to almost everything happening inside and outside the browser. It is a complete set of tools that you will need to create and run E2E UI automation test cases. The team who developed Cypress made a few design trade-offs which cause some disadvantages for Cypress. There is no single right tool for automation; it will depend on multiple factors.

Installing Cypress

We can install Cypress using npm. Run the command below inside the project folder to install Cypress and all its dependencies.

npm install cypress --save-dev

Another way of using Cypress is to download the zip file from here. Just extract the file and start using it.

Opening Cypress

Cypress can be opened by running the node_modules/.bin/cypress open command in a terminal from the project root folder.

If you have downloaded the zip file, you can open Cypress by double clicking on the Cypress executable.

Write your first test

Cypress already comes with a predefined example of the KitchenSink application, which will help you to identify the various commands that can be used. It can be found under cypress\integration\example_spec.js.

Let us look at how to write a new test.

Create a new test script file called demotest.js under {project_location}\cypress\integration. Open up the file and copy the code below into it.

This code will open the browser, load Google, search for cypress.io and open the first link.

describe('My first test for cypress', function() {
    it('Visits google home page ', function() {
      cy.visit('https://google.com');
    })
    it('should load the Google Homepage', () => {
        cy.title().should('eql', 'Google');
    })
    it('should search and open cypress home page', () => {
        cy.get('#lst-ib').type('cypress.io');
        cy.get('[value="I\'m Feeling Lucky"]').focus().click();
    })
   
   
  })

Note: If a cross-origin policy error is shown, follow the workarounds mentioned.
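One workaround commonly suggested for this example (assuming the error comes from the navigation to a different superdomain after the search) is to disable Chrome web security in cypress.json. Treat this as a sketch only, since it relaxes browser security for the whole test run:

{
  "chromeWebSecurity": false
}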

How does Cypress.io compare with Selenium?

As mentioned earlier, there is no right or wrong tool for automation. It all depends on suitability for the task at hand. Let us compare a few features where Cypress.io and Selenium differ.

  • Cross browser support - At this point Selenium has more cross-browser support than Cypress. Cypress supports only Chrome variants. You can read about them here.
  • Debugging capability - This is high in Cypress. I found that error messages are more detailed and in fact provide some hints about how to fix the problem. You also have full access to the Chrome dev tools.
  • Keypress - As of now, Cypress doesn't support pressing the Tab key. You can read about it here.
  • Since Cypress is built on Node.js, we can chain commands together.
  • Cypress.io has built-in support for test frameworks and assertion libraries like Mocha, Chai etc.
  • Cypress.io test cases can be written in JavaScript.
  • Cypress.io handles wait times better than Selenium (see the sketch after this list).
  • Cypress.io has hot reloading of test cases. When we make changes to test cases and save them, the tests rerun by themselves. This is very effective at reducing the time spent on building and rerunning Selenium-based test cases.
  • Cypress.io has built-in time travel and screenshots, which help us go back to failure points and debug. It also captures the before and after state of all actions.
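As a small illustration of the wait-time handling mentioned above, Cypress retries a query until its assertion passes or the timeout expires, so no explicit sleeps are needed. The selector and timeout below are made-up values for the sake of the example:

// Cypress keeps retrying the element lookup and the assertions for up to 10 seconds.
cy.get('#search-results', { timeout: 10000 })
  .should('be.visible')
  .and('contain', 'cypress.io');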

I will keep adding to this when I play more with cypress.

In a previous blog post, we saw how to use the BDD format for writing test cases in Postman. The most important part of writing tests in Postman is understanding the various features available, so let us explore the options. The examples specified in the documentation have a lot of information about how to set up Postman BDD, use Chai HTTP assertions, create custom assertions and use before and after hooks. Please import them into Postman and try them yourself to get familiar with Postman BDD. Below are only a few examples from them.

Postman BDD makes use of the Chai Assertion Library and Chai HTTP. We have access to both libraries and to the Postman scripting environment for writing test cases. Chai has two assertion styles.

  • Expect/should for BDD
  • Assert for TDD

Both styles support chainable language to construct assertions, and we can use either of them to write Postman test assertions. If you need details of all the chainable constructs, please refer to their documentation. The major ones we may use in Postman tests are listed below (a short example follows the list).

  • Chains

      to
      be
      been
      is
      that
      which
      and
      has
      have
      with
      at
      of
      same
      but
      does
    
  • Not - Negates all conditions

  • any
  • all
  • include
  • ok
  • true
  • false
  • null
  • undefined
  • exist
  • empty
  • match(re[, msg])
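As a quick illustration of the two styles and the chains above, the same checks on a hypothetical parsed response could be written either way. The result object and its properties here are made up for the example, and the assert interface is assumed to be exposed as in standard Chai:

// hypothetical parsed body used only for illustration
var country = response.body.RestResponse.result;

// BDD style: expect / should with chainable language
expect(country).to.be.an('object').that.has.property('name');
country.name.should.not.be.empty;

// TDD style: assert
assert.equal(response.status, 200);
assert.property(country, 'alpha2_code');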

The Chai HTTP module provides various assertions. Read through its documentation here for details. Below are the main commands at our disposal for validation.

  • .status(code)
  • .header (key[, value])
  • .headers
  • .ip
  • .json / .text / .html
  • .redirect
  • .param
  • .cookie

Postman BDD provides a response object on which we do most of the assertions. It holds information such as response.text, response.body, response.status, response.ok and response.error. Postman BDD automatically parses JSON and XML responses, so there is no need to call JSON.parse() or xml2json(); response.text will have the unparsed content. It also has automatic error handling, which allows other tests to continue even if something fails.

Examples of various assertions on the response object are below.

//Verifying Header information
expect(response).to.have.status(500);
expect(response).to.have.header('x-api-key');
expect(response).to.have.header('content-type', 'text/plain');
expect(request).to.have.header('content-type', /^text/);
expect(response).to.have.headers;
expect('127.0.0.1').to.be.an.ip;

//Verifying Response body
expect(response).to.be.json;
expect(response).to.be.html;
expect(response).to.be.text;

response.should.have.status(200);
response.body.should.not.be.empty;
response.ok.should.be.true;            // success with code 2XX
response.error.should.be.true;         // failures

//Verifying request
expect(req).to.have.param('orderby', 'date');
expect(req).to.not.have.param('orderby');
expect(req).to.have.cookie('session_id', '1234');
expect(req).to.not.have.cookie('PHPSESSID');


If we use the above assertions in proper BDD format, it will look like the example below.

eval(globals.postmanBDD);
describe('Example for Blog using SHOULD', function(){
   it("Tests using SHOULD", function() {
      response.should.have.status(200); 
      response.should.not.be.empty;
      response.should.have.header('content-type', 'application/json; charset=utf-8');
      response.type.should.equal('application/json');
      
      var user = response.body.results[0];
      user.name.should.be.an('object');
      user.name.should.have.property('first').and.not.empty;
      //user.name.should.have.property('first','david');
      user.should.have.property('gender','male');
   
   
   }) 
})
describe('Example for Blog using Expect', function(){
   it("Tests using EXPECT", function() {
      expect(response).to.have.status(200);
      expect(response).to.not.be.empty;
      expect(response).to.be.json;
      expect(response).to.have.header('content-type', 'application/json; charset=utf-8');
   }) 
})

it('should contain the un-parsed JSON text', () => {
    response.text.should.be.a('string').with.length.above(50);
    response.text.should.contain('"results":[');
});

Postman

In a previous blog post, we discussed how to use Postman and how to run collections using Newman and a data file. If you haven't read that, please have a read through it first.

In the previous examples, we discussed writing tests/assertions in Postman. We followed normal JavaScript syntax for writing test cases, including asserting various aspects of the response (like content, status code etc.). Even though this is a straightforward way of writing tests, many people would like to use an existing JavaScript test library like Mocha. They can use the Postman BDD library.

Let us take a deep dive into how to set up Postman BDD.

Note: It is assumed that you already have Postman and Newman installed on your machine along with their dependencies.

Installing Postman BDD

Installation is done by triggering a GET request and setting the response as a global environment variable.

  • Create a GET request to http://bigstickcarpet.com/postman-bdd/dist/postman-bdd.js
  • Set a global environment variable by using the command below in the Tests tab: postman.setGlobalVariable('postmanBDD', responseBody);

[PostManRequest]

Once we trigger the above GET request, Postman BDD is available for use. We can make use of the Postman BDD features with the command eval(globals.postmanBDD);

Writing Tests

The Postman BDD library gives us the flexibility to write tests and assertions using fluent asserts, combining the best features of Chai and Mocha. In order to demonstrate this, I am using the sample tutorial bundled with the Postman client.

Open up the sample request in the Postman Tutorial folder under collections. It will already have some tests predefined in the Tests tab. Remove them and add the test below.

eval(globals.postmanBDD)
//eval(postman.getGlobalVariable('postmanBDD'));
var jsonData = JSON.parse(responseBody);
describe('Testing Sample Request in Postman Tutorial', function () {
  it('CASE 1: Should respond with statusCode = 200', function () {
      response.should.have.status(200);
  });
  it('CASE 2: Should response time less than 500 ms', function () {
      pm.response.responseTime.should.be.below(500);
  });
  it('CASE 3: User ID should be 1', function () {
      jsonData.userId.should.equal(1);
  });

});

Note: You can find more details about the various types of asserts at http://www.chaijs.com/api/bdd/

Once this is done, trigger the request.

[PostManRequest]

Gulp is a toolkit for automating painful or time-consuming tasks in your development workflow, so you can stop messing around and build something. Gulp can be used to create a simple task to run automated test cases.

First, we will create a package.json file for this project. This can be done with the command below from the project folder. It will prompt you to enter the information required for creating the package.json file.

npm init

Once this is done, install gulp with the command below. This will add gulp as a dev dependency.

npm install --save-dev gulp

In order to run the acceptance test cases, we will need to install the NUnit/xUnit test runners. This can be done with one of the commands below from the root folder.

npm install --save-dev gulp-nunit-runner
        OR
npm install --save-dev gulp-xunit-runner

Detailed usage of the above test runners is available here.

Once the above are installed, we need to create gulpfile.js inside the root folder. This file will hold the definitions of the various gulp tasks.

A sample usage of the test runner is below. Insert this code into gulpfile.js.


var gulp = require('gulp'),
    nunit = require('gulp-nunit-runner');
 
gulp.task('unit-test', function () {
    return gulp.src(['**/*.Test.dll'], {read: false})
        .pipe(nunit({
            executable: 'C:/nunit/bin/nunit-console.exe',
            options : {
              where : 'cat == test'
            }
        }));
});
  • {read: false} means gulp will read only the file names and not the entire file contents.
  • executable is the path to the NUnit console runner, which should be available at that location.
  • gulp.src is the path to the acceptance test solution dll. Since we use a wildcard, we may have to modify this path to reflect the exact path of the dll (something like ./**/Debug/Project.acceptancetest.dll).

Once we have the above in gulpfile.js, it can be run with the command below.

gulp unit-test

The output of the above command will be something like:

C:/nunit/bin/nunit-console.exe "C:\full\path\to\Database.Test.dll" "C:\full\path\to\Services.Test.dll"

Note: If it complains about a missing assembly, the path to the acceptance test solution is incorrect. Retry after fixing the path.

The gulp NUnit runner provides a lot of options to configure the test run, like selecting test cases based on category, creating output files etc. Detailed options can be found here.

Below is an example with a few options.


var gulp = require('gulp'),
    nunit = require('gulp-nunit-runner');
 
gulp.task('unit-test', function () {
    return gulp.src(['**/*.Test.dll'], {read: false})
        .pipe(nunit({
            executable: 'C:/nunit/bin/nunit-console.exe',
            options : {
              where : 'cat == test',
              work : 'TestResultsFolder',
              result : 'TestResults.xml',
              config : 'Debug'
            }
        }));
});
  • where - selects the category which needs to be run
  • work - creates a folder with the specified path/name for output files
  • result - creates the test results in XML
  • config - selects the build configuration which needs to be run

If we run gulp unit-test now, it will execute only the test cases having the category test. It will create a folder named TestResultsFolder containing an XML report of the test run. The folder will be created in the root, where we have gulpfile.js.

TeamCity is a Java-based build management and continuous integration server from JetBrains. Very often, we have to extract various metrics from TeamCity for tracking and trend analysis. TeamCity provides a versatile API for extracting these metrics, which can then be manipulated or interpreted as we need.

Below are basic API calls which can be used for extracting metrics. Please note that the TeamCity API is powerful enough to do much more than data extraction; however, for this blog post, I am focussing on the metrics extraction part alone. All of these are GET requests to the TeamCity API with valid user credentials (use any id/password which can access TeamCity).

  1. Get the list of projects - http://teamcityURL:9999/app/rest/projects
  2. Get the details of a project - http://teamcityURL:9999/app/rest/projects/(projectLocator) The project locator can be either “id:projectID” or “name:projectName”
  3. Get the list of build configurations - http://teamcityURL:9999/app/rest/buildTypes
  4. Get the list of build configurations for a project - http://teamcityURL:9999/app/rest/projects/(projectLocator)/buildTypes
  5. Get the list of builds - http://teamcityURL:9999/app/rest/builds/?locator=(buildLocator)
  6. Get the details of a specific build - http://teamcityURL:9999/app/rest/builds/(buildLocator) The build locator can be “id:BuildId” or “number:buildNumber”, or a combination like “id:BuildId,number:buildNumber,dimension3:dimensionvalue”. We can use various values for these dimensions; details can be found in the TeamCity documentation.
  7. Get the list of tests in a build - http://teamcityURL:9999/app/rest/testOccurrences?locator=build:(buildLocator)
  8. Get an individual test history - http://teamcityURL:9999/app/rest/testOccurrences?locator=test:(testLocator)

Recently I created a Node.js program to extract the metrics below by chaining some of the above API calls (a minimal sketch follows the list).

  • Number of builds between any two given dates and their status
  • Details of the number of test cases and their status, pass percentage, fail percentage etc. for each build
  • The same details for the entire period
  • Trend of test progress, build failures etc. between those dates
  • An output JSON with cumulative counts of passed/failed/ignored builds, passed/failed/ignored test cases, percentage of successful builds, frequency of pull requests and their success rates etc.
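A minimal sketch of how such a program can call the API from Node.js is shown below. The host, port, credentials and locator are placeholders, and only the request and parsing part is shown, not the full metric aggregation.

var http = require('http');

// Placeholder values - replace with your TeamCity host, credentials and build locator.
var options = {
    host: 'teamcityURL',
    port: 9999,
    path: '/app/rest/builds/?locator=sinceDate:20170801T000000%2B0000',
    auth: 'user:password',                      // any id/password which can access TeamCity
    headers: { 'Accept': 'application/json' }   // ask for JSON instead of the default XML
};

http.get(options, function (res) {
    var body = '';
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () {
        var builds = JSON.parse(body).build || [];
        // Count build statuses; chaining further calls (testOccurrences etc.) follows the same pattern.
        var failed = builds.filter(function (b) { return b.status === 'FAILURE'; }).length;
        console.log('Total builds:', builds.length, 'Failed:', failed);
    });
});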

What is Accessibility testing ?

It is a kind of testing performed to ensure the application under test is usable by people with disabilities. One of the most common accessibility tests for web applications is to ensure it is easily usable by people with vision impairment. They normally use screen readers to read the screen and use the keyboard to navigate.

The Web Content Accessibility Guidelines (WCAG) list guidelines and rules for creating accessible websites. There are various browser extensions and developer tools available for scanning web pages to find obvious accessibility issues. aXe is one of the widely used extensions; details of aXe can be found here. Once the browser extension is installed, you can analyze any web page to find accessibility issues. There is also a JavaScript API for aXe core.
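For reference, a minimal sketch of the aXe core JavaScript API is below, assuming axe-core has already been loaded into the page (for example via a script tag or by injecting it from a test framework):

// Run an accessibility scan of the whole document with the default rule set.
axe.run(document, {}, function (err, results) {
    if (err) throw err;
    // Each violation carries an id, impact, description and the affected nodes.
    results.violations.forEach(function (violation) {
        console.log(violation.impact, violation.id, violation.description);
    });
});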

I recently came across axe-selenium-csharp, which is a .NET wrapper around aXe. It is relatively easy to set up and use. Below are the steps.

  1. Install the Globant.Selenium.Axe NuGet package for the solution. This will add a reference to the dll.
  2. Import the namespace with using Globant.Selenium.Axe;
  3. Call the Analyze method to run the accessibility check on the current page.
using Globant.Selenium.Axe;

public void PerformAccessibilityAudit(IWebDriver driver)
{
    // Run the aXe accessibility scan against the page currently loaded in the driver.
    AxeResult results = driver.Analyze();

    // Log each violation before failing the test, so the details end up in the log file.
    foreach (var violation in results.Violations)
    {
        log.Info(violation.Impact.ToString());
        log.Info(violation.Description.ToString());
        log.Info(violation.Id.ToString());
    }

    Assert.True(results.Violations.Length == 0, "There are accessibility violations. Please check the log file");
}

Automated accessibility testing is NOT a completely foolproof solution. We will still need someone to scan the page with screen reader software later. But this helps shift accessibility testing to the left, allows more frequent runs and reduces the need for manual regression.

In previous blog posts, I explained how to create a JSON response in mountebank. You can read about that here and here. Recently, I had to test what happens to an application if a downstream API response is delayed for some time. Let us have a look at how we can use mountebank to simulate this scenario.

Mountebank supports adding latency to a response by adding a behaviour; you can read about that here. Let us implement the wait behaviour in one of the previous examples. This is a slight modification of the files used as part of the examples mentioned here and here. You can clone my GitHub repo and look at “ExamplesForWaitBehaviour” for the files.

The only change we need is to add a behaviour to the response. This is done in the “CustomerFound.json” file: after injecting the file, we add a behaviour that waits 5000 milliseconds.


"responses": [
{
"inject": "<%-stringify(filename, 'ResponseInjection\\GetCustomerFound.js') %>",
"_behaviors": {
    "wait": 5000
  }
}
],
"predicates": [
{
"matches": {
"method" : "GET",
"path" : "/Blog.Api/[0-9]+/CustomerView"
}
}
]

Now run Mountebank. If you are using the GitHub repo, you can do this by running the RunMounteBankStubsWithExampleForWait.bat file. Otherwise, run the command below inside the directory where mountebank is available; if needed, modify the path to Imposter.ejs as required.

mb --configfile ExamplesForWaitBehaviour/Imposter.ejs --allowInjection

When we trigger a request via Postman, we get a response after the specified delay plus the normal time taken to get a response. Have a look at the response time in the screenshot below: it is more than 5000 ms.

[PostManRequest]
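Instead of only checking the screenshot, the delay can also be asserted in the Tests tab of the request. A small sketch using the pm API (the same responseTime property used earlier) is below; the 5000 ms threshold matches the wait behaviour configured above.

pm.test('response is delayed by the configured wait behaviour', function () {
    // The response time must be at least the 5000 ms added by mountebank's wait behaviour.
    pm.expect(pm.response.responseTime).to.be.above(5000);
});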

I was not active on this blog over the past month due to multiple reasons, both personal and professional. Following are some highlights from that month.

ISTQB Test Automation Engineer

As mentioned in a previous post, I had a chance to attend the ANZTB SIGiST conference in August 2017. There were some good talks about the various certifications offered by ISTQB/ANZTB, and ISTQB Test Automation Engineer was one of them. I decided to give it a try, spent some time over the past month preparing based on the syllabus, and gave it a shot. Fortunately, I passed the exam with good marks.

Traffic to blog

I had an interesting observation when I looked into Google Analytics for the blog. The traffic to this blog has grown over ten times compared to previous months. I analyzed the data and found that my blog and my GitHub repo for mountebank examples are mentioned on the mountebank website, which brought in more traffic. I hope others also benefit from my experience with mountebank and the tutorials on this blog. Hopefully, this will give me the motivation to write more.

An agile retrospective is a meeting held at the end of a sprint where the team analyses its ways of working over the past sprint, identifies how to become more effective, and adjusts accordingly. It is a ritual that belongs to the team, and criticism is directed at facts and outputs, not at people. A retrospective creates an environment where the team feels safe and comfortable, which allows them to talk freely about their thoughts and let go of their frustrations.

As an agile team, our team was pretty mature. Everyone knew what to do and what not to do, and they came up with solutions for most of the impediments faced during the sprint. Even then, there were a few issues that remained unresolved. Often retrospective meetings end up being a place for the team to vent their frustration rather than focusing on identifying what worked well and what could have been done better. Also, action items coming out of the discussion may already have been tried during the sprint without working as expected.

Last week, I had a chance to run the retrospective for the team. I wanted to focus more on the proactive actions taken by the team while dealing with impediments, so I decided to run a different retro in which I tried to keep emotions out and focus on facts.

Goals of this retrospective

Hopefully, the exercise below will help to:

  • Take emotions out of the discussion and focus on facts
  • Review the actions taken by the team during the sprint and assess their effectiveness
  • Identify improvements which were not tried out during the sprint

How to do it

Sprint Goals
  • The first step is to identify the sprint goals and write them down on the board for everyone to see. This reminds the team of the goals their everyday work is meant to achieve.
Negatives
  • The next step is to identify the risks, issues and blockers which prevented the team from achieving them. It can be anything the team found to be an impediment.
  • Each team member writes down a unique impediment on a card, so for a team of 10 members you will have 10 unique impediments.
  • The team then rates the impediments on a scale of 1 to 10, where 1 is a minor issue and 10 is a complete blocker.
  • Once that is done, the cards are exchanged among team members.
Positives
  • Each person then thinks about the good things that happened around the impediment on the card they received (written by someone else). It can be anything the team tried in order to overcome the blocker, any innovative ideas tried out, use of the extra time for learning and development etc. Team members are free to discuss this with others to find all the positives around that issue.
  • Once it is written, the team member rates it on a scale of 1 to 10. In practice, the rating for the positives will be lower than the rating for the negatives; otherwise, it would not have been an impediment to start with.
Actions
  • Now the facilitator collects all the cards and looks for the three cards with the largest difference between the negative and positive ratings.
  • By the end of this, the team has three pressing impediments which it could not overcome in the sprint. The selection takes into account all the proactive actions taken by the team while dealing with those specific impediments, and it is based on facts and collective feedback.
  • Now it is time to discuss and come up with action items. Obviously, the action items coming out of the discussion should be new and not something already tried.