In the previous blog post, we saw how to use the BDD format for writing test cases in Postman. The most important part of writing tests in Postman is understanding the various features available, so let us explore the options. The examples in the Postman BDD documentation contain a lot of information about how to set up Postman BDD, use Chai-HTTP assertions, create custom assertions, and use before and after hooks. Please import them into Postman and try them yourself to familiarise yourself with Postman BDD. Below are only a few examples from them.

Postman BDD makes use of the Chai Assertion Library and Chai-HTTP. We have access to both libraries as well as the Postman scripting environment for writing test cases. Chai has two assertion styles:

  • Expect/should for BDD
  • Assert for TDD

Both styles support a chainable language to construct assertions, and either can be used to write Postman test assertions (a short example comparing the styles follows the list below). If you need details of all the chainable constructs, please refer to the Chai documentation. The major ones we are likely to use in Postman tests are:

  • Chains

      to
      be
      been
      is
      that
      which
      and
      has
      have
      with
      at
      of
      same
      but
      does
    
  • not - negates all assertions that follow it in the chain
  • any - asserts that at least one of the given keys is present (used with keys)
  • all - asserts that all of the given keys are present (used with keys)
  • include
  • ok
  • true
  • false
  • null
  • undefined
  • exist
  • empty
  • match(re[, msg])
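
For illustration, here is a minimal sketch of the same checks written in each style; it assumes a JSON response and that Chai's assert interface is exposed alongside expect and should.

eval(globals.postmanBDD);

// BDD style with expect
expect(response).to.have.status(200);
expect(response.body).to.be.an('object');

// BDD style with should
response.should.have.status(200);
response.body.should.be.an('object');

// TDD style with assert
assert.isObject(response.body, 'body should be an object');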

The Chai-HTTP module provides various assertions. Read through its documentation here for details. Below are the main assertions at our disposal for validation.

  • .status(code)
  • .header(key[, value])
  • .headers
  • .ip
  • .json / .text / .html
  • .redirect
  • .param
  • .cookie

Postman BDD provides a response object on which we do most of our assertions. It holds all the relevant information, such as response.text, response.body, response.status, response.ok and response.error. Postman BDD automatically parses JSON and XML responses, so there is no need to call JSON.parse() or xml2json(); response.text will still contain the unparsed content. It also has automatic error handling, which allows the remaining tests to continue even if one assertion fails.
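
As a minimal sketch of that error handling (the expected status values here are only illustrative), a failing assertion inside one it() block does not prevent the next block from running and being reported:

eval(globals.postmanBDD);

describe('Automatic error handling', function () {
   it('this assertion may fail', function () {
      response.should.have.status(201);   // fails if the API returns 200
   });
   it('this test still runs and is reported separately', function () {
      response.should.have.header('content-type');
   });
});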

Examples of various assertions on the response object are shown below.

// Verifying status and headers
expect(response).to.have.status(500);
expect(response).to.have.header('x-api-key');
expect(response).to.have.header('content-type', 'text/plain');
expect(request).to.have.header('content-type', /^text/);
expect(response).to.have.headers;
expect('127.0.0.1').to.be.an.ip;

// Verifying response body
expect(response).to.be.json;
expect(response).to.be.html;
expect(response).to.be.text;

response.should.have.status(200); 
response.body.should.not.be.empty;
response.ok.should.be.true;            // success with code 2XX
response.error.should.be.true;         // failures

// Verifying request
expect(request).to.have.param('orderby', 'date');
expect(request).to.not.have.param('orderby');
expect(request).to.have.cookie('session_id', '1234');
expect(request).to.not.have.cookie('PHPSESSID');


If we use the above assertions in proper BDD format, it will look like the example below.

eval(globals.postmanBDD);
describe('Example for Blog using SHOULD', function(){
   it("Tests using SHOULD", function() {
      response.should.have.status(200); 
      response.should.not.be.empty;
      response.should.have.header('content-type', 'application/json; charset=utf-8');
      response.type.should.equal('application/json');
      
      var user = response.body.results[0];
      user.name.should.be.an('object');
      user.name.should.have.property('first').and.not.empty;
      //user.name.should.have.property('first','david');
      user.should.have.property('gender','male');
   })
})
describe('Example for Blog using Expect', function(){
   it("Tests using EXPECT", function() {
      expect(response).to.have.status(200);
      expect(response).to.not.be.empty;
      expect(response).to.be.json;
      expect(response).to.have.header('content-type', 'application/json; charset=utf-8');
   }) 
})

it('should contain the un-parsed JSON text', () => {
    response.text.should.be.a('string').with.length.above(50);
    response.text.should.contain('"results":[');
});

Postman

In a previous blog post, we discussed how to use Postman and how to run collections using Newman and a data file. If you haven’t read that, please have a read through it first.

In the previous examples, we discussed writing tests/assertions in Postman. We followed normal JavaScript syntax for writing test cases, including asserting various aspects of the response (like content, status code, etc.). Even though this is a straightforward way of writing tests, many people would prefer to use an existing JavaScript test library such as Mocha. They can do so with the Postman BDD library.

Let us take a deep dive into how to set up Postman BDD.

Note: It is assumed that you already have Postman and Newman installed on your machine along with their dependencies.

Installing Postman BDD

Installation is done by triggering a GET request and setting the response as a global variable.

  • Create a GET request to http://bigstickcarpet.com/postman-bdd/dist/postman-bdd.js
  • Set a global variable by using the below command in the Tests tab: postman.setGlobalVariable('postmanBDD', responseBody); (a sketch of the full Tests tab script follows)
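
For reference, the whole Tests tab of this install request could look like the sketch below. The extra status check is only an assumption to confirm the download succeeded; setGlobalVariable is the part that matters.

// Tests tab of the GET request that downloads postman-bdd.js
tests['Postman BDD script downloaded'] = responseCode.code === 200;
postman.setGlobalVariable('postmanBDD', responseBody);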

[Screenshot: Postman GET request used to install Postman BDD]

Once we trigger the above GET request, Postman BDD is available for use. We can load the Postman BDD features in any Tests tab with the command eval(globals.postmanBDD);

Writing Tests

The Postman BDD library gives us the flexibility to write tests and assertions using fluent asserts, bringing together the best features of Chai and Mocha. In order to demonstrate this, I am using the sample tutorial that ships with the Postman client.

Open up the sample request in the Postman Tutorial folder under Collections. It will already have some tests predefined in the Tests tab. Remove them and add the tests below.

eval(globals.postmanBDD)
//eval(postman.getGlobalVariable('postmanBDD'));
var jsonData = JSON.parse(responseBody);
describe('Testing Sample Request in Postman Tutorial', function () {
  it('CASE 1: Should respond with statusCode = 200', function () {
      response.should.have.status(200);
  });
  it('CASE 2: Should response time less than 500 ms', function () {
      pm.response.responseTime.should.be.below(500);
  });
  it('CASE 3: User ID should be 1', function () {
      jsonData.userId.should.equal(1);
  });

});

Note: You can find more details on the various types of asserts at http://www.chaijs.com/api/bdd/

Once that is done, trigger the request.

[Screenshot: Postman request run showing the test results]

Gulp is a toolkit for automating painful or time-consuming tasks in your development workflow, so you can stop messing around and build something. Gulp can be used to create a simple task that runs automated test cases.

Firstly, we will create a package.json file for the project. This can be done with the below command from the project folder. It will prompt you for the information required to create the package.json file.

npm init

Once this is done, install gulp. This can be done with the below command, which will add gulp as a dev dependency.

npm install --save-dev gulp

In order to run the acceptance test cases, we will need to install the NUnit/xUnit test runner plugin. This can be done with one of the below commands from the root folder.

npm install --save-dev gulp-nunit-runner
        OR
npm install --save-dev gulp-xunit-runner

Detailed usage of the above test runners is available here.

Once the above are installed, we need to create a gulpfile.js inside the root folder. This file will contain the definitions of the various gulp tasks.

A sample usage of the test runner is below. Insert this code into gulpfile.js.


var gulp = require('gulp'),
    nunit = require('gulp-nunit-runner');
 
gulp.task('unit-test', function () {
    return gulp.src(['**/*.Test.dll'], {read: false})
        .pipe(nunit({
            executable: 'C:/nunit/bin/nunit-console.exe',
            options : {
              where : 'cat == test'
            }
        }));
});
  • {read: false} means gulp will pass only the file names and not read the file contents.
  • executable is the path to the NUnit console runner, which must be available on the machine.
  • gulp.src is the path to the acceptance test solution DLL. Since we use a wildcard, we may have to modify this path to reflect the exact path of the DLL (something like ./**/Debug/Project.acceptancetest.dll).

Once we have the above in gulpfile.js, it can be run with the below command.

gulp unit-test

The output of the above command will be something like

C:/nunit/bin/nunit-console.exe "C:\full\path\to\Database.Test.dll" "C:\full\path\to\Services.Test.dll"

Note: If it complains about a missing assembly, it means the path to the acceptance test solution is incorrect. Retry after fixing the path.

gulp-nunit-runner provides a lot of options to configure the test run, like selecting test cases based on category, creating output files, etc. Detailed options can be found here.

Below is an example with a few options.


var gulp = require('gulp'),
    nunit = require('gulp-nunit-runner');
 
gulp.task('unit-test', function () {
    return gulp.src(['**/*.Test.dll'], {read: false})
        .pipe(nunit({
            executable: 'C:/nunit/bin/nunit-console.exe',
            options : {
              where : 'cat == test',
              work : 'TestResultsFolder',
              result : 'TestResults.xml',
              config : 'Debug'
            }
        }));
});
  • where - selects the category of tests to run
  • work - creates a folder with the specified path/name for output files
  • result - writes the test results to the specified XML file
  • config - selects the build configuration to run

If we run gulp unit-test now, it will execute only the test cases having the category test. It will create a folder named TestResultsFolder with an XML report of the test run inside it. The folder will be created in the root, where we have gulpfile.js.

TeamCity is a Java-based build management and continuous integration server from JetBrains. Very often, we will have to extract various metrics from TeamCity for tracking and trend analysis. TeamCity provides a versatile API for extracting these metrics, which can then be manipulated or interpreted as needed.

Below are basic API calls which can be used for extracting metrics. Please note that the TeamCity API is powerful enough to do much more than data extraction; however, for this blog post, I am focussing on the metrics extraction part alone. All of these are GET requests to the TeamCity API with valid user credentials (use any ID/password which can access TeamCity).

  1. Get list of projects - http://teamcityURL:9999/app/rest/projects
  2. Get details of a project - http://teamcityURL:9999/app/rest/projects/(projectLocator) Project locator can be either “id:projectID” or “name:projectName”
  3. Get list of build configurations - http://teamcityURL:9999/app/rest/buildTypes
  4. Get list of build configurations for a project - http://teamcityURL:9999/app/rest/projects/(projectLocator)/buildTypes
  5. Get list of builds - http://teamcityURL:9999/app/rest/builds/?locator=(buildLocator)
  6. Get details of a specific build - http://teamcityURL:9999/app/rest/builds/(buildLocator) Build locator can be “id:buildId” or “number:buildNumber”, or a combination like “id:buildId,number:buildNumber,dimension3:dimensionValue”. We can use various values for these dimensions; details can be found in the TeamCity documentation
  7. Get list of tests in a build - http://teamcityURL:9999/app/rest/testOccurrences?locator=build:(buildLocator)
  8. Get individual test history - http://teamcityURL:9999/app/rest/testOccurrences?locator=test:(testLocator)

Recently I created a Node.js program to extract the below metrics by chaining some of the above API calls (a minimal sketch of the approach follows the list).

  • Number of builds between any two given dates and their status
  • Details of the number of test cases and their status, pass percentage, fail percentage, etc. for each build
  • The same details for the entire period
  • Trend of test progress, build failures, etc. between those dates
  • An output JSON with cumulative counts of passed/failed/ignored builds, passed/failed/ignored test cases, percentage of successful builds, frequency of pull requests and their success rates, etc.
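
As a minimal sketch of that chaining, the snippet below lists builds for a period and then pulls the tests of each build. The server URL, credentials, date locator and the choice of Node's built-in http module are assumptions; the field names follow the TeamCity JSON responses as I used them, so adjust for your server version.

// Minimal sketch: BASE, the credentials and the date locator are placeholders.
// The Accept header asks TeamCity to return JSON instead of the default XML.
const http = require('http');

const BASE = 'http://teamcityURL:9999/app/rest';
const AUTH = 'Basic ' + Buffer.from('user:password').toString('base64');

function get(path) {
  return new Promise((resolve, reject) => {
    http.get(BASE + path, { headers: { Authorization: AUTH, Accept: 'application/json' } }, (res) => {
      let body = '';
      res.on('data', (chunk) => (body += chunk));
      res.on('end', () => resolve(JSON.parse(body)));
    }).on('error', reject);
  });
}

// Chain two of the calls above: list builds for a period, then pull the tests of each build.
async function collectMetrics() {
  const builds = await get('/builds/?locator=sinceDate:20170801T000000%2B0000');
  for (const build of builds.build || []) {
    const tests = await get('/testOccurrences?locator=build:(id:' + build.id + ')');
    console.log(build.number, build.status, 'tests:', tests.count);
  }
}

collectMetrics().catch(console.error);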

What is Accessibility testing ?

It is a kind of testing performed to ensure that the application under test is usable by people with disabilities. One of the most common accessibility checks for web applications is to ensure they are easily usable by people with vision impairment, who normally use screen readers to read the screen and a keyboard to navigate.

The Web Content Accessibility Guidelines (WCAG) lay down guidelines and rules for creating accessible websites. There are various browser extensions and developer tools available for scanning web pages to find obvious accessibility issues. aXe is one of the widely used extensions; details of aXe can be found here. Once the browser extension is installed, you can analyze any web page to find accessibility issues. There is also a JavaScript API for aXe core.

I recently came across axe-selenium-csharp, which is a .NET wrapper around aXe. It is relatively easy to set up and use. Below are the steps.

  1. Install the Globant.Selenium.Axe NuGet package for the solution. This will add a reference to the DLL.
  2. Import the namespace with using Globant.Selenium.Axe;
  3. Call the “Analyze” method to run an accessibility check on the current page, as in the example below.
using Globant.Selenium.Axe;

public void PerformAccessibilityAudit(IWebDriver _driver)
{
    // Run the aXe analysis on the current page
    AxeResult _results = _driver.Analyze();

    // Log every violation that was found
    foreach (var violation in _results.Violations)
    {
        log.Info(violation.Impact.ToString());
        log.Info(violation.Description.ToString());
        log.Info(violation.Id.ToString());
    }

    Assert.True(_results.Violations.Length == 0, "There are accessibility violations. Please check log file");
}

Automated accessibility testing is NOT a completely foolproof solution. We will still require someone to scan the page using screen reader software later. But this helps shift accessibility testing to the left, enables more frequent runs and reduces the need for manual regression.

In previous blog posts, I have explained how to create a JSON response in Mountebank. You can read about that here and here. Recently, I had to test what happens to the application if a downstream API response is delayed for some time. Let us have a look at how we can use Mountebank to simulate this scenario.

Mountebank supports adding latency to a response through a behaviour. You can read about that here. Let us try to implement the wait behaviour in one of the previous examples. This is a slight modification of the files used as part of the examples mentioned here and here. You can clone my GitHub repo and look at “ExamplesForWaitBehaviour” for the files.

The only change we need is to add a behaviour to the response. This is added in the “CustomerFound.json” file. Alongside the inject entry, we add a behaviour to wait 5000 milliseconds.


"responses": [
{
"inject": "<%-stringify(filename, 'ResponseInjection\\GetCustomerFound.js') %>",
"_behaviors": {
    "wait": 5000
  }
}
],
"predicates": [
{
"matches": {
"method" : "GET",
"path" : "/Blog.Api/[0-9]+/CustomerView"
}
}
]

Now run Mountebank. If you are using the GitHub repo, you can do this by running the RunMounteBankStubsWithExampleForWait.bat file. Otherwise, run the below command inside the directory where Mountebank is available, modifying the path to Imposter.ejs if required.

mb --configfile ExamplesForWaitBehaviour/Imposter.ejs --allowInjection

When we trigger a request via Postman, we get a response only after the specified delay plus the normal response time. Have a look at the response time in the below screenshot; it is more than 5000 ms.

[Screenshot: Postman response showing a response time above 5000 ms]
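
If we want this check to live in the collection itself, here is a minimal sketch in the Postman BDD style used earlier. It assumes the stub returns a 200 and uses the built-in responseTime variable, which holds the elapsed time in milliseconds.

eval(globals.postmanBDD);

describe('Wait behaviour', function () {
   it('should respond only after the configured delay', function () {
      responseTime.should.be.above(5000);
      response.should.have.status(200);
   });
});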

I was not active on this blog over the past month due to multiple reasons, both personal and professional. Following are some highlights from that period.

ISTQB Test Automation Engineer

As mentioned in a previous post, I had a chance to attend the ANZTB SIGiST conference in August 2017. There were some good talks about the various certifications offered by ISTQB/ANZTB, and ISTQB Test Automation Engineer was one among them. I decided to give it a try, spent some time over the past month preparing based on the syllabus, and gave it a shot. Fortunately, I passed the exam with good marks.

Traffic to blog

I made an interesting observation when I looked into Google Analytics for the blog: traffic has grown more than ten times compared to previous months. I analyzed the data and found that my blog and GitHub repo for Mountebank examples are mentioned on the Mountebank website, which resulted in more traffic to the blog. I hope others also benefit from my experience with Mountebank and the tutorials on this blog. Hopefully, this will give me the motivation to write more.

An agile retrospective is a meeting held at the end of a sprint where the team analyses its ways of working over the past sprint, identifies how to become more effective, and then adjusts accordingly. This is a ritual which belongs to the team, and criticism is directed at facts/output and not at people. A retrospective creates an environment where the team feels safe and comfortable, which allows them to talk freely about their thoughts and let go of their frustration.

Our team was pretty mature as an agile team. Everyone knows what to do and what not to do, and they come up with solutions for most of the impediments faced during the sprint. Even then, there will be a few issues which remain unresolved. Often, retrospective meetings end up being a place for the team to let out their frustration rather than focusing on identifying what worked well and what could have been done better. Also, the action items coming out of the discussion may have already been tried during the sprint without working as expected.

Last week, I had a chance to run the retrospective for the team. I wanted to focus more on the proactive actions taken by the team while dealing with impediments, so I decided to run a different retro in which I tried to keep emotions out and focus more on facts.

Goals of this retrospective

The exercise below aims to:

  • Take emotions out of the discussion and focus on facts
  • Review the actions taken by the team during the sprint and assess their effectiveness
  • Identify improvements which were not tried out during the sprint

How to do it

Sprint Goals
  • The first step is to identify the sprint goals and write them down on the board for everyone to see. This reminds the team of the everyday work they do to achieve the goals.
Negatives
  • The next step is to identify the risks, issues and blockers which prevented the team from achieving them. It can be anything the team found to be an impediment.
  • Each team member writes down a unique impediment on a card, so for a team of 10 members you will have 10 unique impediments.
  • The team then rates the impediments on a scale of 1 to 10, where 1 is a minor issue and 10 is a complete blocker.
  • Once that is done, the cards are exchanged among team members.
Positives
  • The next person then has to think about all the good things that happened around the impediment on their card (written by someone else). It can be anything the team tried in order to overcome the blocker, any innovative idea tried out, use of the extra time for learning and development, etc. Team members are free to discuss this with others to find all the positives for that issue.
  • Once it is written, the team member rates it on a scale of 1 to 10. In practice, the rating for the positives will be lower than the rating for the negatives; otherwise, it would not have been an impediment to start with.
Actions
  • Now the facilitator collects back all the cards and looks for the three cards with the largest difference between the negative and positive ratings.
  • By the end of this, the team has the three most pressing impediments which they could not overcome in the sprint. This takes into account all the proactive actions the team took while dealing with those specific impediments, and it is based on facts and collective feedback.
  • Now it is time to discuss and come up with action items. Obviously, the action items coming out of the discussion should be new and not something already tried earlier.

Today I had a discussion with a project manager about stakeholder expectations of the value delivered by regression test automation and how to manage those expectations. The discussion soon expanded to the challenges in automating manual test cases and the candidates for automation.

Expectation Vs Reality

Management stakeholders often visualize test automation as a silver bullet for fixing all pain points. They envision automated tests to be quicker, cheaper and effective in identifying all defects. Automated test cases are expected to run at the click of a button with a 100% pass rate (except for valid bugs). Needless to say, the expectation also includes complete test coverage from the automation scripts. Thinking is always geared towards reducing manual testers based on automation progress rather than focusing on the improved quality of the final product, faster time to market, etc.

The ground reality is different from this expectation. Automated test cases are only as good as you script them to be. Automated checks alert the tester about the problems they have been programmed to detect and ignore everything else. Cost, speed and ROI depend on the tool used and the complexity of the tests implemented. Having automated tests is not a replacement for manual exploratory testing; we still need to cater for it, since automated scripts can only verify already-known checkpoints (for which the coding is done) and miss checkpoints which are not automated. In other words, test automation frees up the tester’s time to focus more on exploratory testing, which adds value.

The challenge in this specific case is to automate E2E manual regression test cases which do not yet exist. The testers are supposed to identify the regression test cases first by going through the existing application and then automate them. The expectation is that testers will identify all possible error scenarios and incorporate the corresponding checks in the automated scripts. This is going to be time-consuming and expensive, and it depends on the domain knowledge of the person who creates the automated test cases. There is a chance that existing bugs will be treated as expected behaviour. Moreover, end-to-end test cases at the UI level are generally time-consuming to develop, slow to execute and heavily dependent on the UI, which makes them brittle.

Testing Pyramid

The solution to improve the quality of a product is to follow the testing pyramid and try to automate more at the lower levels instead of focussing on the E2E level. This also has to be done while the product/software is being developed.

Below is a modified version of the testing pyramid.

As you can see above, more emphasis is given to automated tests at the unit test level, followed by the component level, the integration test level, and finally the E2E level through the UI. It is relatively cheap to implement automated tests at the base of the pyramid, and it gets more expensive as we go up. Similarly, unit tests are faster to run, can isolate issues immediately and are more stable; these characteristics change adversely as we go up the test pyramid.

Obviously, it is not feasible to achieve this for an already existing system without a significant investment in people, time and tools, which will impact ROI. Depending on the situation, there is no right or wrong way to do test automation; having something is always better than nothing. Hence, when there is a need to automate regression test cases, it normally starts from the top. Significant upfront investment is needed to identify all critical regression test cases and the corresponding validations that should be performed by the automated tests. It is not feasible to automate all tests or to have 100% coverage. The success rate of an automation run will depend on various factors like test data, environment stability, etc. Everyone should understand that we are automating checkpoint verifications, and hence alerts are triggered only for the checks that are programmed. E2E regression through the UI should be only a minimal subset of what is covered at the other levels. We should also be ready to invest in maintaining the automation assets over a period of time.

In this case, stakeholder expectations need to be carefully managed. It is important to set the right expectations about the benefits offered by test automation for a successful project delivery. Test automation can deliver benefits over a long period of time, provided proper planning is done upfront to automate at the different levels of testing. Instead of considering it a solution for all pain points, we need to clearly articulate and set expectations about its limitations and long-term benefits.

Today, I had a chance to attend the SIGiST conference organised by ANZTB. It was a 2-hour session which included a presentation, a discussion, and networking opportunities. Being a first-time attendee at SIGiST, I was not sure what to expect. Overall it was a fruitful session, and I had a chance to meet people from other organisations and understand what is happening at their end.

Today’s presentation was about “Test Automation – What YOU need to know”. Overall, it was a good session, even though I found the presentation more geared towards upskilling manual testers and the steps they should take to stay relevant in today’s world. Going by the crowd surrounding the presenters after the session, it seems the topic was well received and resonated with most of the people in the room, but those with experience in automation/performance testing will find it basic. The presentation is expected to be uploaded here in a few days.

The topic for discussion was “Careers in Testing”. This was really engaging, and people participated actively, sharing their experiences with career progression, getting jobs, etc. There was a pretty lengthy discussion about how to make your resume stand out in the crowd, the importance of certification, soft skills, analytical and debugging skills, and how to market yourself. A few recruiters/managers shared thoughts on what they look for in a prospective employee’s resume and how they shortlist candidates for interview.