Today, I had a chance to attend the SIGiST conference organised by ANZTB. It was a 2-hour session which included a presentation, a discussion, and networking opportunities. Being a first-time attendee at SIGiST, I was not sure what to expect. Overall it was a fruitful session, and I had a chance to meet people from other organisations and understand what is happening at their end.

Today’s presentation was about “Test Automation – What YOU need to know”. Overall, it was a good session, even though I found the presentation more geared towards uplifting manual testers and the steps they should take to stay relevant in today’s world. Going by the crowd surrounding the presenters after the session, the topic was well received and resonated with most of the people in the room. But those who have experience in automation or performance testing will find it basic. The presentation is expected to be uploaded here in a few days.

The topic for discussion was “Careers in Testing”. This was really engaging, and people participated actively, sharing their experiences in career progression, getting jobs, etc. There was a pretty lengthy discussion about how to make your resume stand out from the crowd, the importance of certification, soft skills, analytical and debugging skills, and how to market yourself. A few recruiters/managers shared what they look for in a prospective employee’s resume and how they shortlist candidates for interview.

One of the common requirements for automated testing is to run the same test case against multiple sets of test data. Luckily, Postman supports this by providing the facility to use data files. This is available only when we run through the Postman Collection Runner or Newman.

For this example, let us take a free public API: http://services.groupkt.com/country/get/iso2code/AU . This API returns the name of the country for the two-letter ISO code passed. Let us assume that we need to test this API with multiple country codes, e.g. AU, IN, GB. Let us see how this can be achieved using Postman data files.

Environment file

First, create an environment using the Manage Environments option at the top right. Create an entry for the endpoint as below.

EnivironmentSetup

Create Collection

The next step is to create a collection with a GET request and write tests to verify the response. The GET request used here is {{EndPoint}}/country/get/iso2code/{{countrycode}}

EndPoint is defined in the environment file, and countrycode will come from the data file.

Now write some tests to check the results. The data coming from the data file is available under the “data” dictionary (similar to global/environment variables). It can be accessed as data.VARIABLENAME or data["VARIABLENAME"] in both test and pre-request scripts. The below screenshot shows the test which validates the country name based on the data file.
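Outside Postman, the shape of such a data-driven test can be sketched in plain Node.js. This is only a simulation: the response body and the data row below are hypothetical stand-ins for what the Postman sandbox actually supplies as `responseBody` and `data`.

```javascript
// Plain Node.js sketch of a data-driven check. The data row mirrors one
// row of the data file; the response shape is an assumed example, not the
// documented output of the API.
const data = { TCID: 'TC01', countrycode: 'AU', countryname: 'Australia' };

// In Postman this would be the actual HTTP response body.
const responseBody = JSON.stringify({
  RestResponse: { result: { name: 'Australia' } }
});

const jsonData = JSON.parse(responseBody);
const tests = {};
// Equivalent of a Postman assertion comparing response to the data file row.
tests['Country name matches data file'] =
  jsonData.RestResponse.result.name === data.countryname;

console.log(tests['Country name matches data file']); // prints true
```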

EnivironmentSetup

Data File

Postman supports both CSV and JSON formats. For CSV files, the first row should contain the variable names as the header. All subsequent rows are data rows. A JSON file should be an array of key/value objects where the variable name is the key.
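To make the equivalence between the two formats concrete, here is a small Node.js sketch showing the same rows as CSV and as the JSON array format Postman accepts (the column names are the ones used later in this example).

```javascript
// The same iteration data as CSV text and as the equivalent JSON data file:
// an array of objects keyed by the header names.
const csv = [
  'TCID,countrycode,countryname',
  'TC01,AU,Australia',
  'TC02,IN,India',
  'TC03,GB,United Kingdom'
].join('\n');

const [header, ...rows] = csv.split('\n');
const keys = header.split(',');
const asJson = rows.map(row => {
  const values = row.split(',');
  return Object.fromEntries(keys.map((k, i) => [k, values[i]]));
});

console.log(asJson.length);         // 3 -- one iteration per data row
console.log(asJson[0].countrycode); // AU
```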

The data file used in this example is below. It has 3 columns: the first is the test case ID, the second is the country code used in the request, and the third is the country name, which is used for asserting the response received. In this example, I am testing 3 different country codes.

datafile

Running Collections

While running collections, we need to specify below inputs.

  • Collection Name
  • Environment file
  • Data File

Depending on the number of records in the data file, the iteration count is auto-populated. The results also show the details for each iteration using the data. Details of the response can be found by expanding the response body.

collection

result

Running through Newman

We can run the same collection through Newman as well:

newman run PathToCollectionsFile -e PathToEnvironmentFiles -d PathToDataFile

In this case, the command is newman run DataDriven.postman_collection.json -e DataDrivenEnvironment.postman_environment.json -d data-article.csv.

Results will be as below

NewmanResults

In the previous blog, I explained how to create a GET request, analyze its response, write test cases for the API, and save the details to a collection for future use. In this blog, let me explain how to run collections using Newman.

What is Newman

Newman is a command line collection runner for Postman. Newman has feature parity with Postman and runs collections the same way Postman does. Newman also makes it easier to integrate API test execution with other systems like Jenkins.

Installing Newman

Newman is built on Node.js and hence requires Node.js to be installed as a prerequisite. Newman can be installed from npm with the below command:

$ npm install -g newman

Running collection using Newman

Collections are executed by calling the run command in Newman. The basic command for executing a collection is:

newman run PathToCollectionFile -e PathToEnvironmentFileIfAny

Below is an example of running the collection created in the previous blog post using Newman. The command will look like newman run /Users/abygeorgea/Projects/Postman/Postman\ Tutorial.postman_collection.json -e /Users/abygeorgea/Projects/Postman/Test.postman_environment.json

Results

The result of the API test execution will look like below. It has a detailed report of the number of iterations, the number of requests, test scripts, pre-request scripts, assertions, etc. By convention, passed assertions are shown in green and failed ones in red. The results look similar to the details provided when collections are executed using Postman.

NewmanResult

Additional Options of run command

Newman has various options to customize a run. The different options can be listed by running with the -h flag:

newman run -h

The options listed are below:

Abys-MacBook-Pro:~ abygeorgea$ newman run -h
usage: newman run [-h] [-v VERSION] [--no-color] [--color]
                  [--timeout-request TIMEOUT_REQUEST] [--ignore-redirects]
                  [-k] [--ssl-client-cert SSL_CLIENT_CERT]
                  [--ssl-client-key SSL_CLIENT_KEY]
                  [--ssl-client-passphrase SSL_CLIENT_PASSPHRASE]
                  [-e ENVIRONMENT] [-g GLOBALS] [--folder FOLDER]
                  [-r REPORTERS] [-n ITERATION_COUNT] [-d ITERATION_DATA]
                  [--export-environment [EXPORT_ENVIRONMENT]]
                  [--export-globals [EXPORT_GLOBALS]]
                  [--export-collection [EXPORT_COLLECTION]]
                  [--delay-request DELAY_REQUEST] [--bail] [-x] [--silent]
                  [--disable-unicode] [--global-var GLOBAL_VAR]
                  collection

The "run" command can be used to run Postman Collections

Positional arguments:
  collection            URL or path to a Postman Collection

Optional arguments:
  -h, --help            Show this help message and exit.
  -v VERSION, --version VERSION
                        Display the newman version
  --no-color            Disable colored output
  --color               Force colored output (for use in CI environments)
  --timeout-request TIMEOUT_REQUEST
                        Specify a timeout for requests (in milliseconds)
  --ignore-redirects    If present, Newman will not follow HTTP Redirects
  -k, --insecure        Disables SSL validations.
  --ssl-client-cert SSL_CLIENT_CERT
                        Specify the path to the Client SSL certificate. 
                        Supports .cert and .pfx files.
  --ssl-client-key SSL_CLIENT_KEY
                        Specify the path to the Client SSL key (not needed 
                        for .pfx files).
  --ssl-client-passphrase SSL_CLIENT_PASSPHRASE
                        Specify the Client SSL passphrase (optional, needed 
                        for passphrase protected keys).
  -e ENVIRONMENT, --environment ENVIRONMENT
                        Specify a URL or Path to a Postman Environment
  -g GLOBALS, --globals GLOBALS
                        Specify a URL or Path to a file containing Postman 
                        Globals
  --folder FOLDER       Run a single folder from a collection
  -r REPORTERS, --reporters REPORTERS
                        Specify the reporters to use for this run.
  -n ITERATION_COUNT, --iteration-count ITERATION_COUNT
                        Define the number of iterations to run.
  -d ITERATION_DATA, --iteration-data ITERATION_DATA
                        Specify a data file to use for iterations (either 
                        json or csv)
  --export-environment [EXPORT_ENVIRONMENT]
                        Exports the environment to a file after completing 
                        the run
  --export-globals [EXPORT_GLOBALS]
                        Specify an output file to dump Globals before exiting
  --export-collection [EXPORT_COLLECTION]
                        Specify an output file to save the executed collection
  --delay-request DELAY_REQUEST
                        Specify the extent of delay between requests 
                        (milliseconds)
  --bail                Specify whether or not to gracefully stop a 
                        collection run on encountering the first error
  -x, --suppress-exit-code
                        Specify whether or not to override the default exit 
                        code for the current run
  --silent              Prevents newman from showing output to CLI
  --disable-unicode     Forces unicode compliant symbols to be replaced by 
                        their plain text equivalents
  --global-var GLOBAL_VAR
                        Allows the specification of global variables via the 
                        command line, in a key=value format

Recently, one of my colleagues asked me to train him on using Postman and Newman for API testing. Below is a cut-down version of the training session I gave him.

What is Postman

Postman is an HTTP client for testing web services. It has a friendly GUI for constructing requests and analyzing responses. There is also a command line tool called Newman for running Postman collections from the command line, which helps integrate Postman with other testing tools.

How to Install

Postman is available both as a Chrome extension and as a native install. Native install files can be found here.

Example - GET Request

In order to trigger a GET request, we need to identify the below information:

  • URL of API
  • Authentication details
  • Header details

For this example, let us look at a Google Finance API. The API URL (including parameters) is http://www.google.com/finance/info?infotype=infoquoteall&q=NSE:BHEL . There are no authentication or header details that need to be passed with this. The Params button lists the parameters passed in a tabular format, which makes them easy to edit.
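As a side note on how such a URL is composed, the query string can be built programmatically. A minimal Node.js sketch using the standard URLSearchParams (note that it percent-encodes characters like the colon):

```javascript
// Compose the example URL's query string from its parameters.
// The parameter names come from the example URL above, not from any API docs.
const base = 'http://www.google.com/finance/info';
const params = new URLSearchParams({
  infotype: 'infoquoteall',
  q: 'NSE:BHEL'
});
const url = `${base}?${params.toString()}`;

console.log(url);
// http://www.google.com/finance/info?infotype=infoquoteall&q=NSE%3ABHEL
```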

In Postman, select GET in the drop-down and enter the API URL. The screen will look like below.

Request

Now hit the Send button. This triggers a call to the API; the response is then displayed in the UI. The screen will look like below.

Body

Headers returned are

Headers

Writing Tests

Above is an example of calling an API and analyzing its response. Postman also provides the facility to write test cases to verify the response. Test cases are written in JavaScript. Tests run after the request is sent and have access to the response objects. The editor also provides commonly used code snippets, which makes it easier to write tests.

The below example is written against one of the free APIs mentioned here. In this example, we have test scripts checking the status code, values in the headers, values in the response, and the response time. We can extend the test cases to more complex verifications by writing further JavaScript.

Tests

We notice the following from the above screenshot:

  • 6 test cases on the top part, checking the status code, response time, headers, and response.
  • The response received, on the bottom part.
  • The Tests tab shows that 6/6 test cases passed (in green).
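The checks in the screenshot can be approximated outside Postman as well. The below Node.js sketch is not the Postman sandbox; the response values are made up to mirror the kinds of assertions shown (status code, response time, header, body):

```javascript
// Mocked response details standing in for what Postman exposes to tests.
const response = {
  code: 200,
  responseTime: 150, // milliseconds
  headers: { 'content-type': 'application/json; charset=utf-8' },
  body: JSON.stringify({ id: 1, title: 'delectus aut autem' })
};

const tests = {};
tests['Status code is 200'] = response.code === 200;
tests['Response time is below 200ms'] = response.responseTime < 200;
tests['Content-Type is JSON'] =
  response.headers['content-type'].includes('application/json');
tests['Body contains expected id'] = JSON.parse(response.body).id === 1;

// All four pass for this mocked response.
console.log(Object.values(tests).every(Boolean)); // prints true
```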

Now let us dive into details of the test results. Below screenshot shows details of test cases and their status.

Test Result

Collections

We can save the current request and its associated tests (if any) for future use in Postman. Collections can also be exported and shared with others. Select Save As from the drop-down next to Save. We can specify a request name, provide a description, and select a folder and subfolder in which to save the request.

Collections

Once saved, it will be available for use in collections.

Collections

Environments

Very frequently, we have to run API tests in different environments. Most of the time, there are small differences in the requests, like a different URL. In such cases, we can use environments in Postman.

Click on the Settings button in the top right corner and select Manage Environments. This opens a pop-up where we can add an environment or import an existing environment file. For this tutorial, we will use the Add option.

Environment

Now we can specify all the parameters unique to each environment. In this case, I have created a key called “URL”, entered the corresponding value, and saved it as an environment named Test.

Environment

Environment

Now let us run the request using environments. The first step is to replace https://jsonplaceholder.typicode.com with the URL key in double curly braces. Then select Test in the Environment drop-down at the top and click Send. This executes the request and runs all associated test cases. Postman dynamically replaces the placeholder with the URL value specified in the selected environment. So, assuming we have different environment files, the request will be sent to a different URL each time based on the environment selected. We can have any number of keys and values in one environment file.
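The substitution Postman performs can be sketched in a few lines of plain JavaScript. This illustrates the idea only; it is not Postman's implementation, and the resolve helper below is hypothetical:

```javascript
// An environment mirroring the "Test" example above.
const environment = { URL: 'https://jsonplaceholder.typicode.com' };

function resolve(template, env) {
  // Replace each {{name}} with the matching environment value;
  // unknown keys are left untouched, as Postman leaves unresolved variables.
  return template.replace(/\{\{(\w+)\}\}/g, (match, name) =>
    name in env ? env[name] : match);
}

console.log(resolve('{{URL}}/todos/1', environment));
// https://jsonplaceholder.typicode.com/todos/1
```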

Environment

From the above, we can see that one test case failed. Let us have a look at the failed test case.

Environment

The failed test case checks the time taken for the response. The current request took 1491 ms, which is higher than the expected 200 ms.

Exporting Collections and environment files

Postman provides the facility to export collections and environment files as JSON. This helps with sharing the details with other team members and also with using Newman to run Postman collections. Let us have a look at how to export them.

Exporting Collections

  • Click on Collections Tab.
  • Click on ... next to Collections Name.
  • Click on Export.
  • Select V2 option and save the file.

    Collection

    Collection

Exporting Environment File

  • Click on Settings button on top right corner.
  • Click on Manage Environments.
  • Download the file.

    Export

Running Collections Using Postman Collection Runner

Postman provides a feature to run collections using collection Runner.

  • Click on Runner button on Top left to open collection runner
  • Select Collection name in drop down and select environment and then hit Start Run.

    Collection Runner

This triggers execution of the requests and test cases in the collection, and the results are shown. Also note that the Collection Runner has additional options like the number of iterations, a delay between requests, input from a data file, etc.

Once execution is complete, the results are shown like below. They have details of all assertions made and options to export the results for future verification.

Collection Runner

What Next ?

In this post, I have explained the basic usage of Postman for API testing. However, Postman provides much more functionality than covered above. We can also use Newman, the command line collection runner, to execute collections. I will write another post about it sometime soon.

In previous blogs here, I explained how to return an XML response using mountebank. However, most of the time we have to make some modifications to the template response before returning it. For example, we may have to replace details like a timestamp, or take an input from a request parameter and reflect it in the response.

One of the easiest ways to do this, without using frameworks like xml2js, is to extract the substring between the node tags and replace it. Below is a code snippet which helps achieve this.

The sample XML which we need to return is:

<Status>Added</Status>
<GeneratedID>12345</GeneratedID>

In the above example, assume that we need to replace the GeneratedID value every time based on the request coming through. We can do that as below:

var xmldata = "<Status>Added</Status>\r\n<GeneratedID>12345</GeneratedID>"

var generatedId = xmldata.match(new RegExp("<GeneratedID>"+"(.*)"+"</GeneratedID>"));
console.log(generatedId);
// Output will be as below. from Array we can extract the substring, index of its location etc
/*
[ '<GeneratedID>12345</GeneratedID>',
  '12345',
  index: 24,
  input: '<Status>Added</Status>\r\n<GeneratedID>12345</GeneratedID>' ]
  */

//so extract data from first location to get substring  
generatedId = xmldata.match(new RegExp("<GeneratedID>"+"(.*)"+"</GeneratedID>"))[1];
console.log(generatedId);
//Above will print "12345" , which is the expected value
// This can be used for extracting value of xml nodes

//if we need to replace this with another value ( possibly coming from request parameter)
var result = xmldata.replace(generatedId, "99999");
console.log(result);
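The same idea can be wrapped in a small helper. This is a generalization of the snippet above, not part of mountebank itself; the regex approach is fine for flat, predictable templates, but a real XML parser is safer for nested documents.

```javascript
// Hypothetical helper: swap the text of a named node using the same
// regex technique as above. Assumes the tag appears once and is not nested.
function setNodeValue(xml, tag, newValue) {
  const pattern = new RegExp('<' + tag + '>(.*)</' + tag + '>');
  return xml.replace(pattern, '<' + tag + '>' + newValue + '</' + tag + '>');
}

const xmldata = '<Status>Added</Status>\r\n<GeneratedID>12345</GeneratedID>';
const updated = setNodeValue(xmldata, 'GeneratedID', '99999');

console.log(updated.includes('<GeneratedID>99999</GeneratedID>')); // true
```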

Predicates in mountebank imposter files are a pretty powerful way to configure stubs. They help us return different responses based on request attributes like the method, query string, headers, body, etc. Let us have a quick look at matching on values from the request.

Based on Query String

Below is an example of matching based on the query string, for a request like path?customerId=123&customerId=456&email=abc.com. Note: this is a slightly modified version of the code on mbtest.org.

{
  "port": 4547,
  "protocol": "http",
  "stubs": [
    {
      "predicates": [{
        "equals": {
          "query": { "customerId": ["123", "456"] }
        }
      }],
      "responses": [{
        "is": {
          "body": "Customer ID is either 123 or 456"
        }
      }]
    },
    {
      "predicates": [{
        "equals": {
          "query": { 
              "customerId": "123",
              "email" :"abc.com"
               }
        }
      }],
      "responses": [{
        "is": {
          "body": "Customer ID is 123 and email is abc.com"
        }
      }]
    }
  ]
}

Based on Header Content

If input data is shared through values in the headers, those can be matched as well. The below snippet is taken directly from mbtest.org:

{
  "port": 4545,
  "protocol": "http",
  "stubs": [
    {
      "responses": [{ "is": { "statusCode": 400 } }],
      "predicates": [
        {
          "equals": {
            "method": "POST",
            "path": "/test",
            "query": {
              "first": "1",
              "second": "2"
            },
            "headers": {
              "Accept": "text/plain"
            }
          }
        },
        {
          "equals": { "body": "hello, world" },
          "caseSensitive": true,
          "except": "!$"
        }
      ]
    }
   ]
 }

In the previous post, I mentioned that we can use Galen for automated layout testing. Galen offers a simple solution for testing the location of objects relative to each other on the page. Galen is implemented using Selenium WebDriver, hence we can use it for normal functional automation testing as well.

Documentation of Galen

  • Galen has its own domain specific language to define Specs. Detailed documentation can be found here.
  • Galen has its own javascript API which provides a list of functions which make writing test cases easier. Detailed documentation can be found here.
  • Galen Pages JavaScript API is a lightweight JavaScript test framework. Details are available here.
  • Details of galen test suite syntax are here.
  • Galen framework has a detailed documentation of its usage and functions here.

Installation

Below are high-level steps to help you get started.

  1. Ensure Java is installed. Galen needs Java 1.8 or above.
  2. Download binary from http://galenframework.com/download/
  3. Extract the zip file
  4. Add the location of extracted files to PATH environment variables. A detailed guide for older versions of Windows is available here.
  5. Alternatively, on Windows, you can create a bat file that runs Galen by changing the Path on the fly. Details are in the below steps.

Setting up Galen Framework

There are different frameworks available for testing responsive design based on Galen. Galen Bootstrap is one such framework which can be reused.

  • Download and extract the project from GitHub. Keep the relevant files only; you can remove the rest.
  • Create an init.js file to load the galen-bootstrap/galen-bootstrap.js script and configure all devices and the website URL for testing. The URL mentioned below is an example of a responsive web design template.
load("galen-bootstrap/galen-bootstrap.js");
$galen.settings.website = "https://alistapart.com/d/responsive-web-design/ex/ex-site-FINAL.html";
$galen.registerDevice("mobile", inLocalBrowser("mobile emulation", "450x800", ["mobile"]));
$galen.registerDevice("tablet", inLocalBrowser("tablet emulation", "600x800", ["tablet"]));
$galen.registerDevice("desktop", inLocalBrowser("desktop emulation", "1024x768", ["desktop"]));

  • Run galen config from the command line in the project directory. This creates a Galen config file in the location where the command is run.

Create Galen Config

  • Modify the galen.config file to make Chrome the default browser and add the path to the ChromeDriver. There are other useful configs in there, like range approximation, screenshots, Selenium Grid, etc.
galen.default.browser=chrome
$.webdriver.chrome.driver=.\\..\\WebProject\\Driver\\chromedriver.exe
  • Create a folder named Test for keeping test cases and create a test file named example.test.js. Copy the below content into example.test.js, making sure to update the relative location of the init.js file created in the previous steps. The content loads the init.js file, which lists the website URL and the device sizes that need to be tested. It then calls a function to run the test on all devices. checkLayout is one of the available JavaScript API functions.
load (".\\..\\init.js")
testOnAllDevices("Welcome page test", "/", function (driver, device) {
    checkLayout(driver, "specs/homepage.gspec", device.tags, device.excludedTags);
});
  • Create a folder named specs and create a spec file named homepage.gspec. We need to fill the spec with layout checks. Below is a sample spec checking the image and section intro for the sample URL from init.js. The first section defines the objects and their locators. The second section says that on desktop the image will be on the left side of the section intro, while on mobile and tablet it will be above it.
@objects
  image        id         logo
  menu         css        #page > div > div.mast > ul
  sectionintro    css     #page > div > div.section.intro

= Main  Section =
  image:
      @on desktop
          left-of sectionintro
      @on mobile, tablet
          above sectionintro
  • Now create a bat file in the main folder to run the Galen test cases. Make sure to give the relative paths to the test file, configs, and reports correctly. Modify the Path variable to include the location of the Galen bin. This is not needed if we set the path manually while installing. However, I prefer to keep the Galen bin files in source control as well and point the path to that location, so that we don’t have any dependency outside the project.
SET PATH=%PATH%;.\galen-bin
galen test .\\test\\example.test.js  --htmlreport .\reports   --jsonreport .\jsonreports --config .\galen.config

Once all the files are created, the folder structure will look like below.

  • Run the bat file created above. This runs the example.test.js file, which invokes the ChromeDriver, navigates to the URL, resizes the browser, and then checks the specs. It lists the results in the command prompt. Once all test execution completes, it creates both an HTML report and a JSON report in the corresponding folder locations mentioned in the bat file. Below is a sample HTML report, which is self-explanatory.

Main report

If we expand the result for desktop emulation, it will look like below. It lists each assertion made and indicates whether it passed or failed.

If we click on an assertion, it shows the screenshot taken for that assertion with the relevant objects highlighted, which helps with verification. The below screenshot shows that the image is on the left side of the section intro, as defined in the spec file.

In a world where mobile-first seems to be the norm, testing the look and feel of websites on various mobile/tablet devices is essential. More businesses are now adopting responsive web design for developing their web applications and sites.

What is Responsive Web Design

According to Wikipedia, Responsive web design (RWD) is an approach to web design aimed at allowing desktop webpages to be viewed in response to the size of the screen or web browser one is viewing with. In addition, it’s important to understand that Responsive Web Design tasks include offering the same support to a variety of devices for a single website. A site designed with RWD adapts the layout to the viewing environment by using fluid, proportion-based grids, flexible images, and CSS3 media queries, an extension of the @media rule, in the following ways:

  • The fluid grid concept calls for page element sizing to be in relative units like percentages, rather than absolute units like pixels or points.
  • Flexible images are also sized in relative units, so as to prevent them from displaying outside their containing element.
  • Media queries allow the page to use different CSS style rules based on characteristics of the device the site is being displayed on, most commonly the width of the browser

How do we test responsiveness

The ideal option is to test on different physical devices of various screen sizes. However, it is impossible to get hold of all available mobile/tablet devices in the market. Even if we prioritize the devices using analytics, it is very expensive to buy enough of them. Along with this, we need to upgrade to newer devices frequently whenever Apple/Google/Samsung release upgraded versions.

The next possible option is to use device emulators, like Device Mode in Chrome DevTools. As pointed out in its documentation, this is only a close approximation of how the website will look on a mobile device, and it has its own limitations, which are listed here.

The best approach is to use emulators early in the development cycle and, once the UX design has stabilized, test on physical devices prioritized using analytics.

Challenges in testing Responsive Websites

Testing of responsive websites has its own challenges.

  • The number of options to be tested, or the number of breakpoints which need to be validated, is high.
  • Distinctive UI designs for different device screen sizes make testing time consuming, since each screen size requires checking that:

    • All UI elements like images, text, and controls are aligned properly with each other and don’t overflow the screen display area.
    • Font size, color, shades, padding, display orientation, etc. are consistent.
    • Controls which take input (like text fields) resize to cater for long content typed in by users.
    • Other CSS validations specific to mobile and tablet devices pass.
  • It is hard to test all of the above manually on every iteration.
  • Comparing UI, UX, and visual design requires more effort.
  • It is hard to keep track of every feature that needs to be tested; testing fatigue sets in, and non-obvious changes to the UI slip through.

Automated Responsive Design testing - Galen Framework

As mentioned above, one of the pain points in responsive design testing is the fatigue that sets in over multiple iterations of testing. This can be avoided by having an automated test framework. I recently came across the Galen framework, an open source framework for testing the layout of webpages. You can read about the Galen framework here. Galen makes automation of CSS testing easier. It has evolved over time and has its own domain-specific language and commands for CSS testing. I will go through the Galen framework in more detail in the next post.

Very often, we commit smaller pieces of work on our local machine as we go. However, before we push them to a centralized repository, we may want to combine these small commits into a single larger commit which makes sense to the rest of the team. I will explain how this can be achieved using interactive rebasing.

To start with, let us assume the initial commit history looks like below. It has 4 minor commits to the same file. Initial Commit Structure

Now we need to squash the last four commits into a single commit. The command required is below. It tells git to interactively rebase the last 4 commits from HEAD.

    $ git rebase -i HEAD~4

This pops up an editor with details of the last 4 commits and some description of the possible actions. Initially, all of them have the default value of pick. Since we are squashing the commits together, keep one of the commits as pick and change the rest to squash. Save and close the editor once the changes are made.

Interactive Rebasing

After this, another editor appears with the messages of each of the commits. We can comment out unnecessary messages using # and also modify the messages we want to keep. In the screen below, I have modified the message of the first commit and commented out the rest. Save and close the editor once the changes are made.

Selecting comments

Now git continues the rebase and squashes the commits as selected in the previous step.

Git rebase

If we look at the commit history, we can see that the commits are now squashed into a single commit.

Squashed Result

Over the past weekend, I noticed that my blog was unavailable: Azure had disabled hosting of my WordPress blog because I had run out of free credits for the month. I started looking for alternative options for hosting WordPress. That’s when I came across (Static Generator is All a Blog Needs - Moving to Octopress). I decided to give it a try.

Below are the main steps I followed to migrate to Octopress.

Documentation

  • Read documentation of Octopress here and Jekyll here

Setup

  • Install Chocolatey as mentioned in the documentation here. The below command can be run in cmd.exe opened as administrator:
cmd.exe
@powershell -NoProfile -ExecutionPolicy Bypass -Command "iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))" && SET "PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin"
  • As mentioned in the Octopress documentation, ensure Git, Ruby, and DevKit are installed. The Chocolatey way of installing them can be found at git, ruby, devkit. The below commands can be run in cmd.exe:
cmd.exe
choco install git.install
choco install ruby
choco install ruby2.devkit
  • By default, DevKit is installed in C:\tools\. Move into the devkit folder and run the below commands:
cmd.exe
ruby dk.rb init
ruby dk.rb install
gem install bundler

Install Octopress

cmd.exe
git clone git://github.com/imathis/octopress.git octopress
cd octopress
bundle install
rake install   # install the default Octopress theme

Install Octostrap3 theme & Customize

  • Since I didn’t like the default theme much, I installed the Octostrap3 theme as mentioned here
cmd.exe
git clone https://github.com/kAworu/octostrap3.git .themes/octostrap3
rake "install[octostrap3]"
  • Fix up any remaining issues. The date displayed as “Ordinal” can be fixed by updating the _config.yml file as mentioned in their blog. Below is the config which I used
_config.yml
date_format: "%e %b, %Y"
  • I made a few more changes to alter the navigation header color, the color of code blocks, and to include a sidebar with categories. The changes are as below.

The color of code blocks is changed by commenting out the line below in octopress\sass\custom\_colors.scss

$solarized: light;

The navigation header color is changed by adding the below to octopress\sass\custom\_styles.scss

.navbar-default {
    background-image: -webkit-gradient(linear,left top,left bottom,from(#263347),to(#263347));
}
.navbar-default .navbar-brand {
    color: #fff;
}
.navbar-default .navbar-nav>li>a {
    color: #fff;
}

Adding the category sidebar is done by following the steps mentioned in Category List Aside

Google Analytics Integration

The next step was Google Analytics integration. Detailed steps for this are available on various blogs; below is what I followed

  • Sign up for a Google Analytics ID here
  • Update _config.yml with the Google Analytics ID
_config.yml
# Google Analytics
google_analytics_tracking_id: UA-XXXXXXXX-1
  • Update the google_analytics.html file with the below
   <script>
    (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
    (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
    m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
    })(window,document,'script','//www.google-analytics.com/analytics.js','ga');

    ga('create', 'UA-XXXXXXXX-1', 'auto');
    ga('send', 'pageview');

  </script>
  • In the snippet above, UA-XXXXXXXX-1 should be replaced with site.google_analytics_tracking_id enclosed in double curly brackets (the Liquid variable syntax), so that the value is picked up from _config.yml
  • Log in to the Google Analytics site and navigate to Admin >> View >> Filters
  • Add a new filter to exclude all traffic to the hostname “localhost”. This helps exclude site visits made for development/preview purposes.

Sample Post

  • Now create a Hello World post and check how it looks
cmd.exe
rake new_post["Hello World"]
rake generate
rake preview

rake preview mounts a web server at http://localhost:4000. Opening a browser window and navigating to http://localhost:4000 will show a preview of the Hello World post.

Deploying to GitHub Pages

Detailed instructions can be found in Deploying to Github Pages. Below are the high-level steps copied from there

  • Create a GitHub repository with the name yourusername.github.io
  • Run the commands below. The first command will prompt for the GitHub repository URL, which needs to be filled in
cmd.exe
rake setup_github_pages   # this does all the configuration
rake generate
rake deploy
  • Now we can commit the source
cmd.exe
git add .
git commit -m 'your message'
git push origin source

Custom Domain

  • Create a file named CNAME in the blog source
  • Update it with the custom domain name. It has to be a subdomain (e.g. www.examplesubdomain.com)
  • Update the CNAME DNS record at your domain provider to point to username.github.io (the record target is a hostname, not a URL)
  • If a top-level domain (exampletopdomain.com) is needed, then configure an A record to point to the IP address 192.30.252.153 or 192.30.252.154.
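As a quick sketch of the CNAME step (the directory layout and domain below are placeholder assumptions, not from the post), the file is just a single line of text in the blog source, which GitHub Pages reads from the deployed branch:

```shell
# Create the CNAME file in the blog source;
# "www.example.com" stands in for the real subdomain.
blog=$(mktemp -d)
mkdir -p "$blog/source"
echo "www.example.com" > "$blog/source/CNAME"
cat "$blog/source/CNAME"
```

After the next `rake generate` and `rake deploy`, the file ends up in the published branch alongside the generated site.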

Migrating Old Blog Posts from WordPress

After completing the above steps, a new Octopress blog is ready to go. Below are the steps which I followed to migrate old blog posts from WordPress.

  • Clone Exitwp
  • Follow the steps mentioned in readme.md.

    • Export old wordpress blog using WordPress exporter in tools/export in WordPress admin
    • Copy xml file to wordpress-xml directory
    • Run python exitwp.py in the console from the directory of the unzipped archive
    • Each blog will be created as a separate directory under the build directory
    • Copy the relevant folders to the source folder of the blog
  • Find broken redirection links and fix them

    • The redirection links are now changed to something like {site.root}blog/2017/04/07/mountebank-creating-a-response-based-on-a-file-template-and-modifying-it-based-on-request-part-1/
  • Find broken image links and fix them
    • In order to make a later migration to another platform easier, I created a new config value in _config.yml as below: images_dir: /images
    • The image links are now pointing to {site.images_dir}/2017/04/27/Mountebank_XML_Response_Folder-Tree.jpg
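Fixing the broken image links can be scripted with sed. The sketch below is hypothetical (the old host name and post file are invented for illustration, and GNU sed is assumed): it rewrites the old WordPress upload prefix in a migrated post to the Liquid variable backed by the new images_dir config value.

```shell
set -e
work=$(mktemp -d)
mkdir -p "$work/source/_posts"

# a migrated post still pointing at the old WordPress upload path (invented example)
cat > "$work/source/_posts/2017-04-27-sample.markdown" <<'EOF'
![folder tree](http://oldblog.example.com/wp-content/uploads/2017/04/27/tree.jpg)
EOF

# rewrite the old host prefix to the Liquid variable backed by images_dir
sed -i 's|http://oldblog.example.com/wp-content/uploads|{{ site.images_dir }}|g' \
  "$work/source/_posts/"*.markdown

cat "$work/source/_posts/2017-04-27-sample.markdown"
```

Jekyll renders the Liquid variable at generate time, so moving the images later only requires changing images_dir in _config.yml.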

SEO Optimisation in Octopress

  • In the Rakefile, add the below two lines: post.puts "keywords: " and post.puts "description: "
  • The final content will look like below
Rakefile
post.puts "---"
post.puts "layout: post"
post.puts "title: \"#{title.gsub(/&/,'&amp;')}\""
post.puts "date: #{Time.now.strftime('%Y-%m-%d %H:%M:%S %z')}"
post.puts "comments: true"
post.puts "categories: "
post.puts "keywords: "
post.puts "description: "
post.puts "---"
  • Add relevant keywords and a description to all pages