Validate JSON response with URLs | Scripting Postman

I was recently testing an API with Postman where the server response contains a collection of URLs, and I wanted to assert that these URLs are active, i.e. return HTTP OK.

To achieve this objective, I have to do the following:

  1. Call API, and store url result
  2. Call each URL and assert

Let us go one by one.

Call the API, and store the URL result.
(PS: Instead of making an API call that returns URLs, I am faking my first API call with hard-coded URLs.)
  • Open Postman and create a folder called “ValidateResponseURL”
  • Create a Postman request, named “get-contents”. It will call your API (for this demo, a faked one).
  • Go to the Tests tab, and check that the response code for the “GET” is 200.
  • Store the collection of URLs in an environment variable using postman.setEnvironmentVariable(“…”).
  • Please note that, instead of storing the URL collection as-is, you should store the first element in a separate environment variable, so that you can decide whether the server returned any result at all. It will also let you use this dynamic URL in the next step. This is how the “get-contents” step looks on my machine.
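Since the screenshot does not survive here, below is a sketch of what the “get-contents” Tests script can look like. It uses the legacy Postman scripting API (`tests`, `responseCode`, `postman.setEnvironmentVariable`); the stubs at the top are hypothetical stand-ins for Postman's sandbox globals so the logic can be read and run standalone, and the demo URLs are made up:

```javascript
// --- hypothetical stand-ins for Postman's sandbox globals (Postman provides these) ---
var tests = {};
var environment = {};
var postman = {
  setEnvironmentVariable: function (key, value) { environment[key] = value; }
};
var responseCode = { code: 200 };
var responseBody = JSON.stringify({
  urls: ["https://example.com", "https://example.org"] // hard-coded demo URLs
});

// --- what goes into the "get-contents" Tests tab ---
tests["Status code is 200"] = responseCode.code === 200;

var data = JSON.parse(responseBody);
if (data.urls && data.urls.length > 0) {
  // First URL goes into its own variable so the next request can use {{testurl}}
  postman.setEnvironmentVariable("testurl", data.urls[0]);
  // Remaining URLs are queued for later iterations
  postman.setEnvironmentVariable("testurls", JSON.stringify(data.urls.slice(1)));
}
```

Inside Postman you would keep only the part below the second comment banner.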
Call each URL and assert
  • Now if you run the first Postman request and check the environment, you will find two environment variables, “testurl” and “testurls”.
  • Create another Postman request, named “validate-urls”.
  • Select the “GET” verb, and use the “testurl” environment variable, i.e. {{testurl}}, as the URL.
  • Now go to the Tests tab of this request, and validate that the response code is 200.
  • Fetch the next URL from the “testurls” environment variable, and execute the “validate-urls” request again.
  • When there is nothing left in the “testurls” collection, clear the environment variables. Your script should look something like this:
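In the absence of the screenshot, here is a sketch of that “validate-urls” Tests script, again with hypothetical stand-ins for Postman's sandbox globals so it runs standalone; the request name “validate-urls” and the variable names come from the steps above:

```javascript
// --- hypothetical stand-ins for Postman's sandbox globals ---
var tests = {};
var environment = { testurls: JSON.stringify(["https://example.org"]) };
var nextRequest = null;
var postman = {
  setEnvironmentVariable: function (key, value) { environment[key] = value; },
  clearEnvironmentVariable: function (key) { delete environment[key]; },
  setNextRequest: function (name) { nextRequest = name; } // tells the Runner what to run next
};
var responseCode = { code: 200 };

// --- what goes into the "validate-urls" Tests tab ---
tests["URL is alive (HTTP 200)"] = responseCode.code === 200;

var remaining = JSON.parse(environment.testurls || "[]");
if (remaining.length > 0) {
  // Point {{testurl}} at the next URL and loop back to this same request
  postman.setEnvironmentVariable("testurl", remaining[0]);
  postman.setEnvironmentVariable("testurls", JSON.stringify(remaining.slice(1)));
  postman.setNextRequest("validate-urls");
} else {
  // Nothing left: clean up so the next Runner session starts fresh
  postman.clearEnvironmentVariable("testurl");
  postman.clearEnvironmentVariable("testurls");
}
```

Note that postman.setNextRequest only takes effect in the Collection Runner, which is what makes the loop work.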

Once everything is set up, you can execute the collection using the Runner, and you will see results like the following:

As you can see in the result, all my websites are alive and responding with the HTTP OK response code, i.e. 200.

That's all, folks!






Browsemate – Search made easy in your open tabs, bookmarks and history

Have you ever wanted to search for an open tab across your Chrome windows, or struggled to find a website you visited earlier? Then this plugin is the right choice for you.

With the help of this plugin you can search for a link in your open tabs, saved bookmarks and even your browser history.

You can search by page title, or a URL/link.

How to use it?

Browsemate search is available from your browser address bar, or through a popup window opened by pressing the icon in the top-right corner of your browser.

Address Bar:

Type “bm <search criteria>” in your address bar and it will show all matching links.

Popup Windows:

Click the Browsemate icon next to your address bar, then type your search criteria. You can navigate the results with the arrow keys.



Docker commands

Create a container:

> docker run -d -P --name jenkins-master -v /Users/codebased/docker:/jenkins-data jenkins

To start, restart, or kill an existing container by name:

> docker start/restart/kill jenkins-master


To list the volumes that your containers refer to:

> docker volume ls

If you want to access the container's file system as the root user:

> docker exec -it --user root <containerid/name> bash

By default, vim is not present in the image. You can install it with these two commands:

> apt-get update
> apt-get install vim


“BEAUTY” – what is it?


  • Is beauty in applying makeup?
  • Is beauty in wearing trendy clothes?
  • Is beauty in having long-coloured nails?
  • Is beauty in the way you look & attract people?


  • Beauty is the way you come up!
  • Beauty is the way, you handle your environment
  • Beauty is the way, you take care of your loved ones
  • Beauty is the way, you cuddle a new born
  • Beauty is the way, you make someone smile
  • Beauty is the way, you help an old being
  • Beauty is the way, you apologise at your fault
  • Beauty is when, you say nothing while you are being scolded
  • Beauty is when, you see someone falling & pick him up
  • Beauty is  when, you see someone crying & wipe the tears
  • Beauty is when, you give your first look to your mother
  • Beauty is when, you see the first flash of your sibling
  • Beauty is when, you ask for something from your father & get more than your need
  • Beauty is when, you fight & say SORRY to save the bond
  • Beauty is when, you meet your best friend after a long time
  • Beauty is when, you share your personal stuff with close ones


  • It is what you do & how you do.
  • It is in your scars, made at first step of your learnings.
  • It is in, what you face while climbing up the ladder of success.
  • It is in, what difference you can feel in yourself.
  • It is not only present in your outer look….. but, it is present in you!

Beauty is

  • Beauty is present in your mind!
  • Beauty is present in your thoughts!
  • Beauty is present in your intentions!
  • Beauty is present in your experiences!
  • Beauty is present in your actions!
  • Beauty is present in your words!
  • Beauty is present in your heart!

It was there in yesterday’s, It is there in today’s & It would be there in tomorrow’s….It is there in days & nights…

Beauty revolves around you! Your whole life is beautiful….. it just needs to be found.


Git: Renaming myfile to MyFile on case-insensitive file systems, such as Windows.


It all started when I wanted to rename a solution file to ESD.Wealth.Services.sln.

I thought it was going to be as easy as renaming the file in the operating system and then pushing my changes. However, it was not that easy with my Windows machine and Git.

I managed to change the file name in the file system, but Git did not detect the change.

Then I could sense that it had something to do with the case insensitivity of file names on Windows.


After researching a bit I found this link. It shows how easy it is to change a folder's name:

git mv foldername tempname && git mv tempname folderName

So I applied the same with my file.

git mv esd.wealth.services1.sln temp.sln && git mv temp.sln ESD.Wealth.Services.sln

Now I can see my changes and it is ready to push.

Isn't it simple?
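The whole trick can be tried end-to-end in a throwaway repository; the file names and paths below are made up for the demo:

```shell
# Scratch repo to demonstrate the two-step, case-only rename
tmpdir=$(mktemp -d)
cd "$tmpdir"
git init -q
echo "stub" > myfile.sln
git add myfile.sln
git -c user.name=demo -c user.email=demo@example.com commit -qm "add solution file"

# Rename via a temporary name so Git records the case change on any file system
git mv myfile.sln temp.sln
git mv temp.sln MyFile.sln

git status --short
```

The status output shows the staged rename from myfile.sln to MyFile.sln, ready to commit and push.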



npm --save or --save-dev. Which one to use?


If you have ever worked with Node.js, you must have installed a package or two through the “npm install <package>” command. Running this command makes npm install the package in your working directory, under node_modules.

To save these packages as dependencies in your package.json, you have two choices:

  • --save-dev
  • --save
What is package.json? 

All npm packages contain a file, usually in the project root, called package.json. This file holds various metadata relevant to the project. It is used to give npm the information it needs to identify the project and handle the project's dependencies. It can also contain other metadata such as a project description, the version of the project in a particular distribution, license information, and so on.

Let us understand the difference that it can make.


Say you have a package.json within your root folder of your project.

If you don't have one, create a package file using the npm init command.

My package.json looks like this:

 {
   "name": "TMSPA",
   "version": "1.0.0",
   "description": "Single page application for TM",
   "main": "index.html",
   "scripts": {
     "test": "echo \"Error: no test specified\" && exit 1"
   },
   "repository": {
     "type": "git",
     "url": ""
   },
   "author": "Am",
   "license": "ISC",
   "bugs": {
     "url": ""
   },
   "homepage": ""
 }

Now I want to install some dependencies.

Before I install one, I need to know the package name. If you already know the package you want to install, that's good; otherwise, you can use the npm search command:

npm search bootstrap

or try one of the following search tools:

Once you have identified the right package, you can use the command mentioned above, i.e. npm install <package name>.

Here you have two, actually three, options.

1. Use --save-dev
e.g. npm install should --save-dev

You will use this option when you want to download a package that is needed only by developers, such as grunt or gulp. When you distribute your code to production, these dependencies will not be included.

As an example, let's say you want to use grunt as your task runner. This package is required only for development, so you should use --save-dev here.

npm install grunt --save-dev

The above command will save the dependency under the devDependencies section of your package.json, shown below:

 {
   "name": "TMSPA",
   "version": "1.0.0",
   "description": "Single page application for TM",
   "main": "index.html",
   "scripts": {
     "test": "echo \"Error: no test specified\" && exit 1"
   },
   "author": "Codebased",
   "devDependencies": {
     "gulp": "^3.8.11"
   }
 }
2. Use the --save flag

You will use this option when you want to save a package dependency for distribution. For items such as angularjs, or any other module that is required at run time by your program, use the --save switch.

npm install angularjs --save

Now my package.json looks like this:

 {
   "name": "TMSPA",
   "version": "1.0.0",
   "description": "Single page application for TM",
   "dependencies": {
     "angularjs": "^1.4."
   },
   "devDependencies": {
     "gulp": "^3.8.11"
   }
 }

3. Use nothing

If you run npm install without any flag, the package will still be installed; however, package.json will not be updated with the dependency.

This option is not recommended because others will have no way of knowing the dependencies your module has.


In conclusion, the --save-dev and --save flags are used to limit the scope of your dependencies.


Implementing SOLID principle For TDD on Web Service Reference

What is SOLID?

SOLID is a set of five basic principles that help create a good software architecture. SOLID is an acronym where:

• S stands for SRP (Single responsibility principle)
• O stands for OCP (Open closed principle)
• L stands for LSP (Liskov substitution principle)
• I stands for ISP (Interface segregation principle)
• D stands for DIP (Dependency inversion principle)

There are many good articles on the Internet explaining SOLID in detail. However, my focus here is on how I applied these principles to a web service proxy that I was working on.
Since I am following the TDD approach, I will start the development by explaining the unit test first:

Unit testing is all about testing the current method (high cohesion and low coupling). If this method relies on another method or object, you should mock that object or method so that you can test your method independently.

If you are not sure about the high cohesion and low coupling principles in OOP, I recommend learning about the SOLID principles.
You should also read about TDD.

This principle applies even when the dependent code has not been written yet (well, that is why we call it TDD ;-)).

In my SharePoint integration project, I was dependent on an external resource: I had to call the SharePoint web service method GetUserProfileByName, with the endpoint at http://<yourwebsite>/_vti_bin/UserProfileService.asmx

When we use the Visual Studio Add Service Reference tool, it creates a proxy class to call the web service:

[System.Web.Services.WebServiceBindingAttribute(Name="UserProfileServiceSoap", Namespace="")]
 public partial class UserProfileService : System.Web.Services.Protocols.SoapHttpClientProtocol {…


When I looked at this proxy file, it exposed more than 50 methods. Following SOLID, we should expose only the methods that are important; in our case that is just one method, GetUserProfileByName.
Another important factor to consider in the design is how to ensure that the SharePoint implementation can easily be replaced with a new external endpoint, such as Active Directory.

To solve these problems, I just need to follow the I and D points of the SOLID principles.

I – Interface segregation principle – many client-specific interfaces are better than one general-purpose interface.

D – Dependency inversion principle – one should “Depend upon Abstractions. Do not depend upon concretions.”

Now I will not go into the details of SOLID, but these are the steps you should follow:

1. Import the web service proxy into the project. Keep in mind that the web service proxy is a partial class, so we can create another partial class and implement a contract.
2. Create a contract that contains only those members that you need to expose.

public interface IServiceClient
{
    bool UseDefaultCredentials { get; set; }
    string Url { get; set; }
}

public interface IUserProfileServiceClient : IServiceClient
{
    PropertyData[] GetUserProfileByName(string accountName);
}

3. Now that the contract is ready, all I have to do is create a partial class with the same name. In my case it is:

public partial class UserProfileService : IUserProfileServiceClient {

NOTE: You don’t need to implement any members, because they have already been created by the proxy class.

4. Create a contract for the implementation that you want to test:

public interface IExternalUserService
{
    User GetUserProfile(string accountName);
}

5. Create a concrete class:

public class DirectoryUserService : IExternalUserService
{
    private readonly IUserProfileServiceClient _profileService;

    public DirectoryUserService(IUserProfileServiceClient profileService)
    {
        _profileService = profileService;
    }

    public User GetUserProfile(string accountName)
    {
        var propertyBag = _profileService.GetUserProfileByName(accountName);
        // ... map the returned property bag onto a User instance and return it
    }
}

Now that we have implemented the method we want to test, the following unit test uses the Moq framework to create a stub that is passed in as the IUserProfileServiceClient object.

public class DirectoryUserServiceTest
{
    private PropertyData[] _fakePropertyData;

    public void Setup()
    {
        // Arrange: a fake SharePoint property bag
        _fakePropertyData = new[]
        {
            new PropertyData { Name = "WorkEmail", Values = new[] { new ValueData { Value = "" } } },
            new PropertyData { Name = "PictureURL", Values = new[] { new ValueData { Value = "http://x/xtra/photos/photo152173.jpg" } } },
            new PropertyData { Name = "FirstName", Values = new[] { new ValueData { Value = "Amit" } } },
            new PropertyData { Name = "LastName", Values = new[] { new ValueData { Value = "Malhotra" } } },
            new PropertyData { Name = "DirectoryID", Values = new[] { new ValueData { Value = "152173" } } },
            new PropertyData { Name = "UserProfile_GUID", Values = new[] { new ValueData { Value = Guid.NewGuid() } } },
            new PropertyData { Name = "BusinessUnit", Values = new[] { new ValueData { Value = "RBS" } } },
            new PropertyData { Name = "Department", Values = new[] { new ValueData { Value = "Partner Engagement" } } }
        };
    }

    public void TestGetUserProfileNotNull()
    {
        // Arrange
        var mockService = new Mock<IUserProfileServiceClient>();
        mockService.Setup(r => r.GetUserProfileByName(It.Is<string>(x => x.Equals("amit"))))
                   .Returns(_fakePropertyData);

        IExternalUserService directoryService = new DirectoryUserService(mockService.Object);

        // Act
        User user = directoryService.GetUserProfile("amit");
    }
}


Point of Interest:

Understand SOLID:

S stands for SRP (Single responsibility principle):- A class should take care of only one responsibility.
O stands for OCP (Open closed principle):- Extension should be preferred over modification.
L stands for LSP (Liskov substitution principle):- A parent class reference should be able to refer to child objects seamlessly during runtime polymorphism.
I stands for ISP (Interface segregation principle):- A client should not be forced to depend on an interface it does not need.
D stands for DIP (Dependency inversion principle) :- High level modules should not depend on low level modules but should depend on abstraction.

Understand Moq:

Now if you are following the discipline of TDD and are building large-scale enterprise applications that interact with services, databases, or other data sources, writing data-driven unit tests becomes a challenge. Test setup (for each test) becomes more complicated and challenging than the actual unit test. This is where the concept of mocking comes into the picture.

SQL Queries to IIS Logs


There is often a case where you want to integrate IIS logs into your project. You can rely on third-party services or interfaces; however, the best approach is to have one application into which you can integrate the IIS log reading logic.

Microsoft provides a tool called Log Parser that supports SQL-like queries against IIS log files.

You can get the latest version of this tool from here.

Otherwise, you can also download it through a choco command.

choco install

Get Started


Go to the path C:\Program Files (x86)\Log Parser 2.2\

(Image: the Log Parser installation directory.)

By default, Log Parser provides a COM DLL that you can import into native C/C++ projects, or into a .NET project through the interop facility.

To use the COM DLL in a .NET project, you can also use the tlbimp.exe command.

Open command prompt and run this simple select statement:

 "C:\Program Files (x86)\Log Parser 2.2\LogParser.exe" "select * from c:\inetpub\logs\logfiles\w3svc1\u_ex150101.log"

You can also view the output in a GUI through the -o:DATAGRID switch:

 "C:\Program Files (x86)\Log Parser 2.2\LogParser.exe" "select * from c:\inetpub\logs\logfiles\w3svc1\u_ex150101.log" -I:w3c -o:datagrid

(Image: Log Parser query output in the DataGrid viewer.)
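Log Parser's SQL dialect goes well beyond SELECT *, with aggregates, grouping, and ordering. As a sketch (the log path here is hypothetical), this query lists the ten slowest URLs by average response time:

```sql
-- Top 10 slowest URLs by average time-taken, across all logs in the folder
SELECT TOP 10 cs-uri-stem, AVG(time-taken) AS AvgTimeMs, COUNT(*) AS Hits
FROM c:\inetpub\logs\logfiles\w3svc1\*.log
GROUP BY cs-uri-stem
ORDER BY AvgTimeMs DESC
```

Pass it to LogParser.exe exactly like the SELECT * examples above.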

Command Reference:

C# Integration

The first thing to do is add a reference to the Log Parser COM library in your .NET project using the Visual Studio IDE.


 public interface IWebLogService
 {
     List<IISLogCount> GetLogs(string fileName = null, string api = null);
     List<IISLog> GetLogDetails(string uri, string fileName = null);
 }

 public class WebLogService : IWebLogService
 {
     public List<IISLogCount> GetLogs(string fileName = null, string api = null)
     {
         if (string.IsNullOrWhiteSpace(fileName))
             fileName = "{0}\\*.log".FormatMessage(ConfigurationManager.AppSettings["IISLOGPATH"]);

         if (string.IsNullOrWhiteSpace(fileName))
             throw new ArgumentNullException(fileName);

         string query;

         if (string.IsNullOrWhiteSpace(api))
         {
             query = @"
                 SELECT date, cs-uri-stem, cs-method, count(cs-uri-stem) as requestcount from {0}
                 WHERE STRLEN (cs-username) > 0
                 GROUP BY date, cs-method, cs-uri-stem
                 ORDER BY date, cs-uri-stem, cs-method, count(cs-uri-stem) desc".FormatMessage(fileName);
         }
         else
         {
             query = @"
                 SELECT date, cs-uri-stem, cs-method, count(cs-uri-stem) as requestcount from {0}
                 WHERE cs-uri-stem LIKE {1} and STRLEN (cs-username) > 0
                 GROUP BY date, cs-method, cs-uri-stem
                 ORDER BY date, cs-uri-stem, cs-method, count(cs-uri-stem) desc".FormatMessage(fileName, " '%/api/{0}%' ".FormatMessage(api));
         }

         var recordSet = this.ExecuteQuery(query);
         var records = new List<IISLogCount>();
         int hit;

         for (; !recordSet.atEnd(); recordSet.moveNext())
         {
             var record = recordSet.getRecord().toNativeString(",").Split(new[] { ',' });
             if (!int.TryParse(record[3], out hit))
             {
                 hit = 0;
             }

             records.Add(new IISLogCount
             {
                 Hit = hit,
                 Log = new IISLog { EntryTime = Convert.ToDateTime(record[0]), UriQuery = record[1], Method = record[2] }
             });
         }

         return records;
     }

     public List<IISLog> GetLogDetails(string uri, string fileName = null)
     {
         if (string.IsNullOrWhiteSpace(fileName))
             fileName = "{0}\\*.log".FormatMessage(ConfigurationManager.AppSettings["IISLOGPATH"]);

         if (string.IsNullOrWhiteSpace(fileName))
             throw new ArgumentNullException(fileName);

         string query = @"SELECT"
             + " TO_TIMESTAMP(date, time) AS EntryTime"
             + ", s-ip AS ServerIpAddress"
             + ", cs-method AS Method"
             + ", cs-uri-stem AS UriStem"
             + ", cs-uri-query AS UriQuery"
             + ", s-port AS Port"
             + ", cs-username AS Username"
             + ", c-ip AS ClientIpAddress"
             + ", cs(User-Agent) AS UserAgent"
             + ", cs(Referer) AS Referrer"
             + ", sc-status AS HttpStatus"
             + ", sc-substatus AS HttpSubstatus"
             + ", sc-win32-status AS Win32Status"
             + ", time-taken AS TimeTaken"
             + " from {0} WHERE cs-uri-stem = '{1}' and STRLEN (cs-username) > 0 ORDER BY EntryTime".FormatMessage(fileName, uri);

         var resultSet = this.ExecuteQuery(query);
         var records = new List<IISLog>();

         for (; !resultSet.atEnd(); resultSet.moveNext())
         {
             var record = resultSet.getRecord().toNativeString(",").Split(new[] { ',' });

             records.Add(new IISLog { EntryTime = Convert.ToDateTime(record[0]), UriQuery = record[1], Method = record[2], UriStem = record[3], UserAgent = record[6] });
         }

         return records;
     }

     internal ILogRecordset ExecuteQuery(string query)
     {
         var logQuery = new LogQueryClass();
         var iisLog = new MSUtil.COMW3CInputContextClass();
         return logQuery.Execute(query, iisLog);
     }
 }

This is how my POC's model classes look:

 public class IISLog
 {
        public string LogFilename { get; set; }
        public int RowNumber { get; set; }
        public DateTime EntryTime { get; set; }
        public string SiteName { get; set; }
        public string ServerName { get; set; }
        public string ServerIPAddress { get; set; }
        public string Method { get; set; }
        public string UriStem { get; set; }
        public string UriQuery { get; set; }
        public int Port { get; set; }
        public string Username { get; set; }
        public string ClientIpAddress { get; set; }
        public string HttpVersion { get; set; }
        public string UserAgent { get; set; }
        public string Cookie { get; set; }
        public string Referrer { get; set; }
        public string Hostname { get; set; }
        public int HttpStatus { get; set; }
        public int HttpSubstatus { get; set; }
        public int Win32Status { get; set; }
        public int BytesFromServerToClient { get; set; }
        public int BytesFromClientToServer { get; set; }
        public int TimeTaken { get; set; }
 }

 public class IISLogCount
 {
        public IISLog Log { get; set; }
        public int Hit { get; set; }
 }

Once the service class has been defined, you can create whatever proxy you want over it.

Here I am using an ApiController-derived class as the proxy. It forwards HTTP requests to the service and returns the response over HTTP.

        public IHttpActionResult GenerateIISLog(string fileName = null, string api = null)
        {
            return Ok(_weblogService.GetLogs(fileName, api));
        }

        public IHttpActionResult GenerateIISLogDetails(string uri, string fileName = null)
        {
            return Ok(_weblogService.GetLogDetails(uri, fileName));
        }

I’m using the Ninject IoC/DI framework to inject the service into the controller:



Points of Interest

As you can see in the sample above, I managed to read a text file using SQL commands. Log Parser is a fantastic tool for retrieving usage stats, bandwidth, slow pages, and many more details.

The thing I like most about Log Parser is that it can even query Directory Service logs, mail logs, Event Viewer logs, and more, all using SQL commands.


Agile in Primary Schools

Teachers in primary schools around the world are starting to use Agile to create a culture of learning. This attitude is what has led education leaders to integrate Agile learning into these schools. Agile learning values individuals and interactions over processes and tools, and meaningful learning over the measurement of learning.

Although a large part of school life involves routine standardised testing, Agile learning is not about the sort of testing that measures content knowledge; it is about the kind that measures thinking. Genuine learning in primary schools means that young students will discover the importance of learning for the rest of their lives.

In the years since its first adoption, the Agile philosophy has been found to encourage continuous improvement. Schools that have integrated Agile share a number of qualities. For one, their highest priority is to satisfy the needs of students and their families through early and continuous delivery of meaningful learning. They deliver real learning frequently, in cycles from a couple of days to a few weeks, with a preference for the shorter timescale. Teachers and participating families cooperate daily to create learning opportunities for all members. This has been found to genuinely help learning, especially in primary schools where confined spaces often result in restless children.


(Image: a child-friendly Scrum board with columns “On Your Mark”, “Get Set”, “Go!”, and “Finish Line”, each filled with notes.)

(Agile learning schools integrate a child-friendly version of Scrum, with terms such as “On Your Mark” and “Go!” making children look forward to work. It also helps them visualise the work they have completed.)

Agile-integrated schools also build projects around motivated people, give them the environment and support they need, and trust them to get the job done. They recognise that the most efficient and effective method of conveying information to and within a group is face-to-face conversation. Primary schools that practise Agile learning promote sustainability: teachers, students, and families should be able to maintain a steady pace indefinitely. This kind of learning assumes that continuous attention to technical excellence and good design enhances adaptability. Interestingly enough, these schools have also mastered the art of maximising the amount of work not done, which is essential. The best ideas and initiatives emerge from self-organising groups. Finally, over the past couple of years, primary schools that have integrated Agile learning have seen students become much more successful in all other areas of life.

Like traditional Agile environments, Agile learning in primary schools makes use of the “sprint”. A sprint is a time-boxed period within which classes focus on a set of outcomes to be achieved before the end of the time-box. Much like a sprint in athletics, it is a short stretch with a starting line and a finishing line. When one sprint ends, the next one begins.

In addition to everything else, sprints have addressed the problem of young students getting lost in the shuffle, and of teachers wasting time on misguided units of study that run for months on end without any meaningful assessment of learning. In the long run, this sort of learning shows students that school is indeed a place that provides useful information and applicable methods. It teaches them that education has its own merit, and is not merely somewhere to escape their home lives and tune out the words of their teachers.

(Image: Agile learning often integrates small circled groups for interactive learning.)

Agile learning in primary schools has also tackled the isolation that so many educators struggle with. Teams have been formed out of pairs of teachers. This takes the form of conventional team teaching, of “guest” teaching where teachers trade off leading lessons according to individual instructional strengths, and of cross-class activity. Working together for extended periods and sharing responsibility for the same students has established the routine use of streamlined practice, and it also gives teachers many more chances to understand their students' needs.

Agile processes have proven most useful in the workplace, especially in software development. Because they have worked so well in those environments, education leaders have agreed to integrate similar methods into primary schools. Tools such as sprints and flexibility have a strong effect on young students and how they learn. Not only are they doing better in school; they realise that education, and learning, is a positive thing they should integrate into their daily lives for the rest of their lives. This also makes children think positively about school and institutions, making them much less likely to turn to crime later in their teens and adulthood. It encourages teamwork and quick thinking, and it focuses on learning more than the result. By the end, students feel accomplished and more ready to take on other challenges. As a whole, the positive trend that Agile learning has created in primary schools is a template that many more schools should follow.


How to do team meetings in an agile way – Explaining Lean Coffee ™

If you have been implementing Agile in your organisation, you must have been involved in one (or many) of the following meetings:

  • Sprint Planning Meeting
  • Daily Scrum Meeting
  • Sprint Retrospection Meeting
  • Random meetings.
  • Technical Meetings and so on…

Most of the time, I find that meetings have one (or more) of the following flaws:

  • Missing direction,
  • No outcome,
  • Unexpected discussions,
  • Meetings running over time,
  • Not all (important) points being discussed.

Meetings like the Sprint Retrospective are time-boxed, but they are rarely concise.

Recently, I attended a Scrum Masters discussion forum at CBA (Commonwealth Bank of Australia), where the Lean Coffee ™ meeting format was introduced.

I found the concept fascinating. I discussed it with my manager and then with my entire team, and I am glad to say that all of my colleagues got excited about it too. I thank them for accepting my suggestion, and here are some outlines that I would like to share with all of you.

What is Lean Coffee ™?

Lean Coffee ™ is a structured, but agenda-less meeting. Participants gather, build an agenda, and begin talking. Conversations are directed and productive because the agenda for the meeting was democratically generated. Jim Benson and Jeremy Lightsmith founded the Lean Coffee ™ movement in Seattle in 2009.

The Lean Coffee ™ format is essentially an approach to facilitate learning and collaboration through group discussions. The ‘Lean’ part of the name has its roots in Lean Thinking, and the ‘Coffee’ part obviously comes from the nice drink we gather around.

How it works

The protocol, borrowed and modified from the Sydney Lean Coffee group and Limited WIP Society, is as follows:

  1. Create a small Kanban board with four columns – To Discuss | Discussing | Discussed | Outcome.
  2. Brainstorm topics that everyone would like to discuss and note down each topic on Index card (or sticky note). Everyone presents his or her topic in a couple of sentences.
  3. Use dot voting (or initials) to vote on each of the topics. Each person gets two or three votes; you can cast one vote or several per topic.
  4. Add all topics to the “To Discuss” columns on a Kanban wall, with those that received the highest votes at the top.
  5. Do a quick calculation: divide the time in hand by the number of topics you would like to cover, and decide the length of conversation per topic.
  6. Discuss each topic in turn. Move the index card for the topic into the “Discussing” column. Ask the proposer to explain the topic first, then go round the table to give everyone an opportunity to make an initial comment, followed by open discussion.

     6.1 When the topic is done, move on to the next one. The topic proposer decides when the topic is done, and moves the index card to the “Discussed” column.

     If someone disagrees, a quick vote can justify discussing it further. Otherwise, if the majority agree there is nothing more to discuss, mark the topic as “Discussed” on the board. The outcome can be noted on the wall for further actions.

  7. At the end of the overall Lean Coffee ™ session, run a quick retrospective. What did you like? What didn't you like? What are your ideas for improvement?



I am delighted to tell you that implementing this process has been a major success. Here are a few observations from our meetings:

  • Because every step is time-boxed, there is very little room for wasted time.
  • Every topic was appreciated and important points were discussed first.
  • The attendees collectively  ensured that the session was a safe space where opinions were respected. There were no stupid questions and nobody lambasted anybody for what they had to say.
  • There were disagreements but those were discussed in a grown-up manner rather than becoming heated arguments.
  • There was no moderator to make this happen. The attendees moderated themselves.

So if you were waiting for the perfect time to seize the opportunity of building effective meetings, the time is now.

Make the Lean Coffee process work for you.



Some sites for Lean Coffee ™ Meetups around the world:

Seattle Lean Coffee ™.

Lean Coffee(tm) San Francisco.

Lean Coffee(tm) Sydney.

Lean Coffee(tm) Toronto.