Implementing SOLID Principles with TDD on a Web Service Reference

What is SOLID?

SOLID is a set of five basic principles that help create good software architecture. It is an acronym where:

•S stands for SRP (Single responsibility principle)
•O stands for OCP (Open closed principle)
•L stands for LSP (Liskov substitution principle)
•I stands for ISP (Interface segregation principle)
•D stands for DIP (Dependency inversion principle)

There are many good articles on the Internet explaining SOLID in detail. My focus here, however, will be on how I applied these principles to a web service proxy that I am currently working on.
Since I am following a TDD approach, I will start the development by explaining the unit test first:

Unit testing is all about testing the current method in isolation (high cohesion and low coupling). If the method relies on another method or object, you should mock that object or method so that you can test your method independently.

If you are not sure about the high cohesion and low coupling principles in OOP, then I recommend you learn about the SOLID principles.
You should also read about TDD.

This principle applies even when the dependent code has not been written yet (well, that is why we call it TDD ;-)).

In my integration project with SharePoint, I was dependent on an external resource: I had to call the SharePoint web service method GetUserProfileByName, with the endpoint at http://<yourwebsite>/_vti_bin/UserProfileService.asmx

When we use the Visual Studio Add Service Reference tool, it creates a proxy class to call the web service:

[System.Web.Services.WebServiceBindingAttribute(Name="UserProfileServiceSoap", Namespace="")]
public partial class UserProfileService : System.Web.Services.Protocols.SoapHttpClientProtocol { … }


When I looked at this proxy file, it exposed more than 50 methods. Following SOLID, we should expose only those methods that actually need to be exposed; in our case that is just one method, GetUserProfileByName.
Another important factor to consider in our design is how to ensure that the SharePoint implementation can easily be replaced with a new external endpoint, such as Active Directory.

To solve all these problems, I just need to follow the I and D points of the SOLID principles.

I – Interface segregation principle – many client-specific interfaces are better than one general-purpose interface.

D – Dependency inversion principle – one should “Depend upon Abstractions. Do not depend upon concretions.”

I will not go into the details of SOLID here, but these are the steps that you should follow:

1. Import the web service proxy into the project. Keep in mind that the web service proxy is a partial class, so we can create another partial class and implement a contract on it.
2. Create a contract that contains only those members that you need to expose.

public interface IServiceClient
{
    bool UseDefaultCredentials { get; set; }
    string Url { get; set; }
}

public interface IUserProfileServiceClient : IServiceClient
{
    PropertyData[] GetUserProfileByName(string accountName);
}

3. Now that the contract is ready, all I have to do is create a partial class with the same name as the proxy and implement the contract on it. In my case:

public partial class UserProfileService : IUserProfileServiceClient { }

NOTE: You don’t need to implement any members, because they have already been implemented in the generated proxy class.

4. Create a contract for the implementation that you want to test:

public interface IExternalUserService
{
    User GetUserProfile(string accountName);
}

5. Create a concrete class:

public class DirectoryUserService : IExternalUserService
{
    private readonly IUserProfileServiceClient _profileService;

    public DirectoryUserService(IUserProfileServiceClient profileService)
    {
        _profileService = profileService;
    }

    public User GetUserProfile(string accountName)
    {
        var propertyBag = _profileService.GetUserProfileByName(accountName);
        // ... map the returned property bag to a User object and return it
        // (the mapping was elided in the original post).
    }
}

Now that we have implemented the method we want to test, the following unit test uses the Moq framework to create a stub that is passed in as the IUserProfileServiceClient object.

public class DirectoryUserServiceTest
{
    private PropertyData[] _fakePropertyData;

    [SetUp]
    public void Setup()
    {
        // Arrange
        _fakePropertyData = new[]
        {
            new PropertyData { Name = "WorkEmail", Values = new[] { new ValueData { Value = "" } } },
            new PropertyData { Name = "PictureURL", Values = new[] { new ValueData { Value = "http://x/xtra/photos/photo152173.jpg" } } },
            new PropertyData { Name = "FirstName", Values = new[] { new ValueData { Value = "Amit" } } },
            new PropertyData { Name = "LastName", Values = new[] { new ValueData { Value = "Malhotra" } } },
            new PropertyData { Name = "DirectoryID", Values = new[] { new ValueData { Value = "152173" } } },
            new PropertyData { Name = "UserProfile_GUID", Values = new[] { new ValueData { Value = Guid.NewGuid() } } },
            new PropertyData { Name = "BusinessUnit", Values = new[] { new ValueData { Value = "RBS" } } },
            new PropertyData { Name = "Department", Values = new[] { new ValueData { Value = "Partner Engagement" } } }
        };
    }

    [Test]
    public void TestGetUserProfileNotNull()
    {
        // Arrange
        var mockService = new Mock<IUserProfileServiceClient>();
        mockService.Setup(r => r.GetUserProfileByName(It.Is<string>(x => x.Equals("amit"))))
                   .Returns(_fakePropertyData);

        IExternalUserService directoryService = new DirectoryUserService(mockService.Object);

        // Act
        User user = directoryService.GetUserProfile("amit");

        // Assert
        Assert.IsNotNull(user);
    }
}


Point of Interest:

Understand SOLID:

S stands for SRP (Single responsibility principle):- A class should take care of only one responsibility.
O stands for OCP (Open closed principle):- Extension should be preferred over modification.
L stands for LSP (Liskov substitution principle):- A parent class object should be able to refer child objects seamlessly during runtime polymorphism.
I stands for ISP (Interface segregation principle):- A client should not be forced to depend on an interface it does not need.
D stands for DIP (Dependency inversion principle) :- High level modules should not depend on low level modules but should depend on abstraction.

Understand Moq:

If you are following the discipline of TDD and building a large-scale enterprise application that interacts with services, databases, or other data sources, writing data-driven unit tests becomes a challenge. Test setup (for each test) becomes more complicated than the actual unit test. This is where the concept of mocking comes into the picture.

The most valuable Asset – Agile team – Rules


  1. Be conscious of your current level of productivity and happiness and make continual changes to grow.
  2. People are at their best when they are healthy, energized, focused and at ease. Design your day-to-day activities to enrich your mental, physical and professional selves.
  3. Work on yourself just like you do on a project. Plan the necessary disengagement from the project just as carefully as you’d plan the time you work on it. If you can systematically improve and expand your skills, then whether the project works out or not, you’ll always be in an increasingly better position as the weeks and months pass.
  4. Share early in the decision process to avoid big revelations later.
  5. Focus on listening rather than responding. Take the approach that everything is a hypothesis and you could be wrong. You are suggestive rather than instructive, replacing phrases such as “certainly”, “undoubtedly” with “perhaps”, “I think” and “my intuition right now”.
  6. Talk, code, design and write in a clear way instead of being clever.
  7. Admit when you are wrong.
  8. Respect each other’s time and space: if someone has their headphones on, don’t physically disturb them; ping them on chat or through another online medium, or create a task if you have a question.
  9. Make decisions based on facts over rank.
  10. Choose being responsible over a blame culture.
  11. Remove any negative thoughts over the coffee/ tea or a beer – we are family.
  12. Be open to change. We may know what is right, but maybe there is a better way to do something.
  13. Wake up fresh and work with a clear, focused mind rather than working that extra hour at the end of the day. Always aim to be fully engaged in an activity, or rest instead. Focus on expanding the capacity of your mental, physical, emotional and spiritual energy (yes, spiritual).
  14. It is everyone’s responsibility to make every day fun.
  15. Don’t be afraid to actively seek out and provide constructive criticism.
  16. Have the courage to think differently and discuss your inner thoughts with others.
  17. Committed to excellence in everything that we do and a clear understanding that the only way we can get there is as a team.
  18. Always try to put a positive spin on things.
  19. Do what we say we are going to do.
  20. Be humble.
Teamwork without email
  1. About to write an email. Write a task comment instead.
  2. Need to ask a question or get an updated file. Create a task.
  3. Have a team announcement. Publish on a blog or Wiki.
You are the business
  1. You run your own business, and just like any business owner you want your business to succeed. Share your thoughts on how you believe the business might succeed.
  2. Fix the problems even if they are not yours.
  3. Don’t complain, suggest solutions.
  4. Treat “failures” as opportunities for growth.
  5. Understand the bigger picture of the business even if it is not directly related to the work we do.
When we miss a deadline we ask questions
  1. We underestimated the time it would take. Why? We didn’t have the right tools. Why? By the time you get to your fifth “why” you reach the root of the problem. Then you can better avoid the same issue in the future.

Note to a Software Developer – Sigma 3: Personal Growth

This article is part of the series “Note to a Software Developer: Six Sigma”. Before you read this article, I highly recommend you read my introduction note.

This is my second article, Sigma 3, of my series “Note to a Software Developer”.


I’ve come across many developers who are proactive at the beginning of their job, then lose their interest and go blank; they end up waiting for their employer to offer some training, or for a kick from their managers.

Let me be honest here: personal (or professional) development is not somebody else’s responsibility. You were employed because of your knowledge, and it is solely your responsibility to maintain that rhythm. I repeat: your personal growth is yours to manage, in pursuit of your professional (and yes, personal; they are in fact connected) goals.

In this article, I am going to talk to you about three main points that you should use while building a personal development process.

1. Power

List down your Powers (strengths) and most importantly keep a track of these strengths. Your strengths will be your result card to track your progress.

You don’t need to build a list of “weaknesses”. Instead, my recommendation is to pick five key strengths that need improvement and categorize them as strengths with a power meter (level).


power card

Keep it as simple as you can. I recommend carrying this list as a hard copy so that you can view it anytime. This will help you keep your mind focused on five strengths at a time.

Once the power level of a specific item reaches 100%, remove it from your strength list and add another item that you would like to improve. Remember, you want to build strengths that you don’t have; that is what this power list is for.

Do a few things at mastery versus many things at mediocrity.

2. Planning and Correcting

As you go down the path of self-improvement, you need to understand that there is no single correct path, and if there is, you will find another that is better than the current one. Of course, you need some sort of plan that will lead your power meter to 100%. But it is hard to find one when you don’t know what is right. That is why planning (NOT a plan) is required.

There is a difference between  “plans” and “planning”. Planning is an active way of discussing the goals, objectives, strategies and tasks that we need to accomplish. Plans are the documentations of planning.  Since things change, plans need to get updated on a regular basis.  Planning is a continuous process that helps us adjust course, keep on-track and make accomplishing our goals more likely.


In this planning process (Plan+Act), you’ll make mistakes, as will your team, your customers, and your board. And you’ll fix those mistakes by honestly taking stock and changing the way you do things. That is why you need Correcting.

Correcting is the path of self-improvement and it is a MUST for leaders.

 Make the work you are doing today better than the work you did yesterday.


The bottom line is, Keep planning, Keep Correcting.

3. Show and Learn

I believe learning is a part of life. Just like in business, where change is a must, we need a reflection of those changes.

So far you have identified strengths and done some planning while accommodating feedback (correcting). Now it is time to show some output: output that can be measured against the power you are trying to achieve. This output can take any form, but you should be able to “SHOW” (measure) it.

The best leaders are the most dedicated learners. Read great books daily. Investing in your self-development is the best investment you will ever make.


Personal Growth

Message: You are a Leader.


In conclusion, you are the owner of your own future. You must know the powers you want to achieve, and you need to define your planning with continuous improvement. Whatever you do, the output should be measured against the power you are acting on. I will end this session with a great quote from Swami Vivekananda:

We are responsible for what we are, and whatever we wish ourselves to be, we have the power to make ourselves. If what we are now has been the result of our past actions, it certainly follows that whatever we wish to be in future can be produced by our present actions; so we have to know how to act. 

Note to a Software Developer – Series


A little bit of a background

Today, software development is not the same as it used to be. In the last few years, we have come a long way. Technology has changed the way we live: from CRTs to LEDs, from desktop computers to handheld devices, and now we are moving toward eyewear that can compute for you (Google Glass). It is fair to say that technology has touched billions of hearts.

As a developer, technology affects me first, then the user. Every day I hear about new tools and techniques: new ways of solving the same problems. I have to keep on top of everything and deliver the best solutions I can. Now, the question I have asked myself is: is technology knowledge the only thing that keeps us in a job, or is there something else that also makes the difference?

This article is about self-realization and asking this question: what qualities should I maintain that can help me in my career?

Why should you read this article?

Just like me, you are probably putting all your energy into keeping up with new technologies, believing that this alone will lead us down the right career path. I believe we need to rethink that belief: there are other, more critical, factors we must adopt to be hired by next-generation businesses.

This series aims to give you some insights on career progression that every software developer needs to know.

The whole series is divided into six parts, termed Six Sigma.

six sigma


I give importance to agility first, over anything else, and that is why it is at the top. However, I will share my thoughts on it last.

Please feel free to read through any section.

1. Agility (coming soon)
2. Technology Selection
3. Personal Growth
4. Representative (coming soon)
5. Brand – Branding yourself (coming soon)
6. Interaction (coming soon)

Note to a Software Developer – Sigma 2: Technology Selection

This article is part of the series “Note to a Software Developer: Six Sigma”. Before you read this article, I highly recommend you read my introduction note.


As I explained in my series note, there are six main areas for career progression.

Sigma 2 is about identifying, vetting and approving the best technology for your project. In this article I am going to emphasize three main points that you can use as checkpoints in the technology-selection process.

1. Relevance Check

Every now and then a new technology comes onto the market, trying to solve a problem that another may not. Thus, it is always advisable to check the relevance of any technology that you want to work on and encourage in your organization.

Just because something is hot in the market does not mean it is the best, even if it solves the current problem. The best example from my experience is Silverlight, which was adopted just because the leader of the organization (Microsoft) felt it was the best way to go; we all know that Silverlight is now dead technology.

 Select what is best for your project, instead of what is best for you – this thought will lead you to be an Architect.

2. Maturity Level

You don’t want to select a technology just because you know it. You should check its age, how mature it is to adopt, who is backing the technology, and how quickly bugs get fixed.

As an example: which JavaScript framework would you select from Knockout, AngularJS, EmberJS and now Sweet.js?

You have to check the selected technology and its future, and most importantly check how your selection will impact software maintenance.

I do agree that today’s technology will be obsolete tomorrow. However, the foundation should endure. What I mean by foundation is that web services will continue to exist, but DCOM or Remoting will not.

 Technology should be backed by solid foundation.

3. Make informed decisions

Make informed rather than emotional decisions. You will affect many lives and the future of your company (which feeds many families). Don’t introduce a technology just because you feel it is correct; be unbiased when making the most crucial decision in the technology-selection process.


Come with the facts, and time-box your findings.


technology selection

Message: You are a Scientist.

In conclusion, the technology you select will define the success of your project. Most importantly, it will affect your own future as well. Hence, we must define a process for selecting the right technology.

 Next Read | Sigma 3

RESTful Web API Implementation Guidelines




As we enter an era where all devices need to be connected to a server, there is a need to build APIs that are easy to use and that let all devices speak the same language.

What is REST?

Representational State Transfer. It relies on a stateless, client-server, cacheable communications protocol — and in virtually all cases, the HTTP protocol is used.

What is Hypermedia?

In layman’s terms: a REST response should include references to related Web API resources, so the client can discover valid URIs that it can call next.

  • Method of self-describing messages
  • Hypermedia as The Engine of Application State == HATEOAS
  • Hypermedia can create dynamic, self-documenting APIs
  • But an API can be great without HATEOAS


Web Services vs REST

The concept of an API has been in the development industry since COM; what has changed now is how easily an API can be made available to different devices.

  • COM – Limitation: COM must be packaged along with the program, and is accessible only from specific programming languages; a C++ DLL is not directly accessible from Java.
  • DCOM – Limitation: a proxy needs to be installed on the local machine. Again, compatibility issues and permission headaches.
  • ASMX – A web service accessible through the HTTP protocol; the client only needs to know how to call it through a proxy that uses SOAP as the message protocol.
  • SVC – The client still needs the knowledge to interpret SOAP.

Why is the web service era drying up? Web service calls require a standard such as SOAP to be followed, and not all devices know these standards. This adds the complexity of providing a client-side library that can communicate in the SOAP message format before a web service can be called. A REST Web API, in contrast, uses plain HTTP, so it has less overhead than the previous technologies. What this means for your code is that your functions are available as-is, from mobile to desktop; all a client needs is HTTP, which is available even in a $20 mobile phone.

Should you use REST?
  • REST or REST-ful APIs are easy to consume and maintain
  • If your API does not fit the Resource Model, RPC or custom APIs are fine
  • Your new development should start from REST principles; if they do not fit, then proceed with the older approach.

What will you learn here?

The idea behind this article is to give you guidelines on what should be done to make a good Web API. This document lists some points that I have learnt and implemented (some of them :-)) for a Web API architecture. You can implement these rules in any programming language. Let us go through the rules.


1.» Don’t use verbs as part of the URL. Rather than /todos/update, it should be URL: /todos, METHOD: POST or PUT.

The main reason for not using verbs is that as the project grows there will be too many URIs, which are hard to maintain. That makes it difficult to manage your API.
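To make the idea concrete, here is a minimal sketch (the class name and handler strings are my own illustration, not from any real framework): a single /todos URL whose operation is selected by the HTTP verb, so no verb ever appears in the path.

```csharp
using System;
using System.Collections.Generic;

// Illustrative dispatcher: one URL, many verbs. In a real Web API framework
// the routing table does this for you; this only demonstrates the principle.
public class TodoRoutes
{
    private readonly Dictionary<(string Method, string Path), Func<string>> _routes =
        new Dictionary<(string, string), Func<string>>
        {
            { ("GET",    "/todos"), () => "list todos"  },
            { ("POST",   "/todos"), () => "create todo" },
            { ("PUT",    "/todos"), () => "update todo" },
            { ("DELETE", "/todos"), () => "delete todo" },
        };

    public string Handle(string method, string path) =>
        _routes.TryGetValue((method, path), out var action) ? action() : "404 Not Found";
}
```

With this shape, /todos/update-style endpoints become unnecessary: the verb carries the intent.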

1.1» Use identifiers to locate individual items in URIs. The identifier does not have to be an internal key.

e.g. (internal key)

or (a non-internal key that is nevertheless a unique identifier for that single entity)

1.2» Use either plural or singular nouns. However, don’t mix them.

Preferred (plural)   Preferred (singular)   Not preferred (mixed)
/polls/23            /poll/23               /polls/23
/todos/2384          /todo/2384             /todo/2384

2.» Concrete is good, abstract is not: instead of /items/23, the URL should state the type of item, like /cars/23.

3.» Don’t use a query string (?) in the URL where possible; express associations through the URL instead: /resources/{id}/resources.


4.» Use the predefined HTTP status codes in the response headers of your API methods; refer to the standard list of HTTP status codes.

5.» You should also give clients an option to suppress status codes via an optional parameter, suppress_response_codes, which forces the API to always return 200. e.g. /customers/1?suppress_response_codes=true

Response should be:

HTTP status code: 200 {"error":"Could not authenticate you."}
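A sketch of how such a flag might be honored (the class, method and parameter names are my own, not from any framework): the real status code is computed as usual, and only at the edge is it flattened to 200 when the client asked for that.

```csharp
// Hypothetical helper: flattens any status to 200 when
// suppress_response_codes=true, leaving the error details in the body.
public static class ResponseShaper
{
    public static (int Status, string Body) Shape(int actualStatus, string body,
                                                  bool suppressResponseCodes)
    {
        if (suppressResponseCodes)
            return (200, body);   // error details remain visible in the payload

        return (actualStatus, body);
    }
}
```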

6.» Make the version mandatory. Specify the version with a ‘v’ prefix, but keep in mind that the API version is not the product version. There are four ways to implement versioning:

  • Version as part of the URL – The main drawback is maintaining clients on URL changes. If you define a URL-based version, move it all the way to the left in the URL so that it has the highest scope (e.g. /v1/customers).
  • Version as part of the query string – Same drawback as above.
  • Version as a content type in the Accept header – The main drawback of this approach is the complexity of implementing a custom header on all platforms. It can also encourage increased versioning, which causes more code churn. e.g. Accept: vnd.api.v1.customer
  • Version as a custom header – Same drawback as the content type. e.g. x-api-version: 2 or x-api-version: 10-07-2014

Whichever way you go, use a simple ordinal number; don’t use dot notation like v1.2. Before obsoleting previous versions, give developers at least one release cycle to react.
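For the URL-based option, version parsing can be as small as this sketch (the helper name is my own assumption about how you might structure it):

```csharp
using System;
using System.Text.RegularExpressions;

// Hypothetical helper: extracts the mandatory, 'v'-prefixed ordinal version
// from the far left of the path, e.g. "/v1/customers" -> 1.
public static class ApiVersionParser
{
    public static int Parse(string path)
    {
        var match = Regex.Match(path, @"^/v(\d+)/");
        if (!match.Success)
            throw new ArgumentException("URL must start with /v{n}/, e.g. /v1/customers");
        return int.Parse(match.Groups[1].Value);
    }
}
```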

7.» Implement partial response; Partial response allows you to give developers just the information they need.

  • Implement Partial GET
  • Implement Partial Patch
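As one possible sketch of a partial GET (the helper and the Customer type are my own illustration, not from the article): reflect over the resource and keep only the properties named in a fields query parameter, e.g. /customers/1?fields=id,firstName.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Customer
{
    public int ID { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

// Hypothetical helper: returns only the requested properties of a resource.
public static class PartialResponse
{
    public static Dictionary<string, object> Select(object resource, string fields)
    {
        // Case-insensitive so clients can send camelCase names.
        var wanted = new HashSet<string>(
            fields.Split(',').Select(f => f.Trim()),
            StringComparer.OrdinalIgnoreCase);

        return resource.GetType()
                       .GetProperties()
                       .Where(p => wanted.Contains(p.Name))
                       .ToDictionary(p => p.Name, p => p.GetValue(resource));
    }
}
```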

8.» Make it easy for developers to paginate objects in a database; use limit and offset parameters for this.
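The limit/offset pair maps directly onto LINQ’s Skip and Take; a minimal sketch (the method name is mine) of what a repository might do for GET /todos?offset=20&limit=10:

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical paging helper: offset = how many items to skip,
// limit = the page size.
public static class Paging
{
    public static List<T> Page<T>(IEnumerable<T> items, int offset, int limit)
    {
        return items.Skip(offset).Take(limit).ToList();
    }
}
```

A good paginated response would also carry the total count, so clients can compute the number of pages.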

9.» Accepting different formats via content negotiation is a best practice. Use the Accept header to determine which formats are allowed.

GET api/customers/1 HTTP/1.1
Accept: application/json, text/xml
Host: localhost...

10.» Use JSON as the default return type, following JavaScript camelCasing conventions for naming attributes.

e.g. {"firstName":"Joe","lastName":"Public"}

11.» Standard place to host is and if the request is coming from the browser, then redirect it to Also, redirect any request coming for dev, developer to

12.» On successful update, return 200 (or 204 if not returning any content in the body) from a PUT. If using PUT for create, return HTTP status 201 on successful creation.

13.» Use POST for non-idempotent requests. POST is neither safe nor idempotent, and is therefore the right choice for non-idempotent resource requests.

14.» PUT vs POST for creation: in short, favor POST for resource creation. Use PUT only when the client is in charge of deciding which URI (via its resource name or ID) the new resource will have: if the client knows what the resulting URI (or resource ID) will be, PUT to that URI. When the server or service decides the URI for the newly created resource, i.e. when the client doesn’t (or shouldn’t) know the resulting URI before creation, use POST.

15.» Choose CORS whenever and wherever possible.

16.» Non-Resource API

  • Be sure that the user can tell it is a different type of operation
  • Be pragmatic and make sure that these parts of your API are documented
  • Don’t use them as an excuse to build an RPC-style API

17.» It is recommended to return an Entity Tag (ETag) header for each GET (read) operation. Both weak tags (which start with W/) and strong tags should be supported. The If-None-Match header key is used by the client to send back the ETag value.

GET /api/customers/1 HTTP/1.1
Accept: application/json, text/xml
Host: localhost
If-None-Match: 823748374783

If the ETag still matches, the response should return a 304 (Not Modified) status code.
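A sketch of that decision (the class and method names are mine): compare the request’s If-None-Match value against the resource’s current ETag, treating a W/ prefix as a weak tag.

```csharp
// Hypothetical ETag check: 304 when the client's cached copy is still
// current, 200 when a fresh representation must be returned.
public static class ETagCheck
{
    public static int StatusFor(string currentETag, string ifNoneMatch)
    {
        return Normalize(currentETag) == Normalize(ifNoneMatch) ? 304 : 200;
    }

    private static string Normalize(string tag)
    {
        if (tag == null) return null;
        // Weak tags start with W/ but still identify the same value.
        return tag.StartsWith("W/") ? tag.Substring(2) : tag;
    }
}
```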

18.» Protect your API

  • SSL is almost always appropriate.
  • Secure the API itself using:
    • Cross Origin Security:
    1. JSON with Padding (JSONP) – not recommended, because the callback has to be maintained.
    2. Cross-Origin Resource Sharing (CORS) –

19.» Authorization & Authentication

    1. Authorization – Use API keys.
    2. Authentication – Options:

    • Website security such as Form, Windows Auth, 1st Party Developer (internal auth.)
    • Use OAuth 3rd party Developer (external auth.)

In the next series of posts, I will explain and implement these rules using .NET Web API. Stay tuned… Connect with me: Amit Malhotra on LinkedIn.

Tips #2: Problem connecting with NuGet


For some reason I could not connect to NuGet from Visual Studio: I was getting an error in the Package Manager Console of the IDE.


It came to light that this was a proxy issue, because in my organisation the proxy is configured in the IE settings.

Hence I had to apply the same settings to the devenv configuration too.


1. Open the devenv.exe.config of your Visual Studio installation; in my case it is Visual Studio 2013, located at: C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE

2. Open this text file (run your editor as admin) and add the node <defaultProxy> inside the <system.net> node:

 <system.net>
   <defaultProxy useDefaultCredentials="true" enabled="true">
     <proxy bypassonlocal="true" proxyaddress="http://yourproxyaddres..." />
   </defaultProxy>
 </system.net>

In the attribute “proxyaddress” you can specify the address and port of your proxy server.


Tips #1: camelCase JSON from Web API


I came across a situation where I needed to build a Web API that returns values in camel case, as per the JavaScript naming convention.



I had two options. The first is to define a JSON property name for each member of the POCO (Plain Old CLR Object):

public class TodoItem
{
    [JsonProperty("id")]
    public int ID { get; set; }

    [JsonProperty("title")]
    public string Title { get; set; }

    [JsonProperty("completed")]
    public bool Completed { get; set; }
}

This way the JSON serializer knows how to represent each property in the JSON result.

The other solution is to use the Newtonsoft.Json.Serialization.CamelCasePropertyNamesContractResolver class.

Use the snippet below to configure the serialization format.

config.Formatters.JsonFormatter.SerializerSettings.ContractResolver = new CamelCasePropertyNamesContractResolver();

I wrote this line in the WebApiConfig.cs file of the Web API project:

public static void Register(HttpConfiguration config)
{
    // Configure serializer settings so that the response is camel-cased.
    config.Formatters.JsonFormatter.SerializerSettings.ContractResolver =
        new CamelCasePropertyNamesContractResolver();
}

An introduction to Gulp Build System

What is Gulp?

Just like Grunt, which is a build tool for front-end web development, Gulp is a newer tool that runs predefined tasks on static files such as CSS, HTML, JavaScript and images. Some of these tasks include:

  • minimising file sizes
  • watching file changes and automating the deployment

What is the major difference between Grunt and Gulp?

  • Grunt plug-ins often perform multiple tasks, whereas Gulp plug-ins are designed to do one thing only; it is clean and neat.
  • Grunt requires plug-ins for basic functionality such as file watching; Gulp has them built-in.
  • Grunt uses JSON-like data configuration files; Gulp uses leaner, simpler JavaScript code. You don’t create multiple configuration files in Gulp for each task like in Grunt.

How to use Gulp?

I have recorded a video that explains:

  1. How to Install Gulp
  2. How to Install Gulp Plugins
  3. How to find Plugins
  4. How to define Gulp tasks
  5. How to execute a bunch of tasks
  6. How to watch changes in static resources and execute tasks automatically.










Version control is probably the single most important piece of the development environment after the IDE itself, essential for teams and I’d argue extremely valuable even for a solo developer. So, if you’re about to start a new project it’s a good time to consider what version control solution you’re going to use.

Before we go into our friendly debate, let us think about a list of items that are important to the business.

What does the Business need?

CHECK LIST – Business Requirement for Source Control
Source Code repository is backed up with minimal effort
Source Code repository is available in-house
Only authorised people can view and access Source Control
Easy to recover code at any point in time.
If one person is not available, it should not impact others’ productivity
Branching and Merging is allowed.
(Probably most important) What developers want….

What does the Developer need?

Check List
Branching and merging should be as easy as it can be
Integration with my other developer tools such as Visual Studio, Eclipse etc.
A way to shelve (tag) code in the repository, for backup or for sharing code
Multiple users can work on the same file
Easy to manage any conflicts or even rollback.
Compare any changes and recover code anytime.

Knowing TFS and Git.

Well, I like Team Foundation Server and I like Git, but which is better?


Team Foundation Version Control (TFVC) uses a single, centralized server repository to track and version files. Local changes are always checked in to the central server where other developers can get the latest changes.

Git is a Distributed Version Control System (DVCS) that uses a local repository to track and version files. Changes are shared with other developers by pushing and pulling changes through a remote, shared repository.

NOTE: With the help of TFS 2012 or above we can support local as well as centralized model.

It is quite clear that you can go either way, and Git has no particular edge over TFS except its support in the open source community.
You can get Visual Studio Online free of cost for up to 5 developers.

Even though Microsoft TFS has started supporting the DVCS model, the support is very limited; you can only use it for certain purposes, such as modifying files offline.

TFS with TFVC, with either model (Edit Commit with local workspaces, Check-in Check-out with server workspaces), is still centralised, so a developer working locally without a connection to the server has some limitations such as not being able to create branches, merge changes, view history or perform compares without the server connection. In other words, TFVC is not a DVCS, and if that’s what you need then you’ll probably be looking at the most popular DVCS tool, Git.


Git arose from the Linux kernel development community (including and in particular Linus Torvalds) in 2005 and has become the leading DVCS tool. It has a totally different model to centralised version control systems like TFVC by keeping a full repository locally on the developer’s machine. The repository is essentially a file based system with metadata in hidden files. The developer can then perform any task offline, including checking out, editing, committing, viewing and diffing on the full history, creating and destroying branches, merging and more.

At appropriate points the developer’s local repository can be synchronised with a remote repository either by fetching, pulling or pushing changes. This, of course, does require a connection to the remote repository.

Git is popular with Open Source teams, who are typically highly distributed, and suits modular codebases where individuals can work entirely locally for extended periods.

Most of the Windows Git tools are command line based, but Visual Studio also has Git integration, either via the Visual Studio Gallery or in 2013 it’s out of the box.

Having added this solution to a Git repository there are commands available within Visual Studio to work with Git, in this case right-clicking on the solution to commit changes locally to the Git repository.

TFS with Git.

TFS 2013 now includes Git repository support out of the box. When you create a new Team Project there is a new page in the wizard asking you to specify whether you want TFVC or Git for the version control aspect of the Team Project.



I won’t go into a step-by-step how-to for using Git in TFS, as this is covered very well elsewhere, but the Git features and the TFS features are now integrated. For example, when I commit local changes to Git in TFS, I can also associate work items (such as a user story or task).


This means that I can continue to use the capabilities of TFS (Agile tools, work items, build and so on), whilst selecting Git as the version control solution within TFS, and having the two areas integrated.


Is this a special “Microsoft” version of Git? No, it’s absolutely standard, meaning that you can use the tools with Visual Studio and/or your own preferred command line or other Git tools, as you wish.




I hope that this clarifies the new options around version control within TFS, and that where you might have been having a discussion about DVCS vs Centralised, or Git vs TFS, that new possibilities have been opened up.

If the team decides to go for Git, then it is probably good to go with the TFS and Git combination; then we probably don’t need any other agile modelling tools.

In fact, we can probably use the Team Rooms feature of Visual Studio 2013.