Prototype in JavaScript

What is a prototype in JavaScript?

A prototype is an object that another object links to for property lookups. Simply put, it lets you share functions or properties with every single object of a specific type. Every JavaScript object has a prototype, and the prototype is itself an object. JavaScript objects inherit their properties and methods from their prototype.

There are two types of prototypes:

1. A function prototype is the object instance that will become the prototype for all objects created using this function as a constructor.
2. An object prototype, on the other hand, is the object instance from which an object inherits.

Let us take an example:

function Employee(name)
{ 
  this.name = name;
}

Now the Employee.prototype is the function prototype. If I create an object of type Employee, then it has an object prototype, which is accessible as

employee.__proto__

Please note that Employee.prototype points to the same object as employee.__proto__:

var employee = new Employee('John');
Employee.prototype === employee.__proto__  // true

Similarly, if I create another employee object, its __proto__ also points to the same Employee.prototype object.

var employee1 = new Employee('Jane');
employee1.__proto__ === employee.__proto__  // true
employee1.__proto__ === Employee.prototype  // true

It means that the prototype object is shared between objects of type Employee.

How does the JavaScript engine read a property?

If I type employee.salary(), it will throw an error because the salary method does not exist. However, if I modify the prototype object so that a salary() function is attached to the Employee prototype, then salary() becomes available to every single object of type Employee.

Employee.prototype.salary = function () {return 0; };

Now employee.salary() will return 0. This means that if an object does not have a property itself, the JavaScript engine looks it up on the __proto__ object, and so on up the prototype chain.

You can check who owns the salary property with this line of code:

employee.__proto__.hasOwnProperty('salary')

It will return true because salary is assigned to the Employee prototype.
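To see the difference between own and inherited properties, here is a minimal sketch continuing the example above:

employee.hasOwnProperty('salary')           // false: salary lives on the prototype
employee.__proto__.hasOwnProperty('salary') // true
employee.hasOwnProperty('name')             // true: name was set in the constructor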

How can I change the prototype object?

If you assign a new object to the prototype property, a new prototype object is created; however, any existing objects still point to the old prototype. So far I have two variables, employee and employee1, and both point to the same prototype. That means employee.age and employee1.age return the same value, i.e. undefined. Now I replace the Employee prototype like this:

Employee.prototype = {age: 10}

Then employee.age and employee1.age are still the same, i.e. undefined, because both objects still point to the old prototype; however, employee.age !== Employee.prototype.age.

When I create a new Employee object like this:

var employee2 = new Employee();

employee2 points to the new prototype, so employee2.age will return 10.
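Putting the whole sequence together, a minimal sketch of the behaviour described above:

function Employee(name) { this.name = name; }

var employee = new Employee('a');
var employee1 = new Employee('b');

Employee.prototype = {age: 10};  // replaces the prototype object

console.log(employee.age);   // undefined: still linked to the old prototype
console.log(employee1.age);  // undefined

var employee2 = new Employee('c');
console.log(employee2.age);  // 10: linked to the new prototype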

How does inheritance work?

  • Make a call to the parent function using <base function>.call(this, <parameters>).
  • Set the prototype for the derived class.
  • Set prototype.constructor for the derived class.
'use strict'

function Employee(name) {
  this.name = name;
}

Employee.prototype.age = 5;
Employee.prototype.calculateSalary = function() { return 1000; };

function Manager(name) {
  // if you do not call the base constructor,
  // the manager will not have a name.
  Employee.call(this, name); // 1
  this.hasCar = false;
}

Manager.prototype = Object.create(Employee.prototype); // 2
Manager.prototype.constructor = Manager; // 3

var manager = new Manager('test');

console.log(manager.calculateSalary()); // 1000
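A few quick checks, my own additions, to confirm the chain is wired correctly after the code above:

console.log(manager.name);                               // 'test', set via Employee.call
console.log(manager.age);                                // 5, inherited from Employee.prototype
console.log(manager instanceof Employee);                // true, thanks to step 2
console.log(Manager.prototype.constructor === Manager);  // true, thanks to step 3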

How is prototyping done with classes?

'use strict'

class Employee {
  constructor(name) {
    this.name = name;
  }

  calculateSalary() {
    return 1000;
  }
}

class Manager extends Employee {
  constructor(name, hasCar) {
    super(name);
    this.hasCar = hasCar;
  }
}

var manager = new Manager('test', true);
console.log(manager.calculateSalary()); // 1000
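Classes are syntactic sugar over the same prototype machinery; a short sketch to verify that methods declared in a class body land on the prototype:

console.log(Object.getPrototypeOf(manager) === Manager.prototype);             // true
console.log(Object.getPrototypeOf(Manager.prototype) === Employee.prototype);  // true
console.log(manager.hasOwnProperty('calculateSalary'));                        // false: it lives on the prototype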

Playground – JavaScript Object Properties

How to define properties?

There are many ways to do this:

1. Use an object literal (a.k.a. brace notation).

var employee = {
 name : {first: 'Vinod', last: 'Kumar'},
 gender: 'M'
};

2. Use the . operator

employee.fullName = "Amit Malhotra";

3. Use the [] operator

employee["fullName"] = "Amit Malhotra";

4. Use ECMAScript 5 defineProperty with an accessor (get/set) descriptor

Object.defineProperty(employee, 'fullName', {
  get: function() {
    return this.name.first + ' ' + this.name.last;
  },
  set: function(value) {
    var nameParts = value.split(' ');
    this.name.first = nameParts[0];
    this.name.last = nameParts[1];
  }
});
 
employee.fullName = 'Amit Malhotra'
 
console.log(employee.name.first); // OUT: Amit

5. Use ECMAScript 5 defineProperty with a data (value) descriptor

Object.defineProperty(employee, 'fullName', {
 value: 'Amit Malhotra',
 writable: true,
 enumerable: true,
 configurable: true
});

What is a property descriptor, after all?

In JavaScript, you can define metadata about a property. A property descriptor carries flags such as writable, enumerable, and configurable.

You can get the property descriptor using Object.getOwnPropertyDescriptor method.

Object.getOwnPropertyDescriptor(employee.name, 'first');

// Out: Object

/* {
  value: "Amit",
  writable: true,
  enumerable: true,
  configurable: true
} */

writable – whether the property's value can be changed.

Object.defineProperty(employee, 'age', {writable: false});

Now if I try changing the property:

employee.age = 12

Then it will throw this error:

TypeError: Cannot assign to read only property 'age' of object '#<Object>'

Please note that it throws an exception only under 'use strict'. Otherwise it fails silently, without changing the value of age to 12.

enumerable – whether the property shows up when you enumerate the object, like this:

for (var propertyName in employee) {
  console.log(propertyName + ': ' + employee[propertyName]);
}

It prints name: [object Object], gender: M.

If you set the enumerable to false:

Object.defineProperty(employee, 'gender', {enumerable: false})

 

Object.keys(employee) will no longer return the gender property. Similarly, if you set Object.defineProperty(employee, 'gender', {enumerable: true}), then it will return gender again. You can still access gender directly as employee.gender; you just cannot see it in Object.keys(employee).
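A minimal sketch of that behaviour:

Object.defineProperty(employee, 'gender', {enumerable: false});

console.log(Object.keys(employee));  // gender no longer appears
console.log(employee.gender);        // 'M' – still accessible directly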

configurable – whether you can change the property's descriptor or delete the property.

Object.defineProperty(employee,'age', {configurable: false})

Now you cannot change the enumerable or configurable flags of age, nor delete it. However, you can still change writable (only from true to false).
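A quick sketch of what configurable: false locks down:

Object.defineProperty(employee, 'age', {configurable: false});

delete employee.age;  // fails (throws a TypeError in strict mode)

// Object.defineProperty(employee, 'age', {enumerable: false});
// TypeError: Cannot redefine property: age

Object.defineProperty(employee, 'age', {writable: false});  // still allowed: writable may go from true to false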

 

Validate JSON response with URLs | Scripting Postman

While testing an API with Postman, I ran into a situation where the server response contained a collection of URLs, and I wanted to assert that each of these URLs is active, i.e. returns HTTP OK.

To achieve this, I have to do the following:

  1. Call API, and store url result
  2. Call each URL and assert

Let us go one by one.

Call API, and store URL result
(PS: Instead of making a real API call that returns URLs, I am faking the first API call with www.google.com and hard-coding the URLs.)
  • Open Postman and create a folder called "ValidateResponseURL".
  • Create a Postman request named "get-contents". It will call your API; for this demo, I am calling www.google.com.
  • Go to the Tests tab and check that the response code is 200 for "GET google.com".
  • Store the collection of URLs in an environment variable using postman.setEnvironmentVariable("…").
  • Please note that instead of storing the URL collection as-is, you should store the first element in a separate environment variable, so that you can decide whether the server returned any result at all; it also lets you use this dynamic URL in the next step. A sketch of the "get-contents" tests script follows.
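This is a minimal sketch rather than the exact script from the original screenshot; the hard-coded URL list and the variable names "testurl" and "testurls" are demo choices (legacy Postman scripting API):

tests["Status code is 200"] = responseCode.code === 200;

// Fake the API result: normally these URLs would be parsed from the response body.
var urls = ["https://www.google.com", "https://www.bing.com"];

// Keep the first URL separate so the next request can use it as {{testurl}}.
if (urls.length > 0) {
    postman.setEnvironmentVariable("testurl", urls.shift());
    postman.setEnvironmentVariable("testurls", JSON.stringify(urls));
}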
Call each URL and assert
  • Now if you run the first Postman step and check the environment, you will find the two environment variables "testurl" and "testurls".
  • Create another Postman request named "validate-urls".
  • Select the "GET" verb, and use the "testurl" environment variable, i.e. {{testurl}}, as your URL.
  • Now go to the Tests tab of this step and validate that the response code is 200.
  • Fetch the next URL from the "testurls" environment variable and execute the "validate-urls" step again.
  • When there is nothing left in the "testurls" collection, clear the environment variables. Your script could look something like the sketch below:
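A hedged sketch of the "validate-urls" tests script, again using the legacy Postman scripting API covered in the references at the end of this post:

tests["Status code is 200"] = responseCode.code === 200;

var urls = JSON.parse(postman.getEnvironmentVariable("testurls") || "[]");

if (urls.length > 0) {
    // Queue the next URL and run this step again.
    postman.setEnvironmentVariable("testurl", urls.shift());
    postman.setEnvironmentVariable("testurls", JSON.stringify(urls));
    postman.setNextRequest("validate-urls");
} else {
    // Nothing left – clean up and stop the loop.
    postman.clearEnvironmentVariable("testurl");
    postman.clearEnvironmentVariable("testurls");
    postman.setNextRequest(null);
}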

Once everything is set up, you can execute the folder using the Runner, and you will see a result like the following:

As you can see in the result, all my websites are alive and responding with the HTTP OK response code, i.e. 200.

That's all folks!

Namaste.

References:

“LOOPS AND DYNAMIC VARIABLES IN POSTMAN: PART 2”, https://thisendout.com/2017/02/22/loops-dynamic-variables-postman-pt2/

“Branching and Looping”, https://www.getpostman.com/docs/postman/scripts/branching_and_looping

“Test script” , https://www.getpostman.com/docs/postman/scripts/test_scripts

 

My first Sketchnotes on “The Sketchnote Handbook”

Ever since I got to know about "The Sketchnote Handbook", I wanted to read it. The main reason for this curiosity is a belief of mine that is reflected in this book: the belief in communicating through visuals. What do I mean by visuals? I mean global communication, which involves the freedom to innovate and design without being restricted by any grammar, teacher, or rule developed by so-called intellectuals. It is about communicating with anyone, and reflecting your understanding of a topic using icons and random text.

English is a language not a measure of your intelligence

It did not take much time, maybe a week, to read this book. As this book retaught me the importance of basic design techniques, I decided to record a reflection of my understanding using sketchnotes.

These two pictures reflect my understanding of the topics that I studied in this book.


I hope you will use the same for your reference.

Please don't think that you are not good at drawing. If you need any motivation, then please look at this picture of my daughter. Just like her, I am sure you were able to draw during your childhood.

Source of Motivation - Children

Once again, it does not matter how good or bad you are at sketching or English; your focus should be on just sketch, sketch, sketch, and speaking your knowledge.

Namaste!

 

Build Android – Continuous Integration with Jenkins and Docker


This tutorial assumes that you have Jenkins running in Docker. You can read about installing it here.

 

Once the Jenkins instance, named jenkins-master, is installed, open a root bash shell, i.e. the command prompt for your container:

e.g. docker exec -it --user root jenkins-master bash

From the bash command run the following:

> cd /opt

Find the SDK version you need and download it:

> wget http://dl.google.com/android/android-sdk_r24.0.1-linux.tgz

Extract the file:

> tar zxvf <filename of the just downloaded file>

You can now remove the file you just downloaded:

rm <filename of the just downloaded file>

Now some environment variables need to be set.

vi /etc/profile.d/android.sh

--------------------------------------------------
By default, vim is not present in the Docker image.
You can install it with the following:

apt-get update
apt-get install vim
--------------------------------------------------

Add the following lines:

export ANDROID_HOME="/opt/android-sdk-linux"
export PATH="$ANDROID_HOME/tools:$ANDROID_HOME/platform-tools:$PATH"

Then reload the file:

source /etc/profile

Now you should be able to run the android command from the shell.


First, list the available Android SDK and platform tools:

android list sdk --all --extended

Get the item's serial number from the output and run:

> android update sdk -u -a -t <serial number>

Replace <serial number> for each required item: the platform version, the Android support library, and the SDK version.

Check your app's Gradle file for the required versions; a sketch of where they live follows.
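For reference, these versions typically sit in the android block of your module's build.gradle; a minimal fragment (the numbers here are placeholders, match them to what your app declares):

android {
    compileSdkVersion 24
    buildToolsVersion "24.0.1"
}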

For the Android SDK to be accessible by Jenkins, execute the following:

> chmod -R 755 /opt/android-sdk-linux

If you get this error:

Cannot run program "/usr/local/android-sdk-linux/build-tools/19.0.3/aapt": error=2, No such file or directory

then run this command:

> apt-get install lib32stdc++6 lib32z1

 

All set; now restart your container:

docker restart CONTAINERNAME/ID

If everything is done correctly, you should be able to set up the Gradle task and generate an Android apk/jar/aar :-).

Reference: https://www.digitalocean.com/community/tutorials/how-to-build-android-apps-with-jenkins

Android – Share code between multiple applications

Physical Path way

This works well when code contributors share a common drive location. If you are the only one maintaining this library across different projects, then this can be your favourite option.

Open your app settings.gradle and add these lines:

include ':app'
include ':networkservices'
include ':common'

project (':networkservices').projectDir = new File('/Users/mramit/Documents/gits/lib/networkservices')
project (':common').projectDir = new File('/Users/mramit/Documents/gits/lib/common')

How to use it in an app / library?

All you have to do is add a dependency on this library:

dependencies {
    compile project(':networkservices')
}

AAR way

Just like you create a JAR for Java, you can do the same for Android. However, a JAR does not work well when you have resources to share, e.g. strings.xml.

Instead of a JAR, the recommendation is to create an AAR file, a.k.a. an Android Archive.

Why aar?

An aar file is built on top of the jar format. It was invented because an Android library needs to embed Android-specific files, such as AndroidManifest.xml, resources, assets, or JNI binaries, that fall outside the jar standard. So the aar was invented to cover all of those things. Basically, it is a normal zip file, just like a jar, but with a different file structure. The jar file is embedded inside the aar file under the name classes.jar. The rest are listed below:

– /AndroidManifest.xml (mandatory)
– /classes.jar (mandatory)
– /res/ (mandatory)
– /R.txt (mandatory)
– /assets/ (optional)
– /libs/*.jar (optional)
– /jni/<abi>/*.so (optional)
– /proguard.txt (optional)
– /lint.jar (optional)

When should you use a JAR, then?

If you are planning to provide any resources (res/) in your common repo, then the recommendation is *not* to use a JAR.
Otherwise, you may go for a JAR.

How to create an aar?

The only requirement is that the module is a library, i.e. the library plugin is applied in its build.gradle:

apply plugin: 'com.android.library'

There is nothing else that needs to be done. After you build with the Gradle task, go to the build/outputs/aar/ folder to copy and share this aar file; see the example below.
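For example, assuming your library module is called 'common', a command along these lines produces the archive:

> ./gradlew :common:assembleRelease

The resulting file lands in common/build/outputs/aar/.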

How to use aar in your app or library?

Put the aar file in the libs directory (create it if needed), then add the following code to your build.gradle:

dependencies {
  compile(name:'nameOfYourAARFileWithNoExtension', ext:'aar')
}
repositories{
  flatDir{
      dirs 'libs'
  }
}

Node.JS: Error Cannot find module [SOLVED]

Even though I have installed my npm package globally, I receive the following error:

Error: Cannot find module 'color'
 at Function.Module._resolveFilename (module.js:338:15)
 at Function.Module._load (module.js:280:25)
 at Module.require (module.js:364:17)
 at require (module.js:380:17)
 at repl:1:2
 at REPLServer.self.eval (repl.js:110:21)
 at Interface. (repl.js:239:12)
 at Interface.emit (events.js:95:17)
 at Interface._onLine (readline.js:202:10)
 at Interface._line (readline.js:531:8)

I assumed that once an npm package is installed with the "-g" or "--global" switch, node would find the package automatically. But after a struggle of installing, uninstalling, reinstalling, and clearing the cache locally, the problem remained.

Overall, I knew how the process of searching for a module works with the "npm install" command. What I did not know is that there is a variable called $NODE_PATH, which needs to have the right value.

For anyone else running into this problem, check the value of the $NODE_PATH variable with this command:

root$ echo $NODE_PATH

If it is empty then this article may give you the solution that you are looking for.

What should be the value of this variable?

Let's find out the appropriate value for $NODE_PATH.

Type in the following command line:

root$ which npm

This command gives you the path where npm is installed and running from.

In my case it is "/usr/local/bin/npm"; note down this path.

Navigate to /usr/local with the help of Finder / Explorer. You will find a folder called "lib", and within that "lib" folder you will see the node_modules folder, which is your global module folder. This is where all your global packages are installed.

All you have to do now is set the NODE_PATH with the path that you have found for node_modules.

example:

export NODE_PATH='module path'

In my case it is /usr/local/lib/node_modules

export NODE_PATH='/usr/local/lib/node_modules'
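You can verify the fix in the same terminal session; for example, assuming the 'color' package from the error above is installed globally:

root$ export NODE_PATH='/usr/local/lib/node_modules'
root$ node -e "require('color'); console.log('module resolved')"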

NOTE: Another, and probably easier, way to find your global node_modules folder is to install any package with the --verbose flag.
For example, you can run

root$ npm install --global --verbose promised-io

It will install the npm package and print the location where promised-io is installed. You can pick up that location and set it in $NODE_PATH.

Here is another twist.

Now everything will work fine within the session of the current terminal. But if you restart the terminal and then echo $NODE_PATH, it will return an empty response.

What is the permanent solution?

You need to make the above export statement part of your shell startup files so that it is set as soon as you log in.

STEPS:

  1. Close all your terminal windows and open one again.
  2. Type root$: vi ~/.bashrc and add this line: export NODE_PATH='module path'

    In my case:

    export NODE_PATH='/usr/local/lib/node_modules'

  3. Type root$: vi ~/.bash_profile and add this line: source ~/.bashrc
  4. Close all terminal windows and try "echo $NODE_PATH" again in a new command window.

    If it still does not work, then for the first time just type this command in the same window:

    source ~/.bash_profile

 

Know more about $NODE_PATH

(Reference: https://nodejs.org/api/modules.html#modules_loading_from_the_global_folders )

Loading from the global folders

If the NODE_PATH environment variable is set to a colon-delimited list of absolute paths, then Node.js will search those paths for modules if they are not found elsewhere. (Note: On Windows, NODE_PATH is delimited by semicolons instead of colons.)

NODE_PATH was originally created to support loading modules from varying paths before the current module resolution algorithm was frozen.

NODE_PATH is still supported, but is less necessary now that the Node.js ecosystem has settled on a convention for locating dependent modules. Sometimes deployments that rely on NODE_PATH show surprising behavior when people are unaware that NODE_PATH must be set. Sometimes a module’s dependencies change, causing a different version (or even a different module) to be loaded as the NODE_PATH is searched.

Additionally, Node.js will search in the following locations:

  • 1: $HOME/.node_modules
  • 2: $HOME/.node_libraries
  • 3: $PREFIX/lib/node

Where $HOME is the user’s home directory, and $PREFIX is Node.js’s configured node_prefix.

These are mostly for historic reasons. You are highly encouraged to place your dependencies locally in node_modules folders. They will be loaded faster, and more reliably.

Setting HttpContext Response using HttpResponseMessage or Request.CreateResponse in WebApi2

Background

In my recent Continuous Improvement (CI) initiative I have been introducing a few ActionFilters for the WebApi controllers.

These action filters validate the request, using the payload or the current user token, against some business rules. If the request does not fulfil the business requirements, then the filter should stop further processing and send a response (ActionContext.Response) back from the filter pipeline.

In my projects, as of now, the default response content type is JSON.

HttpContent – ObjectContent or StringContent

The .NET Framework provides a few built-in implementations of HttpContent; here are some of the most commonly used:

  • ByteArrayContent: represents in-memory raw binary content
  • StringContent: represents text in a specific encoding (this is a specialisation of ByteArrayContent)
  • StreamContent: represents raw binary content in the form of a Stream.
  • ObjectContent: a generic implementation that wraps an object of type T and serializes it with a formatter.

Problem

My basic requirement is to send JSON, and I got stuck on the question of which of these framework types to use. Because my response type is JSON, which is a string, I could use StringContent, StreamContent, or even ObjectContent. The problem: what is the difference, and what is the best approach when using the different HttpContent subclasses?

Let's dig into each type one by one.

StringContent

StringContent is a subclass of ByteArrayContent, which in turn inherits from HttpContent.

If I use StringContent, then I have to specify lots of things when building the object, such as the content type and the character set.

var errorResponse = new ResponseBase {
    Messages = new List<Message> {
        new Message {
            Type = MessageTypes.Error,
            Code = ErrorCode,
            Description = ErrorMessage
        }
    }
};

var response = new HttpResponseMessage {
    StatusCode = System.Net.HttpStatusCode.OK,
    Content = new StringContent(
        Newtonsoft.Json.JsonConvert.SerializeObject(errorResponse),
        System.Text.Encoding.UTF8,
        "application/json"),
};

actionContext.Response = response;

I am setting the encoding as well as the content type manually. Thus, what if the client negotiates the content type as XML?

ByteArrayContent is definitely a good candidate when your data is already in byte form, such as picture content from the server.

ObjectContent

I can use the ObjectContent class, which inherits from HttpContent, and I can even pass a formatter object. However, it is not that easy to use, because I need to pass the object type, and it cannot pick a formatter automatically. Again, there are a lot of hard-coded settings that I need to pass to the ObjectContent.

var response = new HttpResponseMessage(HttpStatusCode.OK)
{
    Content = new ObjectContent(typeof(ResponseBase),
        myobject,
        GlobalConfiguration.Configuration.Formatters.JsonFormatter)
};

actionContext.Response = response;

Another important reason for not using StringContent, ByteArrayContent, or ObjectContent directly with HttpResponseMessage is that they do not honour any serialisation configuration, such as camel-case settings. Thus, you either have to pass the configuration in, where accepted, or do the manipulation manually.

So what should be used then?

Well… the winner is… the Request.CreateResponse extension method.

Even though I did not list it above, the winner is somebody else. If you are using WebApi 2, like I am, it introduces an extension method on the HTTP request message object: instead of creating an HttpResponseMessage and assigning it to the response, we can just call actionContext.Request.CreateResponse(…).

actionContext.Response = actionContext.Request.CreateResponse(HttpStatusCode.OK, 
modelState.ValidationErrors(ErrorCode));

Benefits

  • It is neat and clean. I don't have to create an HttpResponseMessage object and set its contents separately.
  • Based on the current HttpConfiguration and the Content-Type passed in the request header, the CreateResponse extension negotiates the formatter it will use from the HttpConfiguration.Formatters list. This means I don't have to specify any serialization configuration.
  • If the configuration has been modified, for example for JSON camel case, then it is picked up automatically with no special check on our side.
  • It looks after everything by default; otherwise we would have to do a lot manually. Thus it removes some potential bugs as well.
What is the value of actionContext.Response.Content.Headers.ContentType if you use the Request.CreateResponse method?

By using CreateResponse, the content type from the request is checked automatically and the matching formatter is used. If there is no Content-Type in the request, then the default is whichever formatter comes first in the HttpConfiguration.Formatters list.

In the case of StringContent, we had to hard-code the content type, so even if the client negotiates for XML content types, it will send JSON, which is wrong.

 

Demystifying NodeJs “exports” keyword

Introduction

There has been a little confusion, in my mind, about using module.exports or exports in my Node.js coding. Looking at code on GitHub and elsewhere, many of us use the following export statements:

  • exports.fn = function() { … }
  • module.exports.fn = function() { … }
  • module.exports = exports = myobject.

Then my mind wonders:

  • What is the difference between "module.exports" and "exports"?
  • Why has Node.js introduced module.exports as well as exports?
  • What does module.exports = exports = object mean?
  • Which do I prefer?

In this post, I will try to answer the above four questions. First, we will give a practical sense to the exports.fn, module.exports.fn, and module.exports = exports statements.

So let us start our journey with codebased

Explanation

Simplified Version

The "exports" variable is simply a module-local reference to module.exports, which initially is an empty object that you can add properties to. Thus, exports.fn is shorthand for module.exports.fn.

As a result, if "exports" is reassigned to anything else, it breaks the reference between module.exports and "exports". Because module.exports is what really gets exported, "exports" will no longer work as expected. Thus, to ensure both names reference the same memory location, many of us reassign module.exports and exports at the same time:

module.exports = exports = myobject

Detailed Version

In this section, I will try to demonstrate the same with code.

I assume that Node.js is available from your command prompt. If you need installation help, please click here.

Let us create a new file in node.

-------- laptop.js -----------
 
 exports.start = function(who) {
  console.log(who + ' instance has started this laptop');
 }

// calling a method within laptop.js
module.exports.start('module.exports');

Now create a new file that will import laptop file.

-------- app.js -----------
 
 require('./laptop.js')

Call app.js from the command prompt.

c:\ node app.js

Output
-----------------------------------------------

module.exports instance has started this laptop.

As you can see, even though we defined the function on the "exports" variable, it is available through "module.exports".

Similarly, if you do it the other way around, that works too.

-------- laptop.js -----------
 
 module.exports.start = function(who) {
  console.log(who + ' instance has started this laptop');
 }

exports.start('exports');

Call app.js from the command prompt.

c:\ node app.js

Output
-----------------------------------------------

exports instance has started this laptop.

Now let see what happens here:

-------- laptop.js -----------
module.exports.start = function(who) {
 console.log(who + ' instance has started this laptop');
 }
 exports.start = function(who) {
 console.log(who + ' instance has started this laptop');
 }

exports.start('exports');
module.exports.start('module.exports');

Any guess what the outcome will be?

Yes, module.exports.start has been overridden by exports.start, because both names point to the same object and the second assignment wins.

Call app.js from the command prompt.

c:\ node app.js
 Output
 -----------------------------------------------

 exports instance has started this laptop.
 exports instance has started this laptop.

It is clear that “exports” is an alias to “module.exports”.

We will now recall our questions one by one and answer them:

1. What is the difference between exports and module.exports?

Node.js does not let you replace the "exports" variable with another memory address; reassigning it simply breaks the alias. You can, however, attach any number of properties to "exports". Thus, anything assigned directly to "exports" will not be available when the module is imported.

You can export anything through "module.exports", but not with the "exports" keyword.

You can do this:

module.exports = function() {
 }

or

module.exports = myobject;

Basically, anything that you export through module.exports will be available in app.js above. However, you cannot do the following in your laptop.js and then expect it to be available in app.js:

exports = function () {
 }

or

exports = myobject;

It is clear now that you can export anything (a function, an object, a constant value) through "module.exports" but not by assigning to "exports".

Sounds crazy? Yes it is.

2. Why has Node.js introduced module.exports as well as exports?

I think the main reason could be to reduce the number of characters you have to type.

3. What does module.exports = exports = object mean?

Many of us set module.exports and exports at the same time, to ensure exports is not referencing the previously exported object. By setting both, you can keep using exports as a shorthand and avoid potential bugs later on down the road (within the same file).

Here is a piece of code to demonstrate it:

------ laptop.js ------

exports = "exports";
module.exports = "module.exports";

console.log(exports);
console.log(module.exports);

Call app.js from the command prompt.

c:\ node app.js
 Output
 -----------------------------------------------
exports
module.exports

Because they now point to different locations, if the module later uses the "exports" value after module.exports has been set, the two will not be in sync. Thus, to keep them in sync, it is advisable to follow a rule: whenever we set module.exports to any value, set the same value to exports on the same line.

Here is an example:

------ laptop.js ------
 
exports = 'i am value ';
module.exports = exports = function() {
   console.log('function is called.');
 }
console.log(typeof exports)
-------- app.js --------

var laptop = require('./lib/laptop.js');
laptop();

 

Call app.js from the command prompt.

c:\ node app.js
 Output
 -----------------------------------------------
function
function is called.

You can see in the output that because "exports" as well as "module.exports" are set to a function, the output of the "typeof" statement is "function". If you did not assign the function to exports, the typeof statement would print "string".

Thus, to remove any potential bugs, we set exports as well as "module.exports" at the same time.

Now this discussion comes to an end with the last question, i.e.

Which do I prefer?

Personally, I prefer to export a constructor function that can be used to create an instance of a module.

example:

------ laptop.js ------

var laptop = function() {}
laptop.prototype.start = function() {};
laptop.prototype.stop = function() {};
laptop.prototype.suspend = function() {};
module.exports = exports = laptop;

------ app.js ------
var laptop = require('./laptop')
var mac = new laptop();
var win = new laptop();

However, if I want to provide a singleton object, then I replace the export line in laptop.js:

------ laptop.js ------

module.exports = exports = new laptop();

and in app.js

 ------ app.js ------

var mac = require('./laptop');
mac.start();

Conclusion

  • We now understand that exports is an alias to module.exports that shortens your typing during module development.
  • It is recommended to keep the exports alias pointing at the module.exports value; thus, whenever you set module.exports, set exports as well.
  • Since my background is .NET, I recommend exporting a class (constructor function) or an object.

– Happy coding!

Implementing SOLID principle For TDD on Web Service Reference

What is SOLID?

SOLID is a set of five basic principles that help to create a good software architecture. SOLID is an acronym where:

• S stands for SRP (Single responsibility principle)
• O stands for OCP (Open closed principle)
• L stands for LSP (Liskov substitution principle)
• I stands for ISP (Interface segregation principle)
• D stands for DIP (Dependency inversion principle)

There are many good articles on the Internet explaining SOLID in detail. However, my focus here is on how I applied these principles to a web service proxy that I recently worked on.
Since I am following the TDD approach, I will start the development by explaining the unit test first:

Unit testing is all about testing the current method (high cohesion and low coupling). If the method relies on another method or object, then you should mock that object or method so that you can test your method independently.

If you are not sure about the high cohesion and low coupling principles in OOP, then I recommend you learn about the SOLID principles.
You should also read about TDD.

This principle applies even when the dependent code has not been written yet (well, that is why we call it TDD ;-)).

In my SharePoint integration project I depended on an external resource: I had to call the SharePoint web service method GetUserProfileByName, with the endpoint at http://<yourwebsite>/_vti_bin/UserProfileService.asmx.

When we use Visual Studio's Add Service Reference tool, it creates a proxy class to call the web service:

[System.Web.Services.WebServiceBindingAttribute(Name="UserProfileServiceSoap", Namespace="http://microsoft.com/webservices/SharePointPortalServer/UserProfileService")]
 public partial class UserProfileService : System.Web.Services.Protocols.SoapHttpClientProtocol {…

}

When I looked at this proxy file, it exposed more than 50 methods. As a matter of SOLID principle, we should expose only those methods that are important; in our case that is just one method, GetUserProfileByName.
Another important factor to consider in our design is how to ensure that the SharePoint implementation can easily be replaced with a new external endpoint, such as Active Directory.

To solve all these problems, I just need to follow the I and D points of the SOLID principles.

I – Interface segregation principle – many client-specific interfaces are better than one general-purpose interface.

D – Dependency inversion principle – one should “Depend upon Abstractions. Do not depend upon concretions.”

I will not go into the details of SOLID, but these are the steps you should follow:

1. Import the web service proxy into the project. Keep in mind that the web service proxy is a partial class; thus we can create another partial class and implement a contract on it.
2. Create a contract that contains only those members that you need to expose.

public interface IServiceClient
{
    bool UseDefaultCredentials { get; set; }
    string Url { get; set; }
}

public interface IUserProfileServiceClient : IServiceClient
{
    PropertyData[] GetUserProfileByName(string accountName);
}

3. Now that the contract is ready, all I have to do is declare a partial class with the same name as the proxy and implement the contract. In my case:

public partial class UserProfileService : IUserProfileServiceClient
{
}

NOTE: You don't need to implement any member, because the members already exist in the generated proxy class.

4. Create a contract for the implementation that you want to test:

public interface IExternalUserService
{
    User GetUserProfile(string accountName);
}

5. Create a concrete class:

public class DirectoryUserService : IExternalUserService
{
    private readonly IUserProfileServiceClient _profileService;

    public DirectoryUserService(IUserProfileServiceClient profileService)
    {
        _profileService = profileService;
    }

    public User GetUserProfile(string accountName)
    {
        var propertyBag = _profileService.GetUserProfileByName(accountName);
        // map the returned property bag onto a User object
        // (MapToUser is a placeholder; the original mapping code was omitted)
        return MapToUser(propertyBag);
    }
}

Now that we have implemented the method we want to test, the following unit test uses the Moq framework to create a stub that is passed in as the IUserProfileServiceClient object.

[TestClass]
public class DirectoryUserServiceTest
{
    private PropertyData[] _fakePropertyData;

    [TestInitialize]
    public void Setup()
    {
        // Arrange: a fake SharePoint property bag
        _fakePropertyData = new[]
        {
            new PropertyData { Name = "WorkEmail", Values = new[] { new ValueData { Value = "amit.x@xxxxx.com.au" } } },
            new PropertyData { Name = "PictureURL", Values = new[] { new ValueData { Value = "http://x/xtra/photos/photo152173.jpg" } } },
            new PropertyData { Name = "FirstName", Values = new[] { new ValueData { Value = "Amit" } } },
            new PropertyData { Name = "LastName", Values = new[] { new ValueData { Value = "Malhotra" } } },
            new PropertyData { Name = "DirectoryID", Values = new[] { new ValueData { Value = "152173" } } },
            new PropertyData { Name = "UserProfile_GUID", Values = new[] { new ValueData { Value = Guid.NewGuid() } } },
            new PropertyData { Name = "BusinessUnit", Values = new[] { new ValueData { Value = "RBS" } } },
            new PropertyData { Name = "Department", Values = new[] { new ValueData { Value = "Partner Engagement" } } }
        };
    }

    [TestMethod]
    public void TestGetUserProfileNotNull()
    {
        // Arrange
        var mockService = new Mock<IUserProfileServiceClient>();
        mockService.Setup(r => r.GetUserProfileByName(It.Is<string>(x => x.Equals("amit"))))
                   .Returns(_fakePropertyData);

        IExternalUserService directoryService = new DirectoryUserService(mockService.Object);

        // Act
        User user = directoryService.GetUserProfile("amit");

        // Assert
        Assert.IsNotNull(user);
    }
}

Point of Interest:

Understand SOLID:

S stands for SRP (Single responsibility principle): a class should take care of only one responsibility.
O stands for OCP (Open closed principle): extension should be preferred over modification.
L stands for LSP (Liskov substitution principle): a parent class reference should be able to refer to child objects seamlessly during runtime polymorphism.
I stands for ISP (Interface segregation principle): a client should not be forced to use an interface it does not need.
D stands for DIP (Dependency inversion principle): high-level modules should not depend on low-level modules; both should depend on abstractions.

Understand Moq:

Now if you are following the discipline of TDD and building a large-scale enterprise application that interacts with services, databases, or other data sources, writing data-driven unit tests becomes a challenge. Test setup (for each test) becomes more complicated and challenging than the actual unit test. This is where the concept of mocking comes into the picture.