My take on Dependency Injection in ASP.NET Core

Application Settings – Configuration

Everything you used to store in web.config in .NET Framework is now stored in the appsettings.json content file in .NET Core. To manipulate it, use the Microsoft.Extensions.Configuration.Json package. Once it is loaded through the BuildWebHost(…) method defined in Program.cs, you can use the injected IConfiguration object to read appsettings data.

Apart from reading settings through an indexer (_configuration[key]), there are various other ways to read app settings; the most commonly used is reading by section.

// in appsettings.json
"ApiConfiguration": {
 "URL": ""
}


// in ApiConfiguration.cs 
public class ApiConfiguration {
 public string URL {
  get;
  set;
 }
}

// in Startup.cs, Inject under ConfigureService(...)
services.Configure<ApiConfiguration>(Configuration.GetSection("ApiConfiguration"));

// in apiproxy.cs, consume ApiConfiguration object 

public class ApiProxy {
 private readonly ApiConfiguration _apiConfiguration;
 public ApiProxy(IOptions<ApiConfiguration> options) {
  this._apiConfiguration = options.Value;
 }
}

If you don’t like injecting via IOptions<>, you can use a strongly typed configuration.

// declare an interface (ApiConfiguration must implement it)
public interface IClientApiConfiguration {
 string URL {
  get;
  set;
 }
}
// startup.cs
services.Configure<ApiConfiguration>(Configuration.GetSection("ApiConfiguration"));
services.TryAddSingleton<IClientApiConfiguration>(sp => sp.GetRequiredService<IOptions<ApiConfiguration>>().Value);


// ApiProxy.cs
public class ApiProxy {
	 private readonly IClientApiConfiguration _clientApiConfiguration;
	 public ApiProxy(IClientApiConfiguration clientApiConfiguration) {
		  this._clientApiConfiguration = clientApiConfiguration;
	 }
}

DI Service Lifetime

ASP.NET Core supports the dependency injection (DI) software design pattern, which is a technique for achieving Inversion of Control (IoC) between classes and their dependencies. 

The dependency injection container, also called an inversion of control container, is part of the Microsoft.Extensions.DependencyInjection namespace. There are two main abstractions that we will be dealing with all the time:

  • IServiceCollection collects all DI registrations.
  • IServiceProvider resolves service instances.

The main thing to understand about DI is the service lifetime, of which there are three types.

  • Transient – Transient objects are always different; a new instance is provided to every controller and every service.
    • Objects are not required to be thread-safe.
    • Potentially less efficient, because a new object is created every time it is resolved.
  • Singleton – Singleton objects are the same for every object and every request.
    • Better performance, because fewer objects are created and the load on the GC is reduced.
    • The majority of middleware constructor dependencies are singletons.
    • Must be thread-safe.
    • Suited for functional stateless services.
  • Scoped – Scoped objects are the same within a request, but different across different requests.
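To make the three lifetimes concrete, here is a toy container sketched in JavaScript. This is not the ASP.NET Core container; the Container class and the registration names are made up purely for illustration.

```javascript
// Toy DI container: 'factory' creates the service, 'lifetime' controls reuse.
class Container {
  constructor() {
    this.registrations = new Map();
    this.singletons = new Map();
  }
  register(name, factory, lifetime) {
    this.registrations.set(name, { factory, lifetime });
  }
  // 'scope' is a per-request cache; pass the same object for one request.
  resolve(name, scope = {}) {
    const { factory, lifetime } = this.registrations.get(name);
    if (lifetime === 'singleton') {
      if (!this.singletons.has(name)) this.singletons.set(name, factory());
      return this.singletons.get(name);
    }
    if (lifetime === 'scoped') {
      if (!(name in scope)) scope[name] = factory();
      return scope[name];
    }
    return factory(); // transient: a new instance on every resolve
  }
}

const c = new Container();
c.register('svc', () => ({}), 'transient');
c.register('db', () => ({}), 'scoped');
c.register('cfg', () => ({}), 'singleton');

const request1 = {}; // one scope object per request
console.log(c.resolve('svc', request1) !== c.resolve('svc', request1)); // true: transient always differs
console.log(c.resolve('db', request1) === c.resolve('db', request1));   // true: same within a request
console.log(c.resolve('db', request1) !== c.resolve('db', {}));         // true: differs across requests
console.log(c.resolve('cfg', request1) === c.resolve('cfg', {}));       // true: singleton everywhere
```

The real container also disposes scoped services at the end of the request, which this sketch ignores.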

Scope Validation

There are some rules on mixing different service lifetimes. For example, a singleton service should not depend on a transient service, because the transient instance will be captured by the singleton and live as long as it does.

  • a transient service can depend on – transient, scoped, singleton
  • a scoped service can depend on – scoped, singleton
  • a singleton service can depend on – singleton

You can enable runtime scope validation in the .NET Core framework; please enable it only in debug mode, as it has some performance impact.

// In program.cs
 WebHost.CreateDefaultBuilder(args)
                .UseDefaultServiceProvider(options =>
                {
#if DEBUG
                    options.ValidateScopes = true;
#endif
                })

If you have registered multiple implementations of the same interface, then by default you will get the last implementation registered. However, if you want to receive all registered implementations, you can use IEnumerable<> in constructor injection.

// startup.cs 
services.AddSingleton<INotification,EmailNotification>();
services.AddSingleton<INotification,SMSNotification>();
services.AddSingleton<INotification,NativeNotification>();
services.AddSingleton<INotification,PaperNotification>();
services.AddSingleton<INotifyService,NotifyService>();


// NotifyService.cs

public class NotifyService : INotifyService {

 private readonly IEnumerable<INotification> _notifications;

 public NotifyService(IEnumerable<INotification> notifications) {
  _notifications = notifications;
 }

 public void Notify() {
  foreach (var notification in _notifications) {
   notification.Notify();
  }
 }
}

Otherwise, if you don’t want to register the same service multiple times, but don’t know whether it has already been added, you can try the following extension methods:

services.TryAddScoped<...>() / services.TryAddTransient<...>() / services.TryAddSingleton<...>(), services.RemoveAll<IService>(), or services.Replace(ServiceDescriptor.Singleton<IService, Implementation>());

Generic Registration

To register a generic type, use one of the following:

services.TryAddSingleton<IService<Payments>,Service<Payments>>();
services.TryAddSingleton<IService<Accounts>,Service<Accounts>>();
// Or
services.TryAddSingleton(typeof(IService<>), typeof(Service<>));

Extension Methods

Because Startup.cs looks messy with so many registrations, it is better to create custom extension methods (one for services, one for proxies, and so on), and it is advised to place them in the Microsoft.Extensions.DependencyInjection namespace.

namespace Microsoft.Extensions.DependencyInjection
{
    public static class ConfigurationServiceCollectionExtensions
    {
        public static IServiceCollection AddProxies(this IServiceCollection services, IConfiguration configuration)
        {
            // ...
            return services;
        }
    }
}

Then call them in Startup.cs:

services.AddProxies(_configuration).AddServices().AddProxiesHttpClients(_configuration);

Using Autofac

Microsoft recommends that you use the built-in DI. However, if it is too much work to remove the existing DI from your Helix application, you can mix .NET Core DI with another container such as Autofac, which is heavily used in Helix projects.

In order to do that you basically have to do the following:

  • Install-Package Autofac.Extensions.DependencyInjection
  • Program.cs

public static IWebHostBuilder BuildWebHost(string[] args) =>
            WebHost.CreateDefaultBuilder(args)
          .ConfigureServices(services => services.AddAutofac())
                …

  • Startup.cs: this method is called automatically by .NET Core to configure the container:
public void ConfigureContainer(ContainerBuilder builder) { 
  builder.RegisterType<Service>().As<IService>().InstancePerLifetimeScope();
}

Injection types

Constructor Injection :

  • You can assign default values for arguments that are not provided by the container.
  • When services are resolved, a public constructor is required.
  • There must be exactly one applicable constructor whose parameters can all be resolved.
    • You cannot have something like this:

   public class Service {
    public Service (INotification notification, IParam param){}
    public Service (INotification notification){}
   }
   Otherwise, it will throw an exception like this: InvalidOperationException: Multiple constructors accepting all given argument types have been found in type 'Service'. There should only be one applicable constructor.

Action Injection

 Inject into a controller action using the [FromServices] attribute:

  public void DoAction([FromServices] IConfiguration configuration) {
  …
  }

Middleware Injection

  • Conventional middleware components are constructed once, so any dependency injected via the constructor should be a singleton; otherwise scope validation will throw an exception. Factory-based middleware is the exception.
  • Instead, inject into the InvokeAsync method, where a fresh scope is available for every single request:

  public async Task InvokeAsync(IConfiguration configuration ) {
  ..
  }

Transient or Scoped for Stateless Services

Microsoft states in its documentation that it is better to use transient for stateless services, such as REST APIs. However, it does not back this suggestion with any reasoning (Smith, Addie and Latham, 2019).

Smith, S., Addie, S. and Latham, L. (2019). Dependency injection in ASP.NET Core. [online] Docs.microsoft.com. Available at: https://docs.microsoft.com/en-us/aspnet/core/fundamentals/dependency-injection?view=aspnetcore-2.2 [ Accessed 3 Jun. 2019].

Some useful packages

Assembly scanning and decoration extensions for Microsoft.Extensions.DependencyInjection https://github.com/khellang/Scrutor

services.Scan(scan => scan.FromAssemblyOf<IProxy>().AddClasses(c => c.AssignableTo<IProxy>()).AsImplementedInterfaces().WithScopedLifetime());
 // or
 services.Scan(scan => scan.FromAssemblyOf<IProxy>().AddClasses(c => c.AssignableTo<IApiProxy>()).As<IProxy>().WithScopedLifetime());

Mixin Pattern (Class?) in C#

A mixin class is not really a pattern. It is, rather, a peculiar way in which class inheritance can be accomplished. The famous design patterns book defines the mixin class in its opening chapter with these words:

A mixin class is a class that is intended to provide an optional interface or functionality to other classes. It’s similar to an abstract class in that it is not intended to be instantiated. Mixin classes require multiple inheritance.

Gamma et al., Design Patterns: Elements of Reusable Object-Oriented Software

This definition is basically correct, but it requires one additional note before putting it into the C# environment.

A mixin class requires multiple inheritance, and it is abstract. In terms of C#, a language with single inheritance only, this means that a mixin will in fact be an interface. Interfaces are abstract, and they can be inherited, that is, implemented, even when the class already inherits from another class.

In C# we can create a mixin with a combination of an interface plus extension methods. LINQ is the canonical example of this, with its two core interfaces IEnumerable<T> and IQueryable<T> and a collection of extension methods on those interfaces.

It’s easy to create your own mixins.

For example, say I want to provide playback functionality to various entities. I can define an interface like this:

public interface IPlay {  
  string File
  { 
     get; set; 
  }    
} 

And then some extension methods for that interface:

static class PlayExtensions {    
    public static void PlayAudio(this IPlay play) { 
            ///
    }
   public static void PlayVideo(this IPlay play) { 
        ////
   }
}




Now, if we’ve got some entities like:

public class CameraPlayer: IPlay 
{
public string File { get; set; }
}

public class DVDPlayer : IPlay {
public string File { get; set; }
}

We can use the mixin like this:

var dvdPlayer = new DVDPlayer { File = "..." };
dvdPlayer.PlayAudio();
dvdPlayer.PlayVideo();

The takeaway:

  • An extension method acts like any other method defined on the class; however, it can only see the public members of the class.


Understanding the Boyer-Moore-Horspool algorithm using C# code

The Boyer-Moore-Horspool algorithm is considered one of the most efficient string-matching algorithms. It can be used in text-editor search, command substitution, and highlighting matching substrings.

It works fastest when the search pattern is relatively long.

What is the Boyer-Moore-Horspool algorithm?

The Boyer-Moore-Horspool algorithm finds substrings within strings. It compares the characters of the pattern against the text starting from the pattern's last character. When characters do not match, the search shifts the pattern forward by the value indicated in the bad match table.

What is the Bad Match Table?

The bad match table indicates how many positions the search should shift from the current position when a bad match occurs; it maps each character to the number of steps to shift the substring.

The whole process of this algorithm has been divided into two stages:

  • Generate Bad match table
  • Search Process

Stage 1 – Generate Bad match table

As explained above, the bad match table has two rows: the first contains each character of the substring (the search pattern), and the second contains integer values. The value for each character is calculated using this formula:

 Value = length of substring – index of each letter in the substring – 1

For example, if our substring is “abcdbb” then the bad match table is:

character: a  b  c  d  *
value:     5  1  3  2  6

(the first value is calculated as 6 - 0 - 1)
  • Note that the value for the last letter, and for letters that are not in the substring, is the length of the substring. In our case above, it is 6.
  • Also note that if there is a duplicate character in our substring, the previous value for that character is replaced with the new value. That is why the value of character b has changed from 4 to 1.
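The table-building step can also be sketched in a few lines of JavaScript. This mirrors the formula above and is only an illustration; the C# BadMatchTable class below is the version used in the rest of the article.

```javascript
// Build the bad match table: value = pattern length - index - 1.
// Duplicate characters overwrite earlier values; characters not in the
// table (including a last character that appears nowhere else) fall back
// to the full pattern length.
function badMatchTable(pattern) {
  const table = new Map();
  for (let i = 0; i < pattern.length - 1; i++) {
    table.set(pattern[i], pattern.length - i - 1);
  }
  return { jump: (ch) => table.get(ch) ?? pattern.length };
}

const t = badMatchTable('abcdbb');
console.log(t.jump('a')); // 5 (6 - 0 - 1)
console.log(t.jump('b')); // 1 (the duplicate at index 4 overwrote the earlier 4)
console.log(t.jump('x')); // 6 (not in the pattern: default is the pattern length)
```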

How does the bad match table help?

The bad match table gives good search performance because it avoids many needless comparisons by shifting the pattern significantly relative to the text.

Here is a quick C# class that is responsible for generating a bad match table for any search pattern (substring):

using System;
using System.Collections.Generic;

namespace Learning.BoyerMooreHorspoolSearch
{
    public interface IBadMatchTable
    {
        Dictionary<int, int> Table { get; }
        int NextJump(int character);
    }

    public class BadMatchTable : IBadMatchTable
    {
        private readonly Lazy<Dictionary<int, int>> _table;
        private readonly string _pattern;

        public BadMatchTable(string pattern)
        {
            _table = new Lazy<Dictionary<int, int>>(() => GenerateTable(pattern));
            _pattern = pattern;
        }

        public Dictionary<int, int> Table => _table.Value;

        public int NextJump(int character)
        {
            // Fall back to the pattern length when the character is not in the table.
            return _table.Value.TryGetValue(character, out var jump) ? jump : _pattern.Length;
        }

        private Dictionary<int, int> GenerateTable(string pattern)
        {
            var table = new Dictionary<int, int>(pattern.Length);

            // Last character distance value has to be equal to pattern length, so we will just ignore that for now.
            for (int idx = 0; idx < pattern.Length - 1; idx++)
            {
                table[pattern[idx]] = pattern.Length - idx - 1;
            }

            return table;
        }
    }
}

You can try running this code by using these test cases:

using Learning.BoyerMooreHorspoolSearch;
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace LearningTests
{
    [TestClass]
    public class BadMatchTableTests
    {
  
        [TestMethod]
        public void BadMatchTable_Duplicate_Character_MustReturnsValidResponse()
        {
            var sut = new BadMatchTable("happily");
            Assert.AreEqual(5, sut.Table.Count);
            var expectedValues = new int[] { 6, 5, 3, 2, 1 };
            int i = 0;
            foreach (var item in sut.Table)
            {
                Assert.AreEqual(expectedValues[i++], item.Value);
            }
        }
        [TestMethod]
        public void BadMatchTable_NextJumpForDuplicateMustMatch()
        {
            var sut = new BadMatchTable("happily");
            Assert.AreEqual(3, sut.NextJump('p'));
        }
        [TestMethod]
        public void BadMatchTable_NextJumpForNonMatchedCharacter_Should_return_substringlength()
        {
            var sut = new BadMatchTable("happily");
            Assert.AreEqual(7, sut.NextJump(' '));
        }
    }
}

Now that we have generated a “bad match table” for our search pattern “happily”, we will use this table to search for “happily” in a text:

“mobile citi was happy to oblige to another request happily.”

character: h  a  p  i  l  *
value:     6  5  3  2  1  7

(the first value is calculated as 7 - 0 - 1)

Stage 2 – Search Process

In this stage, the substring is compared starting from its last character. If a character does not match, the bad match table is used to skip characters: we look up the mismatched text character in the bad match table, and if it is not found we use the default value, which in our case is 7.

By now you should have an idea of how the whole search process works. If not, don’t worry: here is a nice piece of C# code that you can copy and paste into your project, together with a couple of test cases, to see the magic of this search:

using System.Collections.Generic;

namespace Learning.BoyerMooreHorspoolSearch
{
    public class StringSearchMatch
    {
        public int StartIndex { get; set; }
        public int Length { get; set; }
    }

 public class BoyerMooreHorspool
    {
        private readonly IBadMatchTable _badMatchTable;

        public BoyerMooreHorspool(IBadMatchTable badMatchTable)
        {
            _badMatchTable = badMatchTable;
        }

        public IEnumerable<StringSearchMatch> Search(string text, string pattern)
        {
            int currentStartIndex = 0;
            while (currentStartIndex <= text.Length - pattern.Length)
            {
                int charactersLeftToMatch = pattern.Length - 1;
                while (charactersLeftToMatch >= 0 && pattern[charactersLeftToMatch] == text[currentStartIndex + charactersLeftToMatch])
                {
                    charactersLeftToMatch--;
                }
                if (charactersLeftToMatch < 0)
                {
                    yield return new StringSearchMatch { StartIndex = currentStartIndex, Length = pattern.Length };
                    currentStartIndex += pattern.Length;
                }
                else
                {
                        
                    currentStartIndex += _badMatchTable.NextJump(text[currentStartIndex + pattern.Length - 1]) ;
                }
            }
        }
    }
}

You can try running this code by using these test cases:

using Learning.BoyerMooreHorspoolSearch;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using System.Linq;

namespace LearningTests
{
    [TestClass]
    public class BoyerMooreHorspoolSearchTests
    {

        [TestMethod]
        public void BoyerMooreHorspool_MustReturn_SingleCount()
        {
            var text = "Mobile citi was happy to oblige to another request happily.".ToLower();
            var sut = new BoyerMooreHorspool(new BadMatchTable("happily"));
            var searchResult = sut.Search(text, "happily");
            Assert.AreEqual(1, searchResult.Count());
            Assert.AreEqual(51, searchResult.First().StartIndex);
        }
    }
}

In summary, this whole search process is based on these three steps:

  • If the substring letter matches, compare the preceding (backward) letter.
  • If it doesn’t match, look up the mismatched text character’s value in the bad match table. If no value is found, use the default value, which is the search pattern length.
  • Then skip the number of positions that the table value indicates, and repeat the whole process until you reach the end.
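The three steps above can be condensed into a small JavaScript sketch of the whole search. This is an illustrative port of the C# code above, returning the start index of each match; the function name is mine, not from any library.

```javascript
function horspoolSearch(text, pattern) {
  // Stage 1: bad match table (value = length - index - 1, default = length).
  const table = new Map();
  for (let i = 0; i < pattern.length - 1; i++) {
    table.set(pattern[i], pattern.length - i - 1);
  }
  const jump = (ch) => table.get(ch) ?? pattern.length;

  // Stage 2: compare right-to-left, shifting by the table value on mismatch.
  const matches = [];
  let start = 0;
  while (start <= text.length - pattern.length) {
    let k = pattern.length - 1;
    while (k >= 0 && pattern[k] === text[start + k]) k--;
    if (k < 0) {
      matches.push(start); // full match
      start += pattern.length;
    } else {
      // Shift by the value of the text character aligned with the
      // pattern's last position.
      start += jump(text[start + pattern.length - 1]);
    }
  }
  return matches;
}

const text = 'mobile citi was happy to oblige to another request happily.';
console.log(horspoolSearch(text, 'happily')); // [ 51 ]
```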

Analysis:

  • The Boyer-Moore-Horspool algorithm is extremely fast on a large alphabet (relative to the length of the pattern).
  • Preprocessing: uses only the bad-character shift. The bad match table is efficient for small alphabets.
  • Best case: Θ(n/m)
  • Worst case: Θ(nm)

Prototype in JavaScript

What is a prototype in JavaScript?

An encapsulation of properties that an object links to. Simply put, it allows you to share functions or properties with every single object of a specific type. Every JavaScript object has a prototype, and the prototype is itself an object. All JavaScript objects inherit their properties and methods from their prototype.

There are two types of prototypes:

1. A function prototype is the object instance that will become the prototype for all objects created using that function as a constructor.
2. An object prototype is the object instance from which an object inherits.

Let us take an example:

function Employee(name)
{ 
  this.name = name;
}

Now the Employee.prototype is the function prototype. If I create an object of type Employee, then it has an object prototype, which is accessible as

employee.__proto__

Please note that the Employee.prototype is pointing to the same object as employee.__proto__.

var employee = new Employee(name);
Employee.prototype ===  employee.__proto__

Similarly, if I create another employee object, its __proto__ also points to the Employee.prototype object.

var employee1 = new Employee(name);
employee1.__proto__ === employee.__proto__; // true
employee1.__proto__ === Employee.prototype; // true

It means that the prototype object is shared between objects of type Employee.

How is a property read by the JavaScript engine?

If I type employee.salary(), it will throw an error saying that the salary method does not exist. However, if I modify the prototype object so that the salary() function is attached to the Employee prototype, then salary() will be available for every single object of type Employee.

Employee.prototype.salary = function () {return 0; };

Now employee.salary() will return 0, which means that if an object does not have a property, the JavaScript engine checks the __proto__ object.

You can check who owns the salary property with this line of code:

employee.__proto__.hasOwnProperty('salary')

It will return true because salary is assigned to Employee prototype.

How can I change the prototype object?

If you assign a new object to the prototype property, a new prototype object is created; however, any existing objects still point to the old prototype. So far I have two variables, employee and employee1, both pointing to the same prototype, which means that employee.age and employee1.age return the same value, i.e. undefined. However, if I replace the Employee prototype like this:

Employee.prototype = {age: 10}

Then employee.age and employee1.age are still the same, i.e. undefined, but employee.__proto__ no longer points to Employee.prototype.

When I create a new object of an Employee like this:

var employee2 = new Employee();

The employee2 object points to the new prototype, so employee2.age will return 10.
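The whole sequence above can be put together into one runnable snippet:

```javascript
function Employee(name) { this.name = name; }

var employee = new Employee('a');
var employee1 = new Employee('b');
console.log(employee.age, employee1.age); // undefined undefined

// Reassigning Employee.prototype creates a brand-new prototype object...
Employee.prototype = { age: 10 };

// ...but existing instances keep pointing at the old prototype:
console.log(employee.age); // undefined
console.log(employee.__proto__ === Employee.prototype); // false

// Only objects created after the reassignment see the new prototype:
var employee2 = new Employee('c');
console.log(employee2.age); // 10
```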

How does inheritance work?

  • Make a call to the parent function using <base function>.call(this, <parameters>).
  • Set the prototype for the derived class.
  • Set prototype.constructor for the derived class.
'use strict'

function Employee(name) {
  this.name = name;
}

Employee.prototype.age = 5;
Employee.prototype.calculateSalary = function() { return 1000; };

function Manager(name) {
  // if you do not call the base constructor,
  // you are not going to have a name.
  Employee.call(this, name); // 1.
  this.hasCar = false;
}

Manager.prototype = Object.create(Employee.prototype); // 2.
Manager.prototype.constructor = Manager; // 3.

var manager = new Manager('test');

console.log(manager.calculateSalary());

How is prototyping done with classes?

'use strict'

class Employee {
  constructor(name) {
    this.name = name;
  }

  calculateSalary() {
    return 1000;
  }
}

class Manager extends Employee {
  constructor(name, hasCar) {
    super(name);
    this.hasCar = hasCar;
  }
}

var manager = new Manager('test', true);
console.log(manager.calculateSalary());
Playground – JavaScript Object Properties

How to define properties?

There are many ways to do this:

1. Assign an object literal, a.k.a. bracket notation.

var employee = {
 name : {first: 'Vinod', last: 'Kumar'},
 gender: 'M'
};

2. Use the . operator

employee.fullName = "Amit Malhotra";

3. Use the [] operator

employee["fullName"] = "Amit Malhotra";

4. Use the ECMAScript 5 defineProperty with an accessor (get/set) descriptor

Object.defineProperty(employee, 'fullName', {
  get: function() {
    return this.name.first + ' ' + this.name.last;
  },
  set: function(value) {
    var nameParts = value.split(' ');
    this.name.first = nameParts[0];
    this.name.last = nameParts[1];
  }
});
 
employee.fullName = 'Amit Malhotra'
 
console.log(employee.name.first); // OUT: Amit

5. Use ECMAScript 5 defineProperty with property descriptor

Object.defineProperty(employee, 'fullName', {
 value: 'Amit Malhotra',
 writable: true,
 enumerable: true,
 configurable: true
});

What is a Property Descriptor after all?

In JavaScript, you can define metadata about a property. The available descriptor attributes are writable, enumerable, and configurable.

You can get the property descriptor using Object.getOwnPropertyDescriptor method.

Object.getOwnPropertyDescriptor(employee.name,'first');
 
// Out: Object
 
/* {
  value: "Amit",
  writable: true,
  enumerable: true,
  configurable: true
} */

writable – allows the value to be changed.

Object.defineProperty(employee, 'age', {writable: false});

Now if I try changing the property:

employee.age = 12

Then it will throw this error:

TypeError: Cannot assign to read only property 'age' of object '#<Object>'

Please note that it throws an exception only under 'use strict'; otherwise it fails silently without changing the value of age to 12.

enumerable – allows the property to be enumerated, like this:

for(var propertyname in employee){
 display(propertyname + ': ' + employee[propertyname])
}

It returns name: [object Object] and gender: M.

If you set the enumerable to false:

Object.defineProperty(employee, 'gender', 
{enumerable: false})

 

Object.keys(employee) will then not return the gender property. Similarly, if you set {enumerable: true} again, it will return gender. You can still access the property directly as employee.gender; you just cannot see it in Object.keys(employee).

configurable – controls whether the property descriptor itself can be changed:

Object.defineProperty(employee,'age', {configurable: false})

Now you cannot change the enumerable or configurable attributes, or delete age. However, you can still change writable (from true to false only).

 

Validate JSON response with URLs | Scripting Postman

I have been in a situation, testing an API with Postman, where the server response contains a collection of URLs and I want to ensure that these URLs are active, i.e. return HTTP OK.

To achieve this objective, I have to do the following:

  1. Call API, and store url result
  2. Call each URL and assert

Let us go one by one.

Call API, and store URL result
(PS: instead of making an API call that returns URLs, I am faking my first API call
with www.google.com and hard-coding the URLs.)
  • Open Postman and create a folder called “ValidateResponseURL”.
  • Create a Postman request named “get-contents”. It will call your API; for this demo, I am calling www.google.com.
  • Go to the Tests tab, and check that the response code is 200 for “GET google.com”.
  • Store a collection of URLs in an environment variable using postman.setEnvironmentVariable(“…”).
  • Please note that, instead of storing the URL collection as-is, you need to store the first element in a separate environment variable, so that you can decide whether there is any result from the server. It will also let you use this dynamic URL in the next step.
Call each URL and assert
  • If you run the first step and check the environment, you will find two environment variables: “testurl” and “testurls”.
  • Create another Postman request named “validate-urls”.
  • Select the GET verb, and use the “testurl” environment variable, i.e. {{testurl}}, as your URL.
  • In the Tests tab of this request, validate that the response code is 200.
  • Fetch the next URL from the “testurls” environment variable, and execute the “validate-urls” step again.
  • When there is nothing left in the “testurls” collection, clear the environment variables.
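The queue logic behind the last three bullets can be sketched as plain JavaScript. The env object below is a stand-in for Postman's postman.setEnvironmentVariable / postman.getEnvironmentVariable so the snippet runs outside Postman, and the URLs are made up; inside Postman you would also call postman.setNextRequest as noted in the comments.

```javascript
// Stand-in for Postman's environment-variable store.
const env = {
  vars: {},
  set(k, v) { this.vars[k] = v; },
  get(k) { return this.vars[k]; },
  unset(k) { delete this.vars[k]; }
};

// "get-contents": store the URL list, popping the first URL to test.
const urls = ['https://example.com/a', 'https://example.com/b'];
env.set('testurl', urls.shift());
env.set('testurls', JSON.stringify(urls));

// "validate-urls": after asserting the response code is 200, shift the
// next URL off the queue, or clean up when the queue is empty.
function advance() {
  const remaining = JSON.parse(env.get('testurls'));
  if (remaining.length > 0) {
    env.set('testurl', remaining.shift());
    env.set('testurls', JSON.stringify(remaining));
    // In Postman: postman.setNextRequest('validate-urls');
  } else {
    env.unset('testurl');
    env.unset('testurls');
    // In Postman: postman.setNextRequest(null);
  }
}

console.log(env.get('testurl')); // https://example.com/a
advance();
console.log(env.get('testurl')); // https://example.com/b
advance();
console.log(env.get('testurl')); // undefined
```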

Once everything is set up, you can execute the folder using the Runner.

As you can see in the result, all my websites are alive and responding with the HTTP OK response code, i.e. 200.

That’s all folks!

Namaste.

References:

“LOOPS AND DYNAMIC VARIABLES IN POSTMAN: PART 2”, https://thisendout.com/2017/02/22/loops-dynamic-variables-postman-pt2/

“Branching and Looping”, https://www.getpostman.com/docs/postman/scripts/branching_and_looping

“Test script” , https://www.getpostman.com/docs/postman/scripts/test_scripts

 

My first Sketchnotes on “The Sketchnote Handbook”

Ever since I got to know about “The Sketchnote Handbook“, I wanted to read it. The main reason for this curiosity is a belief of mine that is reflected in this book: the belief in communicating through visuals. What do I mean by visuals? I mean global communication, which involves the freedom to innovate and design without being restricted by any grammar, teacher, or rule developed by so-called intellectuals. It is about communicating with anyone, and reflecting your understanding of a topic using icons and random text.

English is a language not a measure of your intelligence

It did not take much time, maybe a week, to read this book. As I relearned the importance of basic design techniques from it, I decided to record a reflection of my understanding using sketchnotes.

These two pictures show my understanding of the topics I studied in this book.

Page 2

I hope you will use the same for your reference.

Please don’t think that you are not good at drawing. If you need any motivation, then please look at this picture of my daughter. Just like her, I am sure you were able to draw during your childhood.

Source of Motivation - Children

Once again, it does not matter how good or bad you are at sketching or English; your focus should be on just sketching, sketching, sketching, and speaking your knowledge.

Namaste!

 

Build Android – Continuous Integration with Jenkins and Docker


This tutorial assumes that you have Jenkins running in Docker. You can read about installing it here.

 

Once the Jenkins instance, named jenkins-master, is installed, open a root bash shell, i.e. the command prompt for your container:

e.g. docker exec -it --user root jenkins-master bash

From the bash command run the following:

> cd /opt

Find the SDK version you need and download it:

> wget http://dl.google.com/android/android-sdk_r24.0.1-linux.tgz

Unpack the file:

> tar zxvf <filename of the just downloaded file>

You can now remove the file you just downloaded:

rm <filename of the just downloaded file>

Now some environment variables need to be set.

vi /etc/profile.d/android.sh

(Note: by default vim is not present in the Docker image; install it with:

apt-get update
apt-get install vim)

Add the following lines:

export ANDROID_HOME="/opt/android-sdk-linux"
export PATH="$ANDROID_HOME/tools:$ANDROID_HOME/platform-tools:$PATH"

Then reload the file:

source /etc/profile

Now you should be able to call android from the command line.


First, list the available Android SDK packages and platform tools:

android list sdk --all --extended

Get the item serial number from the output and run:

> android update sdk -u -a -t <serial number>

Replace <serial number> with the serial numbers of the required platform version, Android support library, and SDK version.

Check your android gradle file for the required version.

For the Android SDK to be accessible by Jenkins, execute the following:

> chmod -R 755 /opt/android-sdk-linux

If you get this error
Cannot run program “/usr/local/android-sdk-linux/build-tools/19.0.3/aapt”: error=2, No such file or directory

then run this command: 

> apt-get install lib32stdc++6 lib32z1

 

All set; now restart your container:

> docker restart CONTAINERNAME/ID

If everything is done correctly, you should be able to set up the Gradle task and generate an Android apk/jar/aar :-).

Reference: https://www.digitalocean.com/community/tutorials/how-to-build-android-apps-with-jenkins

Android – Share code between multiple applications

Physical Path way

It works well when code contributors share a common drive location. If you are the only one who maintains this library across different projects, then this can be your favourite option.

Open your app's settings.gradle and add these lines:

include ':app'
include ':networkservices'
include ':common'

project (':networkservices').projectDir = new File('/Users/mramit/Documents/gits/lib/networkservices')
project (':common').projectDir = new File('/Users/mramit/Documents/gits/lib/common')

How to use it in an app or library?

All you have to do is add a dependency on the library:

dependencies {
    compile project(':networkservices')
}

AAR way

Just as you create a JAR for Java, you can do the same for Android. However, a JAR does not work well when you have resources to share, e.g. strings.xml.

Instead of a JAR, the recommendation is to create an AAR file, a.k.a. an Android Archive.

Why aar?

The aar format is built on top of the jar format. It was invented because an Android library needs to bundle Android-specific files like AndroidManifest.xml, resources, assets or JNI libraries, which fall outside the jar standard. Basically, an aar is a normal zip file, just like a jar, but with a different file structure; the jar itself is embedded inside the aar under the name classes.jar. The full structure is listed below:

– /AndroidManifest.xml (mandatory)
– /classes.jar (mandatory)
– /res/ (mandatory)
– /R.txt (mandatory)
– /assets/ (optional)
– /libs/*.jar (optional)
– /jni/<abi>/*.so (optional)
– /proguard.txt (optional)
– /lint.jar (optional)

Then when should you use a JAR?

If you are planning to provide any resources (res) in your common repo, then the recommendation is *not* to use a JAR.
Otherwise, you may go for a JAR.

How to create an aar?

The only requirement is that the module is a library, with the library plugin applied in your library's build.gradle:

apply plugin: 'com.android.library'

Nothing else needs to be done. After you build with the Gradle task, go to the build/outputs/aar/ folder to copy and share the aar file.

How to use an aar in your app or library?

Put the aar file in the libs directory (create it if needed), then add the following to your build.gradle:

dependencies {
  compile(name:'nameOfYourAARFileWithNoExtension', ext:'aar')
}
repositories{
  flatDir{
      dirs 'libs'
  }
}

Node.JS: Error Cannot find module [SOLVED]

Even though I had installed my npm package globally, I received the following error:

Error: Cannot find module 'color'
 at Function.Module._resolveFilename (module.js:338:15)
 at Function.Module._load (module.js:280:25)
 at Module.require (module.js:364:17)
 at require (module.js:380:17)
 at repl:1:2
 at REPLServer.self.eval (repl.js:110:21)
 at Interface. (repl.js:239:12)
 at Interface.emit (events.js:95:17)
 at Interface._onLine (readline.js:202:10)
 at Interface._line (readline.js:531:8)

I had assumed that once an npm package is installed with the "-g" or "--global" switch, Node would find the package automatically. But the struggle of installing, uninstalling, reinstalling and clearing the local cache did not solve my problem.

Overall, I knew how module resolution works after an "npm install". What I did not know was that there is a variable called $NODE_PATH, which needs to have the right value.

For anyone else running into this problem: check the value of the $NODE_PATH variable with this command:

root$ echo $NODE_PATH

If it is empty then this article may give you the solution that you are looking for.

What should be the value of this variable?

Let's find out the appropriate value for $NODE_PATH.

Type the following command:

root$ which npm

This command will give you the path where npm is installed and running from.

In my case it is "/usr/local/bin/npm"; note down the path.

Navigate to /usr/local with Finder/Explorer. You will find a folder called "lib", and within it a node_modules folder, which is your global module folder. This is the place where all your global packages are installed.
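The same derivation can be scripted. The path below is the one from my machine (assume it is what `which npm` printed for you): strip the trailing /bin/npm to get the prefix, then append lib/node_modules.

```shell
# Result of `which npm` (assumes the common /usr/local layout; substitute your own).
npm_bin='/usr/local/bin/npm'

# /usr/local/bin/npm -> /usr/local/bin -> /usr/local
prefix=$(dirname "$(dirname "$npm_bin")")

# Global module folder: /usr/local/lib/node_modules
echo "$prefix/lib/node_modules"
```

On recent npm versions, `npm root -g` prints this directory directly.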

All you have to do now is set the NODE_PATH with the path that you have found for node_modules.

example:

export NODE_PATH='module path'

In my case it is /usr/local/lib/node_modules

export NODE_PATH='/usr/local/lib/node_modules'

NOTE: Another, probably easier, way to find your global node_modules folder is to install any package with the --verbose flag.
For example, you can run:

root$ npm install --global --verbose promised-io

It will install the package and print the location where promised-io was installed. You can take that location and set it in $NODE_PATH.

Here is another twist.

Now everything will work fine within the current terminal session. But if you restart the terminal and then echo $NODE_PATH, it will return an empty response.

What is the permanent solution?

You need to make the above export statement part of your shell startup files so that it is set as soon as you log in.

STEPS:

  1. Close all your terminal windows and open a new one.
  2. Type root$ vi ~/.bashrc and add this line: export NODE_PATH='module path'

    In my case:

    export NODE_PATH='/usr/local/lib/node_modules'

  3. Type root$ vi ~/.bash_profile and add this line: source ~/.bashrc
  4. Close all terminal windows and try "echo $NODE_PATH" again in a new window.

    If it still does not work, then for the first time just type this command in the same window:

    source ~/.bash_profile
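Put together, the two files end up looking roughly like this (the module path is the one from my machine; substitute your own):

```shell
# ~/.bashrc — sets the variable for every interactive shell
export NODE_PATH='/usr/local/lib/node_modules'

# ~/.bash_profile — read by login shells; chain to ~/.bashrc
source ~/.bashrc
```

This way both login and non-login shells pick up the same NODE_PATH value.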

 

Know more about $NODE_PATH

(Reference: https://nodejs.org/api/modules.html#modules_loading_from_the_global_folders )

Loading from the global folders

If the NODE_PATH environment variable is set to a colon-delimited list of absolute paths, then Node.js will search those paths for modules if they are not found elsewhere. (Note: On Windows, NODE_PATH is delimited by semicolons instead of colons.)

NODE_PATH was originally created to support loading modules from varying paths before the current module resolution algorithm was frozen.

NODE_PATH is still supported, but is less necessary now that the Node.js ecosystem has settled on a convention for locating dependent modules. Sometimes deployments that rely on NODE_PATH show surprising behavior when people are unaware that NODE_PATH must be set. Sometimes a module's dependencies change, causing a different version (or even a different module) to be loaded as NODE_PATH is searched.

Additionally, Node.js will search in the following locations:

  • $HOME/.node_modules
  • $HOME/.node_libraries
  • $PREFIX/lib/node

Where $HOME is the user’s home directory, and $PREFIX is Node.js’s configured node_prefix.

These are mostly for historic reasons. You are highly encouraged to place your dependencies locally in node_modules folders. They will be loaded faster, and more reliably.