Android – Share code between multiple applications

Physical Path way

It may work well when you have a common drive location shared among code contributors. If you are the only one who maintains this library for different projects, then this can be your favourite option.

Open your app's settings.gradle and add these lines:

include ':app'
include ':networkservices'
include ':common'

project (':networkservices').projectDir = new File('/Users/mramit/Documents/gits/lib/networkservices')
project (':common').projectDir = new File('/Users/mramit/Documents/gits/lib/common')

How to use it in your app / library?

All you have to do is add a dependency on this library:

dependencies {
    compile project(':networkservices')
}

AAR way

Just like you create a JAR for Java, you can do the same for Android. However, a JAR does not work well when you have resources to share, e.g. strings.xml.

Instead of a JAR, the recommendation is to create an AAR file, a.k.a. an Android Archive.

Why aar?

An aar file is built on top of a jar file. It was invented because an Android library needs to bundle Android-specific files such as AndroidManifest.xml, resources, assets, or JNI libraries, which fall outside the jar standard. So aar was invented to cover all of those things. Basically it is a normal zip file, just like a jar, but with a different file structure. The jar file is embedded inside the aar file under the name classes.jar. The rest of the contents are listed below:

– /AndroidManifest.xml (mandatory)
– /classes.jar (mandatory)
– /res/ (mandatory)
– /R.txt (mandatory)
– /assets/ (optional)
– /libs/*.jar (optional)
– /jni/<abi>/*.so (optional)
– /proguard.txt (optional)
– /lint.jar (optional)

Then when to use a JAR?

If you are planning to provide any resources (res) in your common repo then the recommendation is *not* to use a JAR.
Otherwise, you may go for a JAR.

How to create an aar?

The requirement is that the module should be a library, with the Android library plugin applied in its build.gradle:

apply plugin: 'com.android.library'

There is nothing else that needs to be done. When you build the library with the Gradle task, go to the build/outputs/aar/ folder to copy and share the aar file.
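For example, from the project root you can build the release aar with the standard Gradle task (a minimal sketch, assuming your library module is named common as in the settings.gradle above):

./gradlew :common:assembleRelease

The generated file will then sit in common/build/outputs/aar/.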

How to use aar in your app or library?

Put the aar file in the libs directory (create it if needed), then add the following code to your build.gradle:

dependencies {
  compile(name:'nameOfYourAARFileWithNoExtension', ext:'aar')
}
repositories{
  flatDir{
      dirs 'libs'
  }
}

Node.JS: Error Cannot find module [SOLVED]

Even though I have installed my npm package globally, I receive the following error:

Error: Cannot find module 'color'
 at Function.Module._resolveFilename (module.js:338:15)
 at Function.Module._load (module.js:280:25)
 at Module.require (module.js:364:17)
 at require (module.js:380:17)
 at repl:1:2
 at REPLServer.self.eval (repl.js:110:21)
 at Interface. (repl.js:239:12)
 at Interface.emit (events.js:95:17)
 at Interface._onLine (readline.js:202:10)
 at Interface._line (readline.js:531:8)

I assumed that once an npm package is installed with the "-g" or "--global" switch, Node will find the package automatically. But the struggle of installing, uninstalling, reinstalling, and clearing the cache locally did not solve my problem.

Overall, I knew how the process of searching for a module works with the "npm install" command. What I did not know is that there is a variable called $NODE_PATH, which needs to have the right value.

For anyone else running into this problem, you need to check the value of the $NODE_PATH variable with this command:

root$ echo $NODE_PATH

If it is empty then this article may give you the solution that you are looking for.

What should be the value of this variable?

Let's find out the appropriate value for $NODE_PATH.

Type in the following command line:

root$ which npm

This command will give you the path where npm is installed and running from.

In my case it is "/usr/local/bin/npm"; note down this path.

Navigate to /usr/local with the help of Finder/Explorer. You will find a folder called "lib", and within that "lib" folder you will see the node_modules folder, which is your global module folder. This is the place where all your global packages are installed.

All you have to do now is set the NODE_PATH with the path that you have found for node_modules.

example:

export NODE_PATH='module path'

In my case it is /usr/local/lib/node_modules

export NODE_PATH='/usr/local/lib/node_modules'

NOTE: Another, and probably easier, way to find your global node_modules folder is to install any package with the --verbose flag.
For example, you can run

root$ npm install --global --verbose promised-io

It will install the npm package and it will give you the location where promised-io is installed. You can just pick the location and set the same in $NODE_PATH.
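There is also a simpler way if you just need the path: npm has a built-in command that prints the global node_modules folder directly.

root$ npm root -g

With the setup above it should print /usr/local/lib/node_modules, which is exactly the value to put into $NODE_PATH.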

Here is another twist.

Now everything will work fine within the current terminal session. If you restart the terminal and then echo $NODE_PATH, it will return an empty response.

What is the permanent solution?

You need to make the above export statement a part of your .bash_profile file so that it is set as soon as you log in.

STEPS:

  1. Close all your terminal windows and open a new one.
  2. Type root$ vi ~/.bashrc and add this line: export NODE_PATH='module path'

    In my case:

    export NODE_PATH='/usr/local/lib/node_modules'

  3. Type root$ vi ~/.bash_profile and add this line: source ~/.bashrc
  4. Close all terminal windows and try again with "echo $NODE_PATH" in a new command window.

    If it still does not work, then for the first time just type this command in the same window:

    source ~/.bash_profile

 

Know more about  $NODE_PATH

(Reference: https://nodejs.org/api/modules.html#modules_loading_from_the_global_folders )

Loading from the global folders

If the NODE_PATH environment variable is set to a colon-delimited list of absolute paths, then Node.js will search those paths for modules if they are not found elsewhere. (Note: On Windows, NODE_PATH is delimited by semicolons instead of colons.)

NODE_PATH was originally created to support loading modules from varying paths before the current module resolution algorithm was frozen.

NODE_PATH is still supported, but is less necessary now that the Node.js ecosystem has settled on a convention for locating dependent modules. Sometimes deployments that rely on NODE_PATH show surprising behavior when people are unaware that NODE_PATH must be set. Sometimes a module’s dependencies change, causing a different version (or even a different module) to be loaded as the NODE_PATH is searched.

Additionally, Node.js will search in the following locations:

  • 1: $HOME/.node_modules
  • 2: $HOME/.node_libraries
  • 3: $PREFIX/lib/node

Where $HOME is the user’s home directory, and $PREFIX is Node.js’s configured node_prefix.

These are mostly for historic reasons. You are highly encouraged to place your dependencies locally in node_modules folders. They will be loaded faster, and more reliably.

Setting HttpContext Response using HttpResponseMessage or Request.CreateResponse in WebApi2

Background

In my recent Continuous Improvement (CI) initiative I have been introducing a few ActionFilters for the WebApi controllers.

These action filters validate the request by checking the payload or the current user token against some business logic. If the request does not fulfil the business requirements, then the filter should stop further processing and send a response (ActionContext.Response) back from the filter pipeline.

In my projects, as of now, the default response content type is JSON.

HttpContent – ObjectContent or StringContent

The .NET Framework provides a few built-in implementations of HttpContent; here are some of the most commonly used:

  • ByteArrayContent: represents in-memory raw binary content
  • StringContent: represents text in a specific encoding (this is a specialisation of ByteArrayContent)
  • StreamContent: represents raw binary content in the form of a Stream.
  • ObjectContent: the generic implementation that wraps and serialises an object of type <T>.

Problem

My basic requirement is to send JSON, and I got stuck on the question of which of the objects the .NET Framework provides I should use. Because my response type is JSON, which is a string, I could use StringContent, StreamContent, or even ObjectContent. The problem: what is the difference, and what is the best approach when choosing among the HttpContent subclasses?

Let's dig into each type one by one.

StringContent

StringContent is a subclass of ByteArrayContent, which in turn inherits from HttpContent.

If I use StringContent, then I have to specify a lot of things when building the object, such as the content type, the character set, and so on:

var errorResponse = new ResponseBase {
    Messages = new List<Message> {
        new Message {
            Type = MessageTypes.Error,
            Code = ErrorCode,
            Description = ErrorMessage
        }
    }
};

var response = new HttpResponseMessage {
    StatusCode = System.Net.HttpStatusCode.OK,
    Content = new StringContent(
        Newtonsoft.Json.JsonConvert.SerializeObject(errorResponse),
        System.Text.Encoding.UTF8,
        "application/json"),
};

actionContext.Response = response;

I am setting the encoding as well as the content type manually. Thus, what if the client negotiates for XML as the content type?

ByteArrayContent is definitely a good candidate when you have your data in byte format, such as picture content served from the server.

ObjectContent

I can use the ObjectContent class, which inherits from HttpContent. I can even pass a formatter object. However, it is not that easy to use, because I need to pass the object type and it cannot pick up the formatter automatically. Again, there are a lot of hard-coded settings that I need to pass to the ObjectContent.

var response = new HttpResponseMessage(HttpStatusCode.OK)
 {
 Content = new ObjectContent(typeof(ResponseBase),
 myobject,
 GlobalConfiguration.Configuration.Formatters.JsonFormatter)
 };

actionContext.Response = response;

Another important reason for not using StringContent, ByteArrayContent, or ObjectContent directly with HttpResponseMessage is that they do not recognise any serialisation configuration, such as camel-case settings. Thus, you either have to pass the configuration in (where it is accepted) or do the manipulation manually.

So what should be used then?

Well… the winner is… the Request.CreateResponse extension method.

Even though I have not listed it among the candidates above, the winner is somebody else. If you are using WebApi 2, like I am, it provides an extension method on the HTTP request message object: instead of creating an HttpResponseMessage object and assigning it to the response, we can simply call the actionContext.Request.CreateResponse(…) extension method.

actionContext.Response = actionContext.Request.CreateResponse(HttpStatusCode.OK, 
modelState.ValidationErrors(ErrorCode));

Benefits

  • It is neat and clean. I don’t have to create an HttpResponse object and set its contents separately.
  • Based on the current HttpConfiguration and the Content-Type passed in the request header, the CreateResponse extension negotiates the formatter it will use from the HttpConfiguration.Formatters list. It means that I don’t have to specify any serialisation configuration.
  • If the configuration has been modified, for example to use camel case for JSON, then it is picked up automatically with no special check from our side (see the snippet after this list).
  • It looks after everything by default; otherwise we would have to do a lot manually. Thus it removes some potential bugs as well.
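For context, the camel-case behaviour mentioned above comes from the globally configured JSON formatter. A minimal sketch of that configuration (typically placed in WebApiConfig or Global.asax; CamelCasePropertyNamesContractResolver lives in Newtonsoft.Json.Serialization):

// Configure the default Json.NET formatter to emit camelCase property names.
// Request.CreateResponse will then pick this setting up automatically.
var jsonFormatter = GlobalConfiguration.Configuration.Formatters.JsonFormatter;
jsonFormatter.SerializerSettings.ContractResolver =
    new Newtonsoft.Json.Serialization.CamelCasePropertyNamesContractResolver();
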
What is the value of actionContext.Response.Content.Headers.ContentType if you use the Request.CreateResponse method?

By using CreateResponse, the content type is checked automatically from the request, and the matching formatter is used. If there is no Content-Type header in the request, then the default is whichever formatter comes first in the HttpConfiguration.Formatters list.

In the case of StringContent, we had to hard-code the content type, so even if the client is negotiating for an XML content type, it will be sent JSON, which is wrong.

 

Git: Renaming myfile to MyFile on case-insensitive file systems, such as Windows.

Introduction:

It all started when I wanted to change one solution file name from esd.wealth.services.sln to ESD.Wealth.Services.sln.

I thought it was going to be as easy as renaming the file in the operating system and then pushing my changes. However, it was not that easy with my Windows machine and Git.

I managed to change the file name in my file system, but then Git could not identify the change.

Then I sensed that it had something to do with the case insensitivity of file names on Windows.

Solution:

After researching a bit I found this link. It tells you how easy it is to change a folder name:

git mv foldername tempname && git mv tempname folderName

So I applied the same with my file.

git mv esd.wealth.services.sln esd.wealth.services1.sln && git mv esd.wealth.services1.sln ESD.Wealth.Services.sln

Now I can see my changes and it is ready to push.

Is it not simple?

🙂

Reference:

http://www.patrick-wied.at/blog/rename-files-and-folders-with-git

Demystifying NodeJs “exports” keyword

Introduction

There has been a little confusion in my mind about using module.exports or exports in my Node.js coding. Looking at code on GitHub and elsewhere, many of us use the following export statements:

  • exports.fn = function() { … }
  • module.exports.fn = function() ( … )
  • module.exports = exports = myobject.

Then my mind wonders:

  • What is the difference between “module.exports” and “exports”?
  • Why has Node.js introduced module.exports as well as exports?
  • What does module.exports = exports = object mean?
  • What do I prefer?

In this post, I will try to answer the above four questions. First, we will give a practical sense of the exports.fn, module.exports.fn and module.exports = exports statements.

So let us start our journey with codebased

Explanation

Simplified Version

“exports” is simply a reference to module.exports, which initially is defined as an empty object that you can add properties to. Thus, exports.fn is shorthand for module.exports.fn.

As a result, if “exports” is reassigned to anything else, it breaks the reference between module.exports and “exports”. Because module.exports is what really gets exported, “exports” will no longer work as expected. Thus, to ensure both are referencing the same memory location, many of us reassign the value to module.exports as well as exports at the same time.

module.exports = exports = myobject

Detailed Version

In this section, I will try to demonstrate the same with code.

I assume that Node.js is available from your command prompt. If you need installation help, please click here.

Let us create a new file in node.

-------- laptop.js -----------
 
 exports.start = function(who) {
  console.log(who + ' instance has started this laptop');
 }

// calling a method within laptop.js
module.exports.start('module.exports');

Now create a new file that will import the laptop module.

-------- app.js -----------
 
 require('./laptop.js')

Call app.js from the command prompt.

c:\ node app.js

Output
-----------------------------------------------

module.exports instance has started this laptop.

As you can see, even though we defined the function on the “exports” variable, it is available through “module.exports”.

Similarly, if you do it the opposite way, that will work too.

-------- laptop.js -----------
 
 module.exports.start = function(who) {
  console.log(who + ' instance has started this laptop');
 }

exports.start('exports');

Call app.js from the command prompt.

c:\ node app.js

Output
-----------------------------------------------

exports instance has started this laptop.

Now let see what happens here:

-------- laptop.js -----------
module.exports.start = function(who) {
 console.log(who + ' instance has started this laptop');
 }
 exports.start = function(who) {
 console.log(who + ' instance has started this laptop');
 }

exports.start('exports');
module.exports.start('module.exports');

Any guess what the outcome will be?

Yes, module.exports.start has been overridden by exports.start.

Call app.js from the command prompt.

c:\ node app.js
 Output
 -----------------------------------------------

 exports instance has started this laptop.
 exports instance has started this laptop.

It is clear that “exports” is an alias to “module.exports”.

We will now recall our questions one by one and answer them:

1. What is the difference between exports vs module.exports ?

Node.js does not let you replace the “exports” variable with a reference to another memory address and have that exported. However, you can attach any number of properties to “exports”. Thus anything assigned directly to “exports” (rather than to one of its properties) will not be available when the module is required.

You can export anything through “module.exports”, but not by reassigning the “exports” variable.

You can do this:

module.exports = function() {
 }

or

module.exports = myobject;

Basically, anything that you have exported through module.exports will be available in app.js above. However, you cannot do the following in your laptop.js and then expect it to be available in app.js:

exports = function () {
 }

or

exports = myobject;

It is clear now that you can export anything (function, object, constant value) through “module.exports” but not with “exports”.

Sounds crazy? Yes it is.

2. Why they have introduced module.exports as well as exports ?

I think the main reason could be to reduce the number of characters you have to type.

3. What does module.exports = exports = object mean?

Many of us set module.exports and exports at the same time, to ensure exports isn’t referencing the prior exported object. By setting both, you can keep using exports as a shorthand and avoid potential bugs later on down the road (within the same file).

Here is a piece of code to demonstrate it:

------ laptop.js ------

exports = "exports";
module.exports = "module.exports";

console.log(exports);
console.log(module.exports);

Call app.js from the command prompt.

c:\ node app.js
 Output
 -----------------------------------------------
exports
module.exports

Because they now point to different locations, if by any chance the module uses the “exports” value (not the “module.exports” variable) after it is set, the two will not be in sync.

Thus, to keep them in sync, it is advisable to define a rule: whenever we set module.exports to any value, set the same value to exports on the same line.

Here is an example:

------ laptop.js ------
 
exports = 'i am value ';
module.exports = exports = function() {
   console.log('function is called.');
 }
console.log(typeof exports)
-------- app.js --------

var laptop = require('./laptop.js');
laptop();

 

Call app.js from the command prompt.

c:\ node app.js
 Output
 -----------------------------------------------
function
function is called.

You can see in the output that, because “exports” as well as “module.exports” are set to a function, the output of the “typeof” statement is “function”. If you had not reassigned exports to the function, the typeof statement would have produced “string”.

Thus, to remove any potential bugs we decide to set exports as well as “module.exports” at the same time.

Now this discussion is coming to an end with the last question i.e.

What do I prefer?

Personally, I prefer to export a constructor function that can be used to create an instance of the module.

example:

------ laptop.js ------

var laptop = function() {}
laptop.prototype.start = function() {};
laptop.prototype.stop = function() {};
laptop.prototype.suspend = function() {};
module.exports = exports = laptop;
 ------ app.js ------
var laptop = require('./laptop')
var mac = new laptop();
var win = new laptop();

However, if I want to provide a singleton object, then I replace the last line in laptop.js:

------ laptop.js ------

module.exports = exports = new laptop();

and in app.js

 ------ app.js ------

var mac = require('./laptop');
mac.start();

Conclusion

  • We understand now that exports is an alias to module.exports, a shorthand you can use when writing modules.
  • It is recommended to keep the exports alias pointing at the module.exports value; thus you should set exports whenever you reassign module.exports.
  • Since my background is .NET, I recommend exporting a class (constructor function) or an object.

– Happy coding!

npm --save or --save-dev. Which one to use?

Introduction

If you have ever worked in Node.js, you must have installed one or two packages through the "npm install <package>" command. By running this command, npm will install the package in your working directory, under node_modules.

To save these packages as dependencies in package.json, you have two choices:

  • --save-dev
  • --save
What is package.json? 

All npm packages contain a file, usually in the project root, called package.json - this file holds various metadata relevant to the project. This file is used to give information to npm that allows it to identify the project as well as handle the project's dependencies. It can also contain other metadata such as a project description, the version of the project in a particular distribution, license information, and so on.

Let us understand the difference that it can make.

Detail

Say you have a package.json within your root folder of your project.

If you don't have one, then create a package file using the npm init command.

My package.json looks like this:

{
 "name": "TMSPA",
 "version": "1.0.0",
 "description": "Single page application for TM",
 "main": "index.html",
 "scripts": {
 "test": "echo \"Error: no test specified\" && exit 1"
 },
 "repository": {
 "type": "git",
 "url": "https://github.com/codebased/android-test.git"
 },
 "author": "Am",
 "license": "ISC",
 "bugs": {
 "url": "https://github.com/codebased/android-test/issues"
 },
 "homepage": "https://github.com/codebased/android-test"
}

Now I want to install some dependencies.

Before I install one, I need to search for the package name. If you already know the package that you want to install, that's good. Otherwise, you can use the npm search command:

npm search bootstrap

or try one of the online npm package search sites.

Once you have identified the right package that you want to install, you can use the mentioned command i.e. npm install <package name>.

Here you have two, actually three, options.

1. Use --save-dev
e.g. npm install should --save-dev

You will use this option when you want to download a package that is needed only during development, such as grunt or gulp. Thus, when you are distributing your code to production, these dependencies will not be installed.

As an example, let's say you want to use gulp as your task runner. This package is required for development purposes only. Thus, you should use --save-dev here.

npm install gulp --save-dev

The above command will save the gulp dependency under the devDependencies section of your package.json, as shown below:

{
 "name": "TMSPA",
 "version": "1.0.0",
 "description": "Single page application for TM",
 "main": "index.html",
 "scripts": {
..
 "author": "Codebased",
..
..,
 "devDependencies": {
 "gulp": "^3.8.11"
 }
}
2. Use --save flag

You will use this option when you want to save a package as a runtime dependency for distribution. For items such as angularjs, or any other module that is required at run time by your program, you will use the --save switch.

npm install angularjs --save

Now my package.json looks like this:

{
 "name": "TMSPA",
 "version": "1.0.0",
 "description": "Single page application for TM",
...,
 "dependencies":{
 "angularjs": "^1.4."
 },
 "devDependencies": {
 "gulp": "^3.8.11"
 }
}



3. Use nothing

If you call the npm install command without any flag, then it will still install the package. However, package.json will not be updated with the dependency.

This option is not recommended, because others will have no way to know about the dependencies that your module has.

Conclusion

In conclusion, we understand that the --save-dev and --save flags are used to scope your dependencies.

 

SQL Queries to IIS Logs

Overview

There is often a case where you want to integrate IIS logs into your project. You can rely on third-party services or interfaces. However, the best approach is to have one application where you can integrate the logic of reading IIS logs.

Microsoft provides a tool called Log Parser that supports SQL-like queries against IIS log files.

You can get the latest version of this tool from here.

Otherwise, you can also install it through the choco command (the Chocolatey package id should be logparser):

choco install logparser

Get Started

 

Go to the path C:\Program Files (x86)\Log Parser 2.2\

(Screenshot: Log Parser installation directory)

Log Parser by default provides a COM DLL that you can import into native C/C++ projects, or into a .NET project using the Interop facility.

To use the COM DLL in a .NET project, you can also use the tlbimp.exe command.
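For example, from a Visual Studio developer command prompt you could generate the interop assembly roughly like this (a sketch assuming the default install path; the output file name is your choice):

tlbimp "C:\Program Files (x86)\Log Parser 2.2\LogParser.dll" /out:Interop.MSUtil.dll

The generated Interop.MSUtil.dll can then be referenced from the .NET project instead of adding the COM reference through the IDE.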

Open command prompt and run this simple select statement:

 "C:\Program Files (x86)\Log Parser 2.2\LogParser.exe" "select * from c:\inetpub\logs\logfiles\w3svc1\u_ex150101.log"

You can also view the output in a GUI through the -o:DataGrid switch:

 "C:\Program Files (x86)\Log Parser 2.2\LogParser.exe" "select * from c:\inetpub\logs\logfiles\w3svc1\u_ex150101.log" -I:w3c -o:datagrid

(Screenshot: Log Parser DataGrid output)

Command Reference:

C# Integration

The first thing you have to do is add a reference to the Log Parser COM library in your .NET project using the Visual Studio IDE.

IWebLogService.cs:

 using System;
 using System.Collections.Generic;
 using System.Configuration;
 using MSUtil;

 public interface IWebLogService
    {
        List<IISLogCount> GetLogs(string fileName = null, string api = null);
        List<IISLog> GetLogDetails(string uri, string fileName = null);
    }
  public class WebLogService : IWebLogService
    {
        public List<IISLogCount> GetLogs(string fileName = null, string api = null)
        {
            if (string.IsNullOrWhiteSpace(fileName))
            {
                fileName = "{0}\\*.log".FormatMessage(ConfigurationManager.AppSettings["IISLOGPATH"]);
            } 

            if (string.IsNullOrWhiteSpace(fileName))
            {
                throw new ArgumentNullException(fileName);
            }

            string query = string.Empty;

            if (string.IsNullOrWhiteSpace(api))
            {
                query = @"
                SELECT date, cs-uri-stem, cs-method, count(cs-uri-stem) as requestcount from {0}
                WHERE STRLEN (cs-username ) > 0 
                GROUP BY date, cs-method, cs-uri-stem 
                ORDER BY date, cs-uri-stem, cs-method, count(cs-uri-stem) desc".FormatMessage(fileName);
            }
            else
            {
                query = @"
            SELECT date, cs-uri-stem, cs-method, count(cs-uri-stem) as requestcount from {0}
                WHERE cs-uri-stem LIKE {1} and STRLEN (cs-username ) > 0 
                GROUP BY date, cs-method, cs-uri-stem 
                ORDER BY date, cs-uri-stem, cs-method, count(cs-uri-stem) desc".FormatMessage(fileName, " '%/api/{0}%' ".FormatMessage(api));
            }

            var recordSet = this.ExecuteQuery(query);
            var records = new List<IISLogCount>();
            int hit = 0;
            for (; !recordSet.atEnd(); recordSet.moveNext())
            {
                var record = recordSet.getRecord().toNativeString(",").Split(new[] { ',' });
                // Default the hit count to zero when it cannot be parsed.
                if (!int.TryParse(record[3], out hit))
                {
                    hit = 0;
                }

                records.Add(new IISLogCount { Hit = hit, Log = new IISLog { EntryTime = Convert.ToDateTime(record[0]), UriStem = record[1], Method = record[2] } });
            }

            return records;
        }
        public List<IISLog> GetLogDetails(string uri, string fileName = null)
        {
            if (string.IsNullOrWhiteSpace(fileName))
            {
                fileName = "{0}\\*.log".FormatMessage(ConfigurationManager.AppSettings["IISLOGPATH"]);
            }

           if (string.IsNullOrWhiteSpace(fileName))
            {
                throw new ArgumentNullException(fileName);
            }
           string query = string.Empty;

            query = @"SELECT"
            + " TO_TIMESTAMP(date, time) AS EntryTime"
            + ", s-ip AS ServerIpAddress"
            + ", cs-method AS Method"
            + ", cs-uri-stem AS UriStem"
            + ", cs-uri-query AS UriQuery"
            + ", s-port AS Port"
            + ", cs-username AS Username"
            + ", c-ip AS ClientIpAddress"
            + ", cs(User-Agent) AS UserAgent"
            + ", cs(Referer) AS Referrer"
            + ", sc-status AS HttpStatus"
            + ", sc-substatus AS HttpSubstatus"
            + ", sc-win32-status AS Win32Status"
            + ", time-taken AS TimeTaken"
            + " from {0} WHERE cs-uri-stem = '{1}' and STRLEN (cs-username ) > 0  ORDER BY EntryTime".FormatMessage(fileName, uri);

            var resultSet = this.ExecuteQuery(query);

            var records = new List<IISLog>();
            for (; !resultSet.atEnd(); resultSet.moveNext())
            {
                var record = resultSet.getRecord().toNativeString(",").Split(new[] { ',' });

                records.Add(new IISLog { EntryTime = Convert.ToDateTime(record[0]), ServerIPAddress = record[1], Method = record[2], UriStem = record[3], UriQuery = record[4], UserAgent = record[8] });
            }

            return records;
        }

        internal ILogRecordset ExecuteQuery(string query)
        {
           LogQueryClass logQuery = new LogQueryClass();
            MSUtil.COMW3CInputContextClass iisLog = new MSUtil.COMW3CInputContextClass();
            return logQuery.Execute(query, iisLog);
        }
    }
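One note on the code above: FormatMessage is not part of the framework; it is a custom string helper used throughout my projects. A minimal sketch of what it might look like (the name and behaviour are assumed from its usage as a string.Format wrapper):

public static class StringExtensions
{
    // Hypothetical helper assumed from the usage above: a thin wrapper over string.Format.
    public static string FormatMessage(this string format, params object[] args)
    {
        return string.Format(format, args);
    }
}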

This is what my POC model classes look like:

public class IISLog
    {
        public string LogFilename { get; set; }
        public int RowNumber { get; set; }
        public DateTime EntryTime { get; set; }
        public string SiteName { get; set; }
        public string ServerName { get; set; }
        public string ServerIPAddress { get; set; }
        public string Method { get; set; }
        public string UriStem { get; set; }
        public string UriQuery { get; set; }
        public int Port { get; set; }
        public string Username { get; set; }
        public string ClientIpAddress { get; set; }
        public string HttpVersion { get; set; }
        public string UserAgent { get; set; }
        public string Cookie { get; set; }
        public string Referrer { get; set; }
        public string Hostname { get; set; }
        public int HttpStatus { get; set; }
        public int HttpSubstatus { get; set; }
        public int Win32Status { get; set; }
        public int BytesFromServerToClient { get; set; }
        public int BytesFromClientToServer { get; set; }
        public int TimeTaken { get; set; }
    }

And:

 

public class IISLogCount
    {
        public IISLog Log
        {
            get;
            set;
        }

        public int Hit { get; set; }
    }

Once the service class has been defined, you can create whatever proxy you want over it.

Here I am using an ApiController-derived class as the proxy. This class passes the HTTP request on to the service and returns the response over HTTP.

      [HttpGet]
        [Route("applications/iislog")]
        public IHttpActionResult GenerateIISLog(string fileName = null, string api = null)
        {
            return Ok(_weblogService.GetLogs(fileName, api));
        }
        [HttpGet]
        [Route("applications/iislogdetails")]
        public IHttpActionResult GenerateIISLogDetails(string uri, string fileName = null)
        {
            return Ok(_weblogService.GetLogDetails(uri, fileName));
        }

I’m using the Ninject IoC/DI framework to inject the service into the controller:

kernel.Bind(typeof(IWebLogService)).To(typeof(WebLogService)).InRequestScope();
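For completeness, the _weblogService field used by the actions above comes in through constructor injection. A hypothetical controller skeleton (the class name is illustrative, not taken from the original project):

public class ApplicationsController : ApiController
{
    private readonly IWebLogService _weblogService;

    // Ninject resolves IWebLogService to WebLogService (see the binding above)
    // and passes it into this constructor for every request.
    public ApplicationsController(IWebLogService weblogService)
    {
        _weblogService = weblogService;
    }

    // ... the GenerateIISLog and GenerateIISLogDetails actions shown earlier go here ...
}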

 

Points of Interest

As you can see in the sample above, I managed to read a text file using SQL commands. Log Parser is a fantastic tool for retrieving usage stats, bandwidth, slow pages, and many more details.

One thing I like most about Log Parser is that it can even query Directory Service logs, mail logs, Event Viewer logs, and so on, again using SQL commands.

 

Nodejs tools

nodemon:

By default, the node process needs to be restarted to pick up any change in your JS files. Thus, every time there is a change, you need to stop the node process and start it again.

You can automate this by using the nodemon npm package. This package will monitor changes and restart your application:

npm install nodemon -g

Once the installation is completed you can start your node process by this command:

nodemon app.js

That’s it!

Now, as soon as you make changes to app.js, nodemon will restart the process for you.

Node-inspector

Another important tool that you will want during development is node-inspector. This tool allows you to debug your Node.js code.

Install this tool by this command:

npm install node-inspector -g

Then start the inspector and run your app with debugging enabled:

node-inspector & node --debug app.js

Another way to run your process in debug is:

node-debug app.js

Using Node-Inspector and Nodemon together:

You can use both packages together by using this command:

node-inspector & nodemon --debug app.js

 

Extension Methods in C#

Let's say you want to extend a class that you cannot inherit from. Such classes might be defined as sealed, or might live in a third-party DLL that you have just downloaded from NuGet. How would you extend these classes?

Let us take the example of types you cannot inherit from in C#, such as the structs DateTime and Int32 (structs are implicitly sealed) or the sealed class String.

You know that you cannot extend DateTime like this:

public struct CustomDateTime : DateTime    {    }
Error at compile time: Type 'System.DateTime' in interface list is not an interface

One option is to wrap a DateTime variable within a CustomDateTime class and then provide your custom behaviour on top. As an example:

public class CustomDateTime
 {
 private DateTime _dateTime;
public CustomDateTime()
 {
}
 public CustomDateTime(long ticks)
 {
 }
public CustomDateTime(long ticks, DateTimeKind kind)
 {
 }
public CustomDateTime(int year, int month, int day)
 {
 }
public CustomDateTime(int year, int month, int day, Calendar calendar)
 {
 }
public CustomDateTime(int year, int month, int day, int hour, int minute, int second)
 {
 }
public CustomDateTime(int year, int month, int day, int hour, int minute, int second, Calendar calendar)
 {
 }
public CustomDateTime(int year, int month, int day, int hour, int minute, int second, DateTimeKind kind)
 {
 }
public CustomDateTime(int year, int month, int day, int hour, int minute, int second, int millisecond)
 {
 }
public CustomDateTime(int year, int month, int day, int hour, int minute, int second, int millisecond,
 Calendar calendar)
 {
 }
public CustomDateTime(int year, int month, int day, int hour, int minute, int second, int millisecond,
 DateTimeKind kind)
 {
 }
public CustomDateTime(int year, int month, int day, int hour, int minute, int second, int millisecond,
 Calendar calendar, DateTimeKind kind)
 {
 }
 }
 

As you can see, you are actually reinventing the wheel here.

You have another, second option: define a static utility class such as:

public static class DateTimeUtility
 {
 public static string CustomFormat(DateTime date)
 {
 return "Your Date is: " + date.ToLongDateString();
 }
 }
public class Program
 {
 public static void Main()
 {
 DateTimeUtility.CustomFormat(new DateTime());
 }
 }

 

However, it is not user friendly because:

  • You have to pass the variable in explicitly
  • It is not easy to read
  • You have to write more code, e.g. DateTimeUtility.CustomFormat(date) instead of date.CustomFormat()

 

Extension methods are not a necessity, but they are an elegant way of writing code.

With the help of an extension method, CustomFormat becomes (in effect) an instance method of DateTime.

The above code will now look like this:

 

public static class DateTimeUtility
    {
        public static string CustomFormat(this DateTime date)
        {
            return "Your Date is: " + date.ToLongDateString();
        }
    }

    public class Program
    {
        public static void Main()
        {
            var dateTime = new DateTime();
            dateTime.CustomFormat();
        }
    }

Hence, you can write your method once, with a generic type parameter where needed, and use it again and again (see the DateTime usage sketch after the example below).
Another example could be a range check:

public static class ComparableExtensions
{
 public static bool WithRange<T>(this T actual, T lower, T upper) where T : IComparable<T>
 {
 return actual.CompareTo(lower) >= 0 && actual.CompareTo(upper) < 0;
 }
}
var number = 5;
if (number.WithRange(3, 7))
{
    // ....
}
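To illustrate the "write once, reuse for any comparable type" point, here is a small usage sketch with DateTime (note that, as written above, WithRange treats the upper bound as exclusive):

var today = DateTime.Now;
if (today.WithRange(new DateTime(2015, 1, 1), new DateTime(2016, 1, 1)))
{
    // today falls somewhere within 2015
}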

 

Agile in Primary Schools

Instructors in primary schools far and wide are starting to use Agile to build a culture of learning. This attitude is what has led education leaders to integrate Agile learning into these schools. Agile learning focuses on individuals and interactions over processes and tools, and on meaningful learning over the measurement of learning.

Although a large part of Agile involves routine standardised testing, it isn't the sort of testing that measures content knowledge; it's the kind that measures thinking. Genuine learning in primary schools means that young students will discover the importance of learning for the rest of their lives.

Over the years since its first adoption, the Agile philosophy has been found to encourage continuous improvement. So far, Agile-integrated schools have been found to share a good number of qualities. For one, their highest priority is to satisfy the needs of students and their families through early and continuous delivery of meaningful learning. They deliver genuine learning often, from every few days to every few weeks, with a preference for the shorter timescale. Teachers and the participating families cooperate every day to create learning opportunities for all members. This has been found to really help learning, especially in primary schools, where confined spaces often result in restless children.

 

(Board image: columns labelled "On Your Mark", "Get Set", "Go!", and "Finish Line", each holding sticky notes.)

(Agile learning schools integrate a child-friendly version of Scrum, with terms such as “On Your Mark” and “Go!” making children look forward to work. It also makes them visualize the work they have completed. )

Agile-integrated schools also build projects around motivated people, give them the environment and support they need, and trust them to get the job done. They recognise that the most efficient and effective method of conveying information to and within a group is face-to-face conversation. Primary schools that practise Agile learning have processes that promote sustainability: instructors, students, and families should be able to maintain a steady pace indefinitely. This kind of learning holds that continuous attention to technical excellence and good design enhances flexibility, and that simplicity, the art of maximising the amount of work not done, is crucial. The best ideas and activities emerge from self-organising groups. Finally, over the past couple of years, primary schools that have integrated Agile learning have seen their students become much more successful in all other areas of life.

Like traditional Agile environments, Agile learning in primary schools makes use of a "sprint". A "sprint" is a time-boxed period within which classes focus on a set of outcomes to be achieved before the end of the time-box. Much the same as a sprint in track-and-field events, it is a brief stretch with a starting line and a finishing line, except that in this case the boundary is not distance but time. When one sprint closes, the following one starts.

In addition to everything else, sprints have taken care of the issue of young students falling through the cracks despite a general sense of vigilance, and of instructors squandering time on misguided units of study that run for months on end without any meaningful evaluation of learning. In the long run, this sort of learning shows students that school is indeed a place that provides useful information and applicable methods. It teaches them that education has its own merit, and is not merely somewhere to escape their home lives and drone out the words of their teachers.


(Agile learning often uses small circle groups for interactive learning.)

Agile learning in primary schools has also tackled the issue of isolation that so many educators struggle with. Teams have been formed out of pairs of educators. This has taken the form of conventional team teaching, of "guest" teaching where instructors take turns leading lessons according to their individual instructional strengths, and of cross-class activities. Working together for extended periods and sharing responsibility for the same students has built up the regular use of streamlined practice, and it also gives educators many more chances to understand the students' needs.

Conclusion:

Agile processes have proven to be most useful in the workplace, especially in software development. Because this has worked so well in those environments, education leaders have chosen to integrate similar methods into primary schools. Using such tools as "sprints" and flexible planning has a strong effect on young students and how they learn. Not only are they doing better in school; they realise that education, and learning, is a positive thing they should integrate into their daily lives for the rest of their lives. This also makes children think positively about school and institutions, making them much less likely to turn to crime later in their teens and adulthood. It encourages teamwork and quick thinking, and focuses on learning more than on the result. By the end, students feel accomplished and more ready to take on other challenges in the world. As a whole, the positive trend that Agile learning has created in primary schools is a template that many more schools should follow.