Monday, May 2, 2016

Interview at Amazon Bangalore

Some time back I interviewed with Amazon Bangalore. I first went through a telephonic round that lasted about an hour. Once I cleared that, I received an email from HR asking me to write a short piece about my achievements, either technical or managerial. So I chose what I had been doing best in the last two years at my previous employment.

Instances when you adopted new innovative ideas from your team or elsewhere to improve and simplify your product/ Process. How do you quantify the benefits? What was the learning?

Below is what I actually submitted to Amazon; after that I was called in for a F2F interview with them.

One of the most satisfying moments in my career was while I was at XXXX.
I led two mission-critical, high-visibility products in the YYYYY department. To give you some context, the YYY product has over 450 hospitals as a client base and runs at some of the most important hospitals in the US.

When I came into the organization, XXXX was in the end stages of porting the YYYY application to a newer technology. At the same time, the existing team had to maintain and support the older product line on an older technology. Towards the end of 2014, as the product went GA, issues cropped up from our beta customers and we were caught up in addressing them as soon as they came. Given the limited resources and the need to support two lines of the product, it was apparent that we would soon hit a crunch on time and people.

The first things I set about optimizing were the following:

1) Sync-up meetings (Bangalore Team -- US Team)
On average, each team member spent two hours updating their task status, once at the beginning of the week and once at the end. Given a team size of 15, that is a cumulative 30 hours a week.

Why did I feel this was important?
a. Breakdown in stakeholder communication
b. Lack of trust.
c. Effective hours spent in office
d. Colleagues get frustrated.

Root cause & Fix
The team tended to take on more tasks without understanding the depth of the problem. Also, since the new product line was developed by consultants and handed over to the dev team, we were stuck with issues coming out of regression testing. This led us to over-commit relative to our capacity and to slip on tasks.

I approached this issue by including my Director in Bangalore in a plan of approach and gaining consensus to:
○ Limit participants to the Engineering Manager, Lead, PM, and QA lead, and reduce the duration and frequency
○ Set up a separate mid-week meeting between me and my counterparts in the US to sync up on team performance
○ Work with the team to improve communication to key stakeholders by first breaking each task down to the smallest possible work item and providing a better estimate
○ Speak to my counterparts and lay out a quarter-over-quarter improvement plan; as they saw the team's performance improve, I was able to bring this down to one half-hour meeting a week

2) Service Package Quality (SP)
We deliver code break-fixes to our customers on a monthly basis. Package quality is a metric used at XXXX that roughly compares the number of client downloads of the SP against the number of tickets raised through support for that download.
This was one of the biggest challenges I faced: how do I improve my team's code quality and build a customer base that raves about it? When I joined XXXX in Q3 2013, I was faced with a poor package quality number that had the visibility of every VP at XXXXX. Towards the end of my tenure I was able to move the bar higher for the team through the following corrections.



Root Cause & Fix
• QA at XXXX is a separate organization altogether, and I realized that though one or two QA people were involved in testing, we can't improve our quality if QA leadership isn't behind us. We had to find a way to collaborate better and improve quality as a team.
• Co-ordination: QA and Dev were not aligned on the nature of changes. For a proposed code change, QA wasn't given time to align existing test cases or write new ones, because all the estimates came from the architect, who didn't consult the QA lead on what their estimates would be.
• No formal design review meetings at the engineering level, and failure to identify the true impact of a change.
• No formal code review mechanism; though the team already had a process built around Crucible reviews, they were not following it.
• No right-sizing of delivery items, or data to justify how many items we could realistically fix per SP.
• QA team missing key testing areas.
• White-box testing issues from developers.
• Trust issues between the lead architect and the developers, and a breakdown in the team environment.
• No gated check-in to catch failures earlier.
• No automated unit testing frameworks in use.

Improvements made
• Set up a collaborative environment with the QA Manager and worked with him to get QA resources behind our releases; to make the release process more stringent we started bug-bash sessions and unscripted testing.
• Led from the front by driving all the meetings and code reviews. We introduced a rule that the architect, along with two other engineers, must review code before check-in, so that any build failure is caught early.
• Set up meetings and training with the team on identifying issues in code and analyzing the impact of change areas.
• Worked with the team, using their feedback, to improve the environment by ensuring that any code break is treated not as a person's fault but as a fault in the process that needs improvement.
• The architect took up the challenge of setting up a Jenkins box so we could build in our local environment before code is checked in, ensuring we do it right the first time.

Measurable Improvements
• 20 to 30% increase in Service Package Quality numbers quarter on quarter compared to where we started in Q3 2013
• 100% critical enhancement and bug-fix delivery to clients for Obligations (customer funded fixes)
• Improved customer feedback and reduction in number of support tickets open

Friday, November 6, 2015

Facebook behind the scenes - TAO

Ever wondered what the technology behind the scenes at Facebook is? Take a look at this video; it is surprisingly honest and concise.
Graphs and more Graphs as a data structure.

If you are serving billions of queries every second against petabytes of information, this is how Facebook does it.

Click here to see the presentation on TAO

Thursday, May 2, 2013

WCF Handling Exceptions

If you are designing any complicated service where you want to ensure that all possible requests are served, a proper error-handling strategy becomes important.
In one of my projects I had a retry mechanism that would try three times to send/process the request on certain types of errors. Below is the function that drives the retry decision.

  public bool IsError(System.Exception ex)
  {
    bool retry = false;

    if (ex is System.ServiceModel.CommunicationException)
    {
      if (ex is FaultException<ExceptionDetail>)
      {
        // We get this case sometimes when the connection has been closed, on the next call.
        if ((ex as FaultException<ExceptionDetail>).Detail.Type !=
          "System.ObjectDisposedException")
        {
          ErrorLogger("generic fault - not retrying.", ex);
        }
      }
      else if (ex is System.ServiceModel.FaultException)
      {
        ErrorLogger("generic fault - not retrying.", ex);
      }
      else if (ex is System.ServiceModel.AddressAccessDeniedException ||
          ex is System.ServiceModel.Security.SecurityAccessDeniedException ||
          ex is System.ServiceModel.Security.SecurityNegotiationException ||
          ex is System.Security.Authentication.AuthenticationException)
      {
        ErrorLogger("Received access denied - not retrying.", ex);
      }
      else if (ex is System.ServiceModel.EndpointNotFoundException)
      {
        ErrorLogger("Endpoint was not found - not retrying.", ex);
      }
      else if (ex is System.ServiceModel.ServerTooBusyException)
      {
        ErrorLogger("Service AppPool may be down - not retrying.", ex);
      }
    }
    else if (ex is System.ObjectDisposedException)
    {
      var ode = ex as ObjectDisposedException;
      if (!ode.ObjectName.Contains("System.ServiceModel.ChannelFactory"))
      {
        ErrorLogger("Object disposed exception not related to the channel factory.", ex);
      }
      else
      {
        ErrorLogger("Received object disposed exception related to the channel factory.", ex);
        retry = true;
      }
    }
    else if (ex is System.TimeoutException || ex is System.Net.Sockets.SocketException)
    {
      if (ex is System.TimeoutException)
      {
        ErrorLogger("Received TimeoutException. Retrying.", ex);
      }
      else
      {
        ErrorLogger("Received SocketException. Retrying.", ex);
      }
      retry = true;
    }
    else
    {
      ErrorLogger("General error, so not retrying.", ex);
    }
    return retry;
  }
The FaultException class belongs to the System.ServiceModel namespace.
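For context, the retry mechanism mentioned above might consume IsError in a loop like the sketch below. InvokeWithRetry and MaxRetries are illustrative names I made up for this example, not part of the original project; IsError returning true means the fault is transient and worth retrying.

```csharp
// Hypothetical sketch of the retry loop built around IsError above.
private const int MaxRetries = 3; // assumption: three attempts, per the post

public TResult InvokeWithRetry<TResult>(Func<TResult> call)
{
    for (int attempt = 1; attempt <= MaxRetries; attempt++)
    {
        try
        {
            return call();
        }
        catch (Exception ex)
        {
            // Only retry transient faults, and only while attempts remain.
            if (!IsError(ex) || attempt == MaxRetries)
            {
                throw;
            }
        }
    }
    throw new InvalidOperationException("Unreachable");
}
```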

Tuesday, April 30, 2013

Generating an ASYNC proxy for your WCF service

If your WCF service supports async calls, the client needs a proxy generated with async support in order to use that capability.
You have to apply the AsyncPattern attribute to your interface operations.


        [OperationContractAttribute(AsyncPattern = true)]
        IAsyncResult BeginProcessStudentEnrollment(AsyncCallback callback, object asyncState);
        Result EndProcessStudentEnrollment(IAsyncResult result);

The async pattern follows Begin and End pairs; internally this makes use of the System.IAsyncResult interface.

In your implementation class you will have to implement both the Begin and End methods. The names are case sensitive, so make sure they match your interface definition exactly.

        public IAsyncResult BeginProcessStudentEnrollment(AsyncCallback callback, object asyncState)
        {
            ServerResponse response = new ServerResponse();
            try
            {
                long rows = GetEnrollment(); // Business layer call
                if (rows > 0)
                {
                    response.SetSuccessResponse(true, "Successfully executed");
                }
                else
                {
                    response.SetSuccessResponse(false, "No records were updated");
                }
            }
            catch (Exception error)
            {
                response.SetErrorResponse(500, "General type exception, see service error logs for details");
            }
            // ServerResponse derives from Result in this example
            return new CompletedAsyncResult<Result>(response);
        }

        public Result EndProcessStudentEnrollment(IAsyncResult r)
        {
            CompletedAsyncResult<Result> result = r as CompletedAsyncResult<Result>;
            return result.Data;
        }
CompletedAsyncResult<T> is the class that wraps your response objects. I have listed this class below.

    /// <summary>
    /// This class handles completed async calls by implementing the IAsyncResult interface.
    /// </summary>
    public class CompletedAsyncResult<T> : IAsyncResult
    {
        T data;

        public CompletedAsyncResult(T data)
        {
            this.data = data;
        }

        public T Data
        {
            get { return data; }
        }

        #region IAsyncResult Members
        public object AsyncState
        {
            get
            {
                return (object)data;
            }
        }

        public WaitHandle AsyncWaitHandle
        {
            get
            {
                // The operation completes synchronously, so no wait handle is ever needed.
                return null;
            }
        }

        public bool CompletedSynchronously
        {
            get
            {
                return true;
            }
        }

        public bool IsCompleted
        {
            get
            {
                return true;
            }
        }
        #endregion
    }


Finally, generating the async-aware proxy is done with this command line:

svcutil http://localhost/StudentPerfService/PerfService.svc?wsdl /a /tcv:Version35
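From the client side, the generated proxy exposes the same Begin/End pair. A minimal usage sketch is below; the proxy class name PerfServiceClient is an assumption about what svcutil generates from this service, not something from the original post.

```csharp
// Hypothetical client-side usage of the svcutil-generated async proxy.
var client = new PerfServiceClient(); // name is an assumption

client.BeginProcessStudentEnrollment(
    ar =>
    {
        // EndProcessStudentEnrollment completes the call and yields the Result.
        Result result = client.EndProcessStudentEnrollment(ar);
        Console.WriteLine("Enrollment processed");
        client.Close();
    },
    null); // asyncState not needed here
```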

Monday, April 29, 2013

How to Mole DataReader in your Unit Tests

In one of my projects I had to write unit test cases around a DataReader object.
We have a middle tier that talks to the database and fetches the records. The middle tier then uses the C# DataReader class to populate the business layer objects.

This example shows a Moles sample on how to do this.

This is the DB layer class that makes the call to the Stored Procedure

public List<ConfigData> GetConfiguration()
        {
            ConnectionString = connectionstring;
            List<ConfigData> configDataList = new List<ConfigData>();
            using (sqlconnection = GetDbConnection())
            {
                using (SqlCommand sqlCommand = GetDbSprocCommand(DataAccessConstants.LoadConfiguration, sqlconnection))
                {
                    if (sqlCommand != null)
                    {
                        if (sqlconnection.State != System.Data.ConnectionState.Open)
                        {
                            sqlconnection.Open();
                        }

                        SqlDataReader dr = sqlCommand.ExecuteReader();
                        ConfigData configdata;
                        while (dr.Read())
                        {
                            configdata = new ConfigData();
                            configdata.ConfigKeyName = dr[0].ToString();
                            configdata.ConfigKeyValue = dr[1].ToString();
                            configDataList.Add(configdata);
                        }
                        dr.Close();
                    }
                }
            }
            return configDataList;
        }

I now want to mole out the SqlDataReader in the code above. Since SqlCommand is a native .NET class I cannot stub it out, so I mole out that assembly by right-clicking the reference and choosing 'Mole it' from the options that come up.
The idea is to unit test the whole while loop above; because the SQL database is mocked, we need to ensure the test case exercises the while loop.

Listed below is how I moled out the DataReader class.


        [TestMethod]
        [HostType("Moles")] // Moles detours require the Moles host type
        public void Sanity_Test()
        {
            MSqlConnection.AllInstances.Open = (c) => { };
            MSqlConnection.AllInstances.Close = (c) => { };

            int readCount = 0;
            object[] data = { "STRING_NAME1", "y", "STRING_Name2", "n", "String_Name3", "ValuesAre" };
            // Create a delegate that simulates a result in the record
            MSqlCommand.AllInstances.ExecuteReader = (SqlCommand cmd) =>
            {
                MSqlDataReader x = new MSqlDataReader();
                x.Read = () => readCount++ == 0; // return one row, then stop
                x.ItemGetInt32 = index => data[index];
                return x;
            };

            // Any other reader instances simply return no rows
            MSqlDataReader.AllInstances.Read = (a) => { return false; };

            // Also create a delegate for the DataReader close
            MSqlDataReader.AllInstances.Close = (a) => { };

            MySpecialClass rep = new MySpecialClass();
            var result = rep.GetConfiguration();
            if (null != result)
            {
                Assert.AreEqual(result[0].ConfigKeyName, "STRING_NAME1");
            }
        }


By creating the object[] array and then moling the ExecuteReader call as shown above, I am able to simulate the reader rows.
The trick is in the following lines:
MSqlDataReader x = new MSqlDataReader();
x.Read = () => readCount++ == 0;
x.ItemGetInt32 = index => data[index];
return x;

Dot NET unit testing frameworks

Unit testing is defined as testing the smallest unit of work. That means testing each function independently of the functions it depends on; this kind of testing exercises just one part of the code in isolation.
If the module depends on an external source, we need a mock, in other words a generated proxy for that class. This way we only test the function we are interested in.

Unit tests are generally run to get code coverage stats, which show how much of the code is covered by the test cases: whether all the conditional statements and error-handling paths are exercised.

Unit testing has wide acceptance in the TDD methodology, where unit tests are created before the code is written. The unit tests are then executed frequently as the code evolves, either manually or through an automated build process. If the unit tests fail, it is considered a bug either in the changed code or in the tests themselves.

There are a lot of mocking frameworks out there, but essentially two different types: those implemented via dynamic proxies and those implemented via the CLR profiler API.


Proxy based mocking frameworks

Proxy-based mocking frameworks use reflection and runtime code generation to dynamically generate classes (proxies) that either implement an interface being mocked or derive from a non-sealed class being mocked.
This approach can be used only when good OO principles and a dependency injection container have been utilized in building the system (i.e. loose coupling, high cohesion, and heavy use of interfaces).

A proxy object is an object used to take the place of a real object. In the case of mock objects, a proxy object imitates the real object your code depends on. Any proxy-based mocking framework can create such a proxy for you.

Pros and Cons

Pros

  • Open source and free
  • Can mock both non-sealed classes and interfaces
  • Type safe
  • Expressive and easy-to-learn syntax
  • Easy-to-use
  • Performance/Speed

Cons

  • Heavily relies on dependency injection pattern
  • Cannot mock non-virtual, non-abstract and static methods
  • Cannot mock sealed classes and private classes
  • Backward compatibility
  • Limited technical support and community
  • Limited documentation

Profiler based Mocking frameworks

Profiler-based mocking frameworks use the CLR profiler APIs to intercept and redirect calls to any method of any type. This makes them capable of mocking sealed types and system classes, and even of intercepting and diverting calls to non-virtual methods of concrete types.

In general, proxy-based frameworks require that your mocks implement an interface. But what do you do when the class you are trying to mock is static or sealed, with no interface? If you can't modify the class, your unit testing efforts are usually stuck. In that case, profiler-based mocking frameworks are generally better for developers working on existing legacy code, since it may not be possible to refactor it into a more testable design.


Pros and Cons

Pros

  • Doesn’t rely on dependency injection pattern
  • Can mock anything including
    • Non-virtual, non-abstract and static methods
    • Sealed and private classes
  • Type safe
  • Expressive and easy-to-learn syntax
  • Community & technical support and documentation
  • Backward compatibility

Cons

  • Not open source
  • Performance – profilers use runtime instrumentation under the hood
  • Not easy-to-use when compared with proxy based mocking frameworks

Recommended Mocking Frameworks

Following are the recommended mocking frameworks based on the factors above.

Moles Framework

Moles Framework is a lightweight, CLR-profiler-based framework for test stubs and detours in .NET that is based on delegates.

The framework actually supports two different kinds of substitution classes, stub types and mole types. These two approaches allow you to create substitute classes for code dependencies under different circumstances.

1. Stub types – Provide a lightweight isolation framework that generates fake stub implementation of virtual methods, interfaces for unit testing. For every stubbed interface and class, code is generated at compile time i.e. one stub class per interface and class.

2. Mole types - use a powerful detouring framework that uses code profiler APIs to intercept calls to dependency classes and redirects the calls to a fake object. It is used to detour any .NET method, including non-virtual and static methods in sealed types.
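A classic illustration of a mole type is detouring the static DateTime.Now property, which no proxy-based framework can touch. The sketch below assumes the generated System.Moles assembly is referenced; MDateTime is the mole type the framework generates for DateTime.

```csharp
// Mole-type sketch: detour a static BCL member.
[TestMethod]
[HostType("Moles")] // detours only work under the Moles host
public void Mole_DateTime_Now()
{
    // Redirect every call to DateTime.Now to a fixed date.
    MDateTime.NowGet = () => new DateTime(2013, 4, 29);

    // Any code under test that reads DateTime.Now now sees the fixed date.
    Assert.AreEqual(new DateTime(2013, 4, 29), DateTime.Now);
}
```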

Pros and Cons

Pros

  • It's from Microsoft
  • Supports legacy code
  • Can mock any type of object, including those from the .NET library

Cons

  • Mole types incur a hefty performance impact because they use runtime instrumentation under the hood.
  • The Moles assemblies need to be regenerated each time a method signature changes in the system.

Moq

Moq is the simplest proxy-based mocking framework. It takes advantage of .NET 3.5 features such as expression trees and C# 3.0 lambda expressions, which make it a productive, type-safe, and refactoring-friendly mocking library. It supports mocking interfaces as well as classes. Its API is extremely simple and straightforward, and doesn't require any prior knowledge of or experience with mocking concepts.
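A minimal Moq sketch looks like the following; IPriceService is a hypothetical interface I invented for illustration. Note the mock.Object property needed to hand the mock over as the interface.

```csharp
// Hypothetical interface for the example.
public interface IPriceService
{
    decimal GetPrice(string sku);
}

[TestMethod]
public void Moq_Basics()
{
    // Arrange: set up the expected call and return value via a lambda.
    var mock = new Mock<IPriceService>();
    mock.Setup(s => s.GetPrice("ABC")).Returns(9.99m);

    // Act: mock.Object is the generated proxy implementing IPriceService.
    decimal price = mock.Object.GetPrice("ABC");

    // Assert the value and that the call happened exactly once.
    Assert.AreEqual(9.99m, price);
    mock.Verify(s => s.GetPrice("ABC"), Times.Once());
}
```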

Pros and Cons

Pros

  • Very much popular
  • Strong technical support and community

Cons

  • Supports .NET runtime 3.5 and above only
  • Difficult to mock static classes and sealed types (code refactoring is required for legacy code)
  • Need to use the "mock.Object" property when passing the mocked interface/class around

NSubstitute

NSubstitute is a friendly, proxy-based mocking framework that makes use of extension methods on object for its API. It is designed for Arrange-Act-Assert (AAA) testing: you arrange how the substitute should behave, act on the code under test, then assert that the substitute received the calls you expected.
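The same style of test in NSubstitute reads like the sketch below; ICalculator is a hypothetical interface for illustration. Notice that no lambdas are needed to set a return value.

```csharp
// Hypothetical interface for the example.
public interface ICalculator
{
    int Add(int a, int b);
}

[TestMethod]
public void NSubstitute_Basics()
{
    // Arrange: return values are set directly on the call, no lambda needed.
    var calculator = Substitute.For<ICalculator>();
    calculator.Add(1, 2).Returns(3);

    // Act
    int sum = calculator.Add(1, 2);

    // Assert the value and that the substitute received the expected call.
    Assert.AreEqual(3, sum);
    calculator.Received().Add(1, 2);
}
```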

Pros and Cons

Pros

  • Simple syntax compared to Moq
  • No need to use lambdas to set return values for a mocked property. Hence improves readability.
  • Better internal error messages compared to Moq

Cons

  • Difficult to mock static classes and sealed types (Code refactoring is required for legacy code)
  • Relatively new framework compared to Moq
  • Minimal technical support and community

I highly recommend reading this article. It has some good insights, like "unit testing isn't about finding bugs" :) Unit Test the right way

Friday, April 19, 2013

WCF REST service hosted on IIS 5

Making your WCF service RESTful is pretty simple in .NET. You have three options: GET, POST, and PUT. You label your contract operations with attributes like the ones below.

[WebGet(UriTemplate = "/StartProcessing")]
[WebInvoke(Method = "POST", UriTemplate = "/SyncCustomerDetails", BodyStyle = WebMessageBodyStyle.WrappedRequest)]
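For context, here is a sketch of how those attributes sit on a full service contract. ICustomerService and the operation signatures are illustrative assumptions, not taken from the original project; only the two attribute lines above come from the post.

```csharp
// Hypothetical REST contract showing where the attributes above belong.
[ServiceContract]
public interface ICustomerService
{
    // GET /StartProcessing
    [OperationContract]
    [WebGet(UriTemplate = "/StartProcessing")]
    string StartProcessing();

    // POST /SyncCustomerDetails with a wrapped request body
    [OperationContract]
    [WebInvoke(Method = "POST",
               UriTemplate = "/SyncCustomerDetails",
               BodyStyle = WebMessageBodyStyle.WrappedRequest)]
    bool SyncCustomerDetails(string customerId, string details);
}
```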


 
When I was working in my local development environment, which had IIS 7.5, everything went fine and I was able to access the REST-based services.
The problem started when I wanted to deploy this to our development lab, which has IIS 5. After it was deployed I could not even access the WCF REST help page (this can be reached by appending /help after your .svc file in the browser).

The error returned was an HTTP 403, and clicking the help link below it would throw the same error. Searching the internet did not provide much help.

The only link I found mentioned that this problem is with IIS 5 and that it does not allow accessing extension-less URLs. To solve this, some websites said to add '*' as a rule; the strange thing is that IIS 5 will not even let you enter such a parameter.

The next suggestion I tried was to add '*' to the wildcard application map extensions for the website/virtual directory. To do this, I had to right-click the website -> Properties -> Virtual Directory tab -> click Configuration.
In the application configuration dialog that comes up, press the Insert button, map to the aspnet_isapi.dll, and uncheck the "Verify that file exists" option.

Sad to say even this option did not work out.

The thing that worked for me was this.

My site is configured to run on ASP.NET 4.0, and the problem was that although the application pool was configured for ASP.NET 4.0, the account under which it runs did not have access to C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\Temporary ASP.NET Files.
The error that was returned provided no help here either; there were no errors in the IIS logs and none in the event viewer, so it really sidetracked the investigation.

I hope someone out there finds this useful and can save some time by trying these steps.