Friday, November 6, 2015

Facebook behind the scenes - TAO

Ever wondered what technology powers Facebook behind the scenes? Take a look at this video; it is surprisingly honest and concise.
Graphs, and more graphs, as the data structure.

If you need to serve billions of queries per second over petabytes of information, this is how Facebook does it.

Click here to see the presentation on TAO

Thursday, May 2, 2013

WCF Handling Exceptions

If you are designing any complicated service where you want to ensure that all possible requests are served, it becomes important to have a proper error-handling strategy.
In one of my projects I had a retry mechanism which would try three times to send / process the request on certain types of errors. I have listed below the function which drives the retry decision.

  public bool IsError(System.Exception ex)
  {
   bool retry = false;

   if (ex is System.ServiceModel.CommunicationException)
   {
    if (ex is FaultException<ExceptionDetail>)
    {
     // We sometimes get this case on the next call after the connection has been closed.
     if ((ex as FaultException<ExceptionDetail>).Detail.Type !=
      "System.ObjectDisposedException")
     {
      ErrorLogger("generic fault - not retrying.", ex);
     }
    }
    else if (ex is System.ServiceModel.FaultException)
    {
     ErrorLogger("generic fault - not retrying.", ex);
    }
    else if (ex is System.ServiceModel.AddressAccessDeniedException ||
       ex is System.ServiceModel.Security.SecurityAccessDeniedException ||
       ex is System.ServiceModel.Security.SecurityNegotiationException ||
       ex is System.Security.Authentication.AuthenticationException)
    {
     ErrorLogger("Received access denied - not retrying.", ex);
    }
    else if (ex is System.ServiceModel.EndpointNotFoundException)
    {
     ErrorLogger("Endpoint was not found - not retrying.", ex);
    }
    else if (ex is System.ServiceModel.ServerTooBusyException)
    {
     ErrorLogger("Service AppPool may be down - not retrying.", ex);
    }
   }
   else if (ex is System.ObjectDisposedException)
   {
    var ode = ex as ObjectDisposedException;
    if (!ode.ObjectName.Contains("System.ServiceModel.ChannelFactory"))
    {
     ErrorLogger("Object disposed exception not related to the channel factory.", ex);
    }
    else
    {
     ErrorLogger("Received object disposed exception related to the channel factory.", ex);
     retry = true;
    }
   }
   else if (ex is System.TimeoutException || ex is System.Net.Sockets.SocketException)
   {
    if (ex is System.TimeoutException)
    {
     ErrorLogger("Received TimeoutException. Retrying.", ex);
    }
    else
    {
     ErrorLogger("Received SocketException. Retrying.", ex);
    }
    retry = true;
   }
   else
   {
    ErrorLogger("General error - not retrying.", ex);
   }
   return retry;
  }
The FaultException class lives in the System.ServiceModel namespace.
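For context, a retry wrapper consuming a predicate like this might look as follows. This is a minimal self-contained sketch: the simplified IsError (which retries only on timeouts), the three-attempt limit, and the fake failing operation are all illustrative assumptions, not part of the original service.

```csharp
using System;

public static class RetryDemo
{
    // Simplified stand-in for the IsError predicate above:
    // retry only on timeouts (illustrative assumption).
    public static bool IsError(Exception ex) => ex is TimeoutException;

    // Invokes 'operation', retrying up to maxAttempts times
    // whenever IsError says the failure is transient.
    public static T InvokeWithRetry<T>(Func<T> operation, int maxAttempts = 3)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return operation();
            }
            catch (Exception ex) when (IsError(ex) && attempt < maxAttempts)
            {
                Console.WriteLine($"Attempt {attempt} failed, retrying...");
            }
        }
    }

    public static void Main()
    {
        int calls = 0;
        // Fails twice with a timeout, then succeeds on the third attempt.
        string result = InvokeWithRetry(() =>
        {
            calls++;
            if (calls < 3) throw new TimeoutException();
            return "ok";
        });
        Console.WriteLine($"{result} after {calls} attempts");
    }
}
```

Note that non-transient exceptions (for which IsError returns false) simply propagate out of the wrapper, so the caller still sees hard failures immediately.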

Tuesday, April 30, 2013

Generating an ASYNC proxy for your WCF service

If your WCF service supports async calls, the client needs a proxy generated with async support in order to use that capability.
You mark the operations on your service interface with the AsyncPattern property of the OperationContract attribute.


        [OperationContractAttribute(AsyncPattern = true)]
        IAsyncResult BeginProcessStudentEnrollment(AsyncCallback callback, object asyncState);
        Result EndProcessStudentEnrollment(IAsyncResult result);

The async pattern follows Begin and End pairs; internally this builds on the System.IAsyncResult interface.

In your implementation class you will have to implement both the Begin and End pairs. The names are case sensitive, so make sure they match your interface definition exactly.

public IAsyncResult BeginProcessStudentEnrollment(AsyncCallback callback, object asyncState)
        {
            ServerResponse response = new ServerResponse();
            try
            {             
                    long rows = GetEnrollment(); //Business Layer call
                    if (rows > 0)
                    {
                        response.SetSuccessResponse(true, "Successfully executed");
                    }
                    else
                    {
                        response.SetSuccessResponse(false, "No records were updated");
                    }
            }
            
            catch (Exception error)
            {
              response.SetErrorResponse(500, "General type exception, see service error logs for details");
            }
            return new CompletedAsyncResult<ServerResponse>(response);
        }

        public Result EndProcessStudentEnrollment(IAsyncResult r)
        {
            // Assumes ServerResponse derives from Result.
            var result = r as CompletedAsyncResult<ServerResponse>;
            return result.Data;
        }
CompletedAsyncResult<T> is the generic class that wraps your response objects. I have listed this class below.

    /// <summary>
    /// This class handles completed async calls by implementing the IAsyncResult interface.
    /// </summary>
    public class CompletedAsyncResult<T> : IAsyncResult
    {
        T data;

        public CompletedAsyncResult(T data)
        {
            this.data = data;
        }

        public T Data
        {
            get { return data; }
        }

        #region IAsyncResult Members
        public object AsyncState
        {
            get
            {
                return (object)data;
            }
        }

        public WaitHandle AsyncWaitHandle
        {
            get
            {
                return null;
            }
        }

        public bool CompletedSynchronously
        {
            get
            {
                return true;
            }
        }

        public bool IsCompleted
        {
            get
            {
                return true;
            }
        }
        #endregion
    }


and finally, the generation of the async-aware proxy is done with this svcutil command:

svcutil http://localhost/StudentPerfService/PerfService.svc?wsdl /a /tcv:Version35
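To see the Begin/End pairing outside of WCF, here is a minimal self-contained sketch of the same synchronous-completion pattern. The operation name and greeting are illustrative; in the real service the generated proxy would make these calls over the wire.

```csharp
using System;
using System.Threading;

// Minimal generic IAsyncResult for operations that complete synchronously,
// mirroring the CompletedAsyncResult<T> class shown above.
public class CompletedAsyncResult<T> : IAsyncResult
{
    private readonly T data;
    public CompletedAsyncResult(T data) { this.data = data; }
    public T Data { get { return data; } }
    public object AsyncState { get { return data; } }
    public WaitHandle AsyncWaitHandle { get { return null; } }
    public bool CompletedSynchronously { get { return true; } }
    public bool IsCompleted { get { return true; } }
}

public static class Demo
{
    // Begin does the actual work and returns an already-completed result.
    public static IAsyncResult BeginGetGreeting(string name, AsyncCallback callback, object state)
    {
        var result = new CompletedAsyncResult<string>("Hello, " + name);
        callback?.Invoke(result);
        return result;
    }

    // End unwraps the data captured by Begin.
    public static string EndGetGreeting(IAsyncResult r)
    {
        return ((CompletedAsyncResult<string>)r).Data;
    }

    public static void Main()
    {
        IAsyncResult ar = BeginGetGreeting("WCF", null, null);
        Console.WriteLine(EndGetGreeting(ar));
    }
}
```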

Monday, April 29, 2013

How to Mole DataReader in your Unit Tests

In one of my projects I had to write unit tests around a DataReader object.
We have a middle tier that talks to the database and fetches the records. The middle tier then uses the C# DataReader class to populate the business-layer objects.

This example shows a Moles sample on how to do this.

This is the DB-layer class that makes the call to the stored procedure:

public List<ConfigData> GetConfiguration()
        {
            ConnectionString = connectionstring;
            List<ConfigData> configDataList = new List<ConfigData>();
            using (SqlConnection sqlconnection = GetDbConnection())
            {
                using (SqlCommand sqlCommand = GetDbSprocCommand(DataAccessConstants.LoadConfiguration, sqlconnection))
                {
                    if (sqlCommand != null)
                    {
                        if (sqlconnection.State != System.Data.ConnectionState.Open)
                        {
                            sqlconnection.Open();
                        }

                        SqlDataReader dr = sqlCommand.ExecuteReader();
                        ConfigData configdata;
                        while (dr.Read())
                        {
                            configdata = new ConfigData();
                            configdata.ConfigKeyName = dr[0].ToString();
                            configdata.ConfigKeyValue = dr[1].ToString();
                            configDataList.Add(configdata);
                        }
                        dr.Close();
                    }
                }
            }
            return configDataList;
        }

I now want to mole out the SqlDataReader in the code above. Since SqlCommand is in a native .NET library I cannot stub it out, so I mole out that library by right-clicking in the References and choosing 'Mole it' in the options that come up.
The idea is that I need to unit test the whole while loop above; because the SQL database is mocked, we need to ensure that the test case passes through the while loop.

Listed below is how I moled out the DataReader class:


        [TestMethod]
        [HostType("Moles")] // Moles detours require running under the Moles host
        public void Sanity_Test()
        {

            MSqlConnection.AllInstances.Open = (c) => { };
            MSqlConnection.AllInstances.Close = (c) => { };
         
            int readCount = 0;
            object[] data = { "STRING_NAME1", "y", "STRING_Name2", "n", "String_Name3", "ValuesAre" };
            //Create a delegate the simulates result in the record 
            MSqlCommand.AllInstances.ExecuteReader = (SqlCommand cmd) =>
            {
                MSqlDataReader x = new MSqlDataReader();
                x.Read = () => readCount++ == 0;
                x.ItemGetInt32 = index => data[index];
                return x;
            };

            MSqlDataReader.AllInstances.Read = (a) => { return false; };
            
            //Also create a delegate for the DataReader close 
            MSqlDataReader.AllInstances.Close = (a) => { };

            MySpecialClass rep = new MySpecialClass();
            var result = rep.GetConfigurationValues();
            if (null != result)
            {
                Assert.AreEqual(result[0].ConfigKeyName,"STRING_NAME1");
            }
        }


By creating the object[] array and then mocking the ExecuteReader method as shown above, I am able to do it.
The trick is in the following lines:
MSqlDataReader x = new MSqlDataReader();
x.Read = () => readCount++ == 0;
x.ItemGetInt32 = index => data[index];
return x;


Dot NET unit testing frameworks

Unit testing is defined as testing the smallest unit of work. That means testing your functions independently of the functions they depend on. This kind of testing exercises just one part of the function, independent of its dependencies.
If the module depends on an external source, we need to create a mock, or in other words generate a proxy for that class. This way we test only the function we are interested in.

Unit tests are generally used to gather code-coverage stats, which show how much of the code is covered by test cases: whether all the conditional statements and error-handling paths are exercised.

Unit testing has wide acceptance in the TDD methodology, where unit tests are created before the code is written. The unit tests are executed against the function frequently as the code evolves, either manually or via an automated build process. If the unit tests fail, it is considered a bug either in the changed code or in the tests themselves.

There are a lot of mocking frameworks out there, but essentially two different types: those implemented via dynamic proxies and those implemented via the CLR profiler API.


Proxy based mocking frameworks

Proxy based mocking frameworks use reflection and runtime code generation to dynamically generate classes (proxies) that either implement an interface being mocked or derive from a non-sealed class that's being mocked.
This approach can be used only when good OO principles and a dependency injection container have been utilized in building the system (i.e., loose coupling, high cohesion, and heavy use of interfaces).

A proxy object is an object that takes the place of a real object. In the case of mock objects, a proxy object is used to imitate the real object your code depends on. We can create such a proxy with any of the proxy-based mocking frameworks discussed below, such as Moq or NSubstitute.
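To make the idea concrete, here is a hand-rolled sketch of what such frameworks generate for you at runtime: a fake class implementing the mocked interface. The interface, class, and email address are all illustrative, not from any particular framework.

```csharp
using System;

// The dependency the code under test talks to.
public interface IMailSender
{
    bool Send(string to, string body);
}

// Hand-written proxy/fake: records calls and returns a canned answer,
// which is essentially what a proxy-based framework generates dynamically.
public class FakeMailSender : IMailSender
{
    public int CallCount;
    public bool Send(string to, string body)
    {
        CallCount++;
        return true; // canned result
    }
}

// Code under test, written against the interface (dependency injection).
public class AlertService
{
    private readonly IMailSender sender;
    public AlertService(IMailSender sender) { this.sender = sender; }
    public bool RaiseAlert(string msg) => sender.Send("ops@example.com", msg);
}

public static class Program
{
    public static void Main()
    {
        var fake = new FakeMailSender();
        var svc = new AlertService(fake);
        bool ok = svc.RaiseAlert("disk full");
        // The test can now assert on the canned result and the recorded calls.
        Console.WriteLine($"{ok} {fake.CallCount}");
    }
}
```

This is also why proxy-based frameworks depend on dependency injection: AlertService must accept the interface in its constructor for the fake to be substitutable at all.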

Pros and Cons

Pros

  • Open source and free
  • Can mock both non-sealed classes and interfaces
  • Type safe
  • Expressive and easy-to-learn syntax
  • Easy-to-use
  • Performance/Speed

Cons

  • Heavily relies on dependency injection pattern
  • Cannot mock non-virtual, non-abstract and static methods
  • Cannot mock sealed classes and private classes
  • Backward compatibility
  • Limited technical support and community
  • Limited documentation

Profiler based Mocking frameworks

Profiler based mocking frameworks use the CLR profiler APIs to intercept and redirect calls to any method of any type. This makes them capable of mocking sealed types and system classes, and even of intercepting and diverting calls to non-virtual methods of concrete types.

In general, proxy based frameworks require that your mocks implement an interface. But what do you do when the class you are trying to mock is static, or sealed with no interface? If you can't modify the class, your unit testing efforts are usually stuck. In that case, profiler based mocking frameworks are generally better for developers working with existing legacy code, since it may not be possible to refactor such code into a more testable design.

Moles, discussed below, is one such profiler based mocking framework.

Pros and Cons

Pros

  • Doesn’t rely on dependency injection pattern
  • Can mock anything including
    • Non-virtual, non-abstract and static methods
    • Sealed and private classes
  • Type safe
  • Expressive and easy-to-learn syntax
  • Community & technical support and documentation
  • Backward compatibility

Cons

  • Not open source
  • Performance – profilers use runtime instrumentation under the hood
  • Not easy-to-use when compared with proxy based mocking frameworks

Recommended Mocking Frameworks

Following are the recommended mocking frameworks based on above factors and external links.

Moles Framework

Moles Framework is a lightweight CLR-profiler-based framework for test stubs and detours in .NET that is based on delegates.

This Framework actually supports two different kinds of substitution class – stub types and mole types. These two approaches allow you to create substitute classes for code dependencies under different circumstances.

1. Stub types – Provide a lightweight isolation framework that generates fake stub implementations of virtual methods and interfaces for unit testing. For every stubbed interface and class, code is generated at compile time, i.e., one stub class per interface or class.

2. Mole types - use a powerful detouring framework that uses code profiler APIs to intercept calls to dependency classes and redirects the calls to a fake object. It is used to detour any .NET method, including non-virtual and static methods in sealed types.

Pros and Cons

Pros

  • It's from Microsoft
  • Support legacy code
  • Can mock any type of object that includes from .NET library

Cons

  • Mole types incur a fairly hefty performance impact because they use runtime instrumentation under the hood.
  • The Moles assemblies need to be regenerated each time any method signature in the system changes.

Moq

Moq is the simplest proxy based mocking framework. It takes advantage of .NET 3.5 and C# 3.0 features such as lambda expressions and expression trees, which make it a productive, type-safe and refactoring-friendly mocking library. It supports mocking interfaces as well as classes. Its API is simple and straightforward, and doesn't require any prior knowledge of or experience with mocking concepts.

Pros and Cons

Pros

  • Very much popular
  • Strong technical support and community

Cons

  • Support .NET runtime 3.5 and above only
  • Difficult to mock static classes and sealed types (Code refactoring is required for legacy code)
  • Need to use the "mock.Object" property when passing the mocked interface/class along

NSubstitute

NSubstitute is a friendly, proxy based mocking framework that makes use of extension methods on object for its API. It is designed for Arrange-Act-Assert (AAA) testing: you arrange how the substitute should work, then assert that it received the calls you expected.

Pros and Cons

Pros

  • Simple syntax compared to Moq
  • No need to use lambdas to set return values for a mocked property. Hence improves readability.
  • Better internal error messages compared to Moq

Cons

  • Difficult to mock static classes and sealed types (Code refactoring is required for legacy code)
  • Relatively new framework compared to Moq
  • Minimal technical support and community

I highly recommend reading this article. It has some good insights, like "unit testing isn't about finding bugs" :) Unit Test the right way

Friday, April 19, 2013

WCF REST service hosted on IIS 5

Making your WCF service RESTful is pretty simple in .NET. You have verbs such as GET, POST and PUT, and you label your contract operations with attributes like the ones below.

[WebGet(UriTemplate = "/StartProcessing")]
[WebInvoke(Method = "POST", UriTemplate = "/SyncCustomerDetails", BodyStyle = WebMessageBodyStyle.WrappedRequest)]


 
When I was working on my local development environment, which had IIS 7.5, everything went fine and I was able to access the REST based services.
The problem started when I deployed this to our development lab, which has IIS 5. After it was deployed I could not even access the WCF REST help page (this can be accessed by appending /help after your SVC file in the browser).

The error returned was something like an HTTP 403 error saying bad request, and clicking on the help link below it would also throw this error. Searching the internet did not provide much help.

Only one link I found mentioned that this problem is with IIS 5 and that it does not allow accessing URLs that are extension-less. To solve this, some websites said to allow '*' as a rule; the strange thing is that IIS 5 will not even let you enter such a parameter.

The next suggestion I tried was to add '*' to the wildcard application map extensions for the website / virtual directory. To do this, I had to right-click the website -> Properties -> Virtual Directory tab -> click Configuration.
In the application configuration tab that comes up, press the Insert button, map to the aspnet_isapi DLL, and uncheck "Verify that file exists".

Sadly, even this option did not work out.

The thing that worked for me was this.

My site is configured to run on ASP.NET 4.0, and the problem was that although the Application Pool was configured for ASP.NET 4.0, the account under which it runs did not have access to C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\Temporary ASP.NET Files.
The error that was returned gave no hint of this either: there were no errors in the IIS logs and none in the event viewer, so it really sidetracked the investigation.

I hope someone out there finds this useful and can save some time by trying these steps.

Wednesday, April 3, 2013

How to do Versioning in WCF

Overview

After initial deployment, and potentially several times during their lifetime, services (and the endpoints they expose) may need to change for a variety of reasons, such as changes in data contracts, operations, and endpoint addresses. The following diagram and sections describe how these changes lead to breaking or non-breaking changes and the action to be taken accordingly.




Fig 1. Versioning Decision Tree
The decision tree above shows you what you will need to do to handle every case and it also shows the consequences of each versioning choice you might make. The decisions include changes in the service operations, data contract, and bindings.

Process Flow in Versioning

Adding and Removing Data Members

In most cases, adding or removing a data member is not a breaking change, unless you require strict schema validity (new instances validating against the old schema).
When a type with an extra field is deserialized into a type with a missing field, the extra information is ignored. (It may also be stored for round-tripping purposes; for more information, see Forward-Compatible Data Contracts).
When a type with a missing field is deserialized into a type with an extra field, the extra field is left at its default value, usually zero, or null. (The default value may be changed; for more information, see Version-Tolerant Serialization Callbacks.)
For example, you can use the CarV1 class on a client and the CarV2 class on a service, or you can use the CarV1 class on a service and the CarV2 class on a client:
 // Version 1 of a data contract, on machine V1.
[DataContract(Name = "Car")]
public class CarV1
{
    [DataMember]
    private string Model;
}
 
// Version 2 of the same data contract, on machine V2.
[DataContract(Name = "Car")]
public class CarV2
{
    [DataMember]
    private string Model;
 
    [DataMember]
    private int HorsePower;
}
The version 2 endpoint can successfully send data to the version 1 endpoint. Serializing version 2 of the Car data contract yields XML similar to the following.
<Car>
    <Model>Porsche</Model>
    <HorsePower>300</HorsePower>
</Car>
The deserialization engine on V1 does not find a matching data member for the HorsePower field, and discards that data.
Also, the version 1 endpoint can send data to the version 2 endpoint.
The version 2 deserializer does not know what to set the HorsePower field to, because there is no matching data in the incoming XML. Instead, the field is set to the default value of 0.
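Both directions can be observed directly with DataContractSerializer. The sketch below round-trips a V2 instance into the V1 type (dropping the unknown HorsePower element) and a V1 instance into the V2 type (leaving HorsePower at its default). It uses public fields rather than the private ones above purely to keep the demo short; the field values are illustrative.

```csharp
using System;
using System.IO;
using System.Runtime.Serialization;

[DataContract(Name = "Car")]
public class CarV1
{
    [DataMember] public string Model;
}

[DataContract(Name = "Car")]
public class CarV2
{
    [DataMember] public string Model;
    [DataMember] public int HorsePower;
}

public static class VersionDemo
{
    // Serializes 'graph' with one serializer and deserializes the bytes with
    // another, simulating a V2 endpoint talking to a V1 endpoint (and back).
    static object RoundTrip(object graph, Type from, Type to)
    {
        using (var ms = new MemoryStream())
        {
            new DataContractSerializer(from).WriteObject(ms, graph);
            ms.Position = 0;
            return new DataContractSerializer(to).ReadObject(ms);
        }
    }

    public static void Main()
    {
        // V2 -> V1: the unknown HorsePower element is simply discarded.
        var v2 = new CarV2 { Model = "Porsche", HorsePower = 300 };
        var asV1 = (CarV1)RoundTrip(v2, typeof(CarV2), typeof(CarV1));
        Console.WriteLine(asV1.Model);

        // V1 -> V2: the missing HorsePower member gets its default value, 0.
        var v1 = new CarV1 { Model = "Beetle" };
        var asV2 = (CarV2)RoundTrip(v1, typeof(CarV1), typeof(CarV2));
        Console.WriteLine($"{asV2.Model} {asV2.HorsePower}");
    }
}
```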

Required Data Members

A data member may be marked as being required by setting the “IsRequired” property of the DataMemberAttribute to true. If required data is missing while deserializing, an exception is thrown instead of setting the data member to its default value.
Adding a required data member is a breaking change. That is, the newer type can still be sent to endpoints with the older type, but not the other way around. Removing a data member that was marked as required in any prior version is also a breaking change.
Changing the IsRequired property value from true to false is not breaking, but changing it from false to true may be breaking if any prior versions of the type do not have the data member in question.
Note: Although the IsRequired property is set to true, the incoming data may be null or zero, and a type must be prepared to handle this possibility. Do not use IsRequired as a security mechanism to protect against bad incoming data.
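A small sketch of the required-member behavior (the Order types and values are illustrative): deserializing V1 data, which lacks the member, into a newer type that marks it IsRequired = true fails with a SerializationException instead of silently defaulting.

```csharp
using System;
using System.IO;
using System.Runtime.Serialization;

[DataContract(Name = "Order")]
public class OrderV1
{
    [DataMember] public string Id;
}

[DataContract(Name = "Order")]
public class OrderV2
{
    [DataMember] public string Id;
    [DataMember(IsRequired = true)] public decimal Total;  // new required member
}

public static class RequiredDemo
{
    public static void Main()
    {
        using (var ms = new MemoryStream())
        {
            // Old endpoint sends V1 data (no Total element on the wire).
            new DataContractSerializer(typeof(OrderV1))
                .WriteObject(ms, new OrderV1 { Id = "A42" });
            ms.Position = 0;

            try
            {
                // New endpoint requires Total: deserialization throws.
                new DataContractSerializer(typeof(OrderV2)).ReadObject(ms);
                Console.WriteLine("deserialized without error");
            }
            catch (SerializationException)
            {
                Console.WriteLine("Required member missing");
            }
        }
    }
}
```

This is exactly why adding a required data member is breaking in the old-to-new direction only: the reverse trip (V2 data into the V1 type) still succeeds, with the extra element ignored.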

Best Practice Usage and Guidelines

Best Practice Usage -Operation Change and Versioning

If you are adding a new operation, simply define a new service contract that derives from the existing service contract, add the new operations to it, and then have your service type (the class that implements the existing service contract) implement the new service contract instead of the old one. Update the definition of the service endpoint so that it refers to the new, derived service contract. Existing clients, which only know about the old service contract, can still have the operations they already knew about executed at the original endpoint, while new clients, which know about the enhanced service contract, can have the additional operations executed at the same endpoint.
[ServiceContract]
public interface IMyServiceContract
{
    [OperationContract]
    void MyMethod(MyDataContract input);
}

[ServiceContract]
public interface IMyAugmentedServiceContract : IMyServiceContract
{
    [OperationContract]
    void MyNewMethod(MyOtherDataContract input);
}

public class MyOriginalServiceType : IMyAugmentedServiceContract
If you are deleting an operation or changing a data contract type, you are making a breaking change: define a new service contract (with a new namespace) and expose it at a new endpoint instead.

Best Practice Usage-Data Contract Changes and Versioning

Breaking versus Nonbreaking Changes

Changes to a data contract can be breaking or nonbreaking. When a data contract is changed in a nonbreaking way, an application using the older version of the contract can communicate with an application using the newer version, and an application using the newer version of the contract can communicate with an application using the older version. On the other hand, a breaking change prevents communication in one or both directions.
Any changes to a type that do not affect how it is transmitted and received are nonbreaking. Such changes do not change the data contract, only the underlying type. For example, you can change the name of a field in a nonbreaking way if you then set the Name property of the DataMemberAttribute to the older version name. The following code shows version 1 of a data contract.
  
// Version 1
[DataContract]
public class Person
{
    [DataMember]
    private string Phone;
}


The following code shows a nonbreaking change.
 
// Version 2. This is a non-breaking change because the data contract 
// has not changed, even though the type has.
[DataContract]
public class Person
{
    [DataMember(Name = "Phone")]
    private string Telephone;
}
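The claim can be checked directly: with the Name override in place, the renamed field still serializes as a <Phone> element, so V1 endpoints never see the difference. A minimal sketch (the phone number is illustrative, and the field is public only to keep the demo short):

```csharp
using System;
using System.IO;
using System.Runtime.Serialization;
using System.Text;

[DataContract]
public class Person
{
    // CLR field renamed, but the wire name is pinned to the V1 contract.
    [DataMember(Name = "Phone")]
    public string Telephone;
}

public static class RenameDemo
{
    public static void Main()
    {
        var ser = new DataContractSerializer(typeof(Person));
        using (var ms = new MemoryStream())
        {
            ser.WriteObject(ms, new Person { Telephone = "555-0100" });
            string xml = Encoding.UTF8.GetString(ms.ToArray());
            // The element is still named Phone, so the data contract is unchanged.
            Console.WriteLine(xml.Contains("<Phone>") ? "wire name: Phone" : "wire name changed");
        }
    }
}
```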
Some changes do modify the transmitted data, but may or may not be breaking. The following changes are always breaking:
  • Changing the Name or Namespace value of a data contract.
  • Changing the order of data members by using the Order property of the DataMemberAttribute.
  • Renaming a data member.
  • Changing the data contract of a data member. For example, changing the type of data member from an integer to a string, or from a type with a data contract named "Customer" to a type with a data contract named "Person".

Schema Considerations

The schema WCF produces for data contract types makes no provisions for versioning. That is, the schema exported from a certain version of a type contains only those data members present in that version. Implementing the IExtensibleDataObject interface does not change the schema for a type.
Data members are exported to the schema as optional elements by default. That is, the minOccurs (XML attribute) value is set to zero. Required data members are exported with minOccurs set to 1.
Many of the changes considered to be nonbreaking are actually breaking if strict adherence to the schema is required. In the preceding example, a CarV1 instance with just the Model element would validate against the CarV2 schema (which has both Model and Horsepower, but both are optional). However, the reverse is not true: a CarV2 instance would fail validation against the CarV1 schema.
Round-tripping also entails some additional considerations. For more information, see the "Schema Considerations" section in Forward-Compatible Data Contracts.

Guidelines

Enumerations

Adding or removing an enumeration member is a breaking change. Changing the name of an enumeration member is breaking, unless its contract name is kept the same as in the old version by using the EnumMemberAttribute attribute. For more information, see Enumeration Types in Data Contracts.

Collections

Most collection changes are nonbreaking because most collection types are interchangeable with each other in the data contract model. However, making a noncustomized collection customized or vice versa is a breaking change. Also, changing the collection's customization settings is a breaking change; that is, changing its data contract name and namespace, repeating element name, key element name, and value element name. For more information about collection customization, see Collection Types in Data Contracts.

Other Permitted Changes

Implementing the IExtensibleDataObject interface is a nonbreaking change. However, round-tripping support does not exist for versions of the type prior to the version in which IExtensibleDataObject was implemented. For more information, see Forward-Compatible Data Contracts.