Thursday, 31 October 2013

User authentication on Android against hashed passwords created with Asp.Net Identity

Microsoft shipped a new membership system called ASP.NET Identity with Visual Studio 2013 and .NET 4.5.1. It allows us to add login features to applications built for the web, phone, or store.
By default the ASP.NET Identity system stores all the user information in a database and uses Entity Framework Code First to implement its persistence mechanism.

What if we are going to build an Android app with offline capabilities, where users sync the credentials created by ASP.NET Identity on the server so that they can log in locally? In that case we have to verify the password the user provides against the locally stored hashed password in order to authenticate the user.
Below is the code that can be used for this task:

import android.util.Base64;

import java.security.GeneralSecurityException;
import java.security.spec.KeySpec;
import java.util.Arrays;

import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

/**
 * Verifies the provided plain-text password against an ASP.NET Identity hash.
 * @param hashedPass Base64-encoded password hash stored by ASP.NET Identity
 * @param password   plain-text password entered by the user
 * @return true if the password matches the stored hash, false otherwise
 */
private boolean verifyPassword(String hashedPass, String password) {
    if (hashedPass == null || password == null) {
        return false;
    }

    byte[] numArray = Base64.decode(hashedPass, Base64.DEFAULT);

    // An ASP.NET Identity hash is 49 bytes: a 0x00 format marker,
    // a 16-byte salt and a 32-byte PBKDF2 subkey.
    if (numArray.length != 49 || numArray[0] != 0) {
        return false;
    }

    byte[] salt = new byte[16];
    System.arraycopy(numArray, 1, salt, 0, 16);

    byte[] storedSubkey = new byte[32];
    System.arraycopy(numArray, 17, storedSubkey, 0, 32);

    byte[] generatedSubkey = generateHash(password, salt);

    return generatedSubkey != null && Arrays.equals(storedSubkey, generatedSubkey);
}

/**
 * Generates a 256-bit hash using the PBKDF2WithHmacSHA1 algorithm
 * (1000 iterations, matching the ASP.NET Identity defaults).
 * @param password plain-text password
 * @param salt     16-byte salt extracted from the stored hash
 * @return hash as a byte array, or null if the hash could not be generated
 */
private byte[] generateHash(String password, byte[] salt) {
    try {
        SecretKeyFactory secretKeyFactory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
        KeySpec keySpec = new PBEKeySpec(password.toCharArray(), salt, 1000, 256);
        return secretKeyFactory.generateSecret(keySpec).getEncoded();
    } catch (GeneralSecurityException e) {
        e.printStackTrace();
        return null;
    }
}

This can then be called from a login activity:

boolean result = verifyPassword("AL09HpS8X96sH4rQjeBqrZJ4Daw+Fr4yFdLjNRLNlFZsrjxsvoRGTlICO0wnBg5N7Q==", "mypass");
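
If the app also needs to create password hashes locally (for example for users registered while offline), the same 49-byte layout can be produced on the device. Below is a minimal sketch of such a helper; the hashPassword method is my own addition rather than part of any ASP.NET Identity API, and it simply mirrors the format checks performed by verifyPassword above.

// Hypothetical counterpart to verifyPassword: produces a hash in the same
// 49-byte layout (0x00 marker + 16-byte salt + 32-byte PBKDF2 subkey).
private String hashPassword(String password) {
    byte[] salt = new byte[16];
    new java.security.SecureRandom().nextBytes(salt); // random 128-bit salt

    byte[] subkey = generateHash(password, salt);     // 32-byte PBKDF2 subkey
    if (subkey == null) {
        return null;
    }

    byte[] outputBytes = new byte[49];
    outputBytes[0] = 0;                               // format marker
    System.arraycopy(salt, 0, outputBytes, 1, 16);
    System.arraycopy(subkey, 0, outputBytes, 17, 32);

    return Base64.encodeToString(outputBytes, Base64.NO_WRAP);
}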

Tuesday, 27 August 2013

Synchronization algorithm for exchanging data in the “Client – Server” model via REST API

Many mobile applications that operate in the client – server model need to sync data with a server in order to exchange data with the central repository.

If the server serves up resources through a REST API, then all sync logic can be handled on the client side.
The sync logic described below handles bi-directional sync between a central server and multiple clients, where only incremental changes are applied on both sides, with some conflict detection.

Each table (both server and client) participating in the sync process should include two additional fields: “Ts” as Number, “Deleted” as Boolean.

The Ts field is maintained by the server side. For example, SQL Server can automatically generate a unique incremental value on each insert or update. The Ts field is used to determine whether one record in the table was modified more recently than another, so that only the incremental changes are downloaded. It also helps to identify new records created on the client, as they will have no Ts value yet.
Please note: Ts in this algorithm is not an instant in time, typically expressed as a date and time of day. Rather, Ts should be considered an Update Sequence Number, i.e. a number assigned to a data element that identifies the order in which the element was last modified. The Ts values within a table start at “1” and then increase monotonically every time a record is created, modified, or deleted.

The Deleted field is used to mark records for deletion so that clients remove such records from their local repositories, whereas clients performing an initial sync simply skip them during download.
Additionally, the client data storage should include a “Dirty” field as Boolean.
The Dirty field designates the records changed locally. Once those records are sent successfully to the server, the Dirty flag is reset.
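
As an illustration, here is what one sync table might look like on an Android client using SQLite. This is only a sketch: the Items table and its Name column are hypothetical examples, while Ts, Deleted and Dirty are the sync fields described above.

import android.content.Context;
import android.database.sqlite.SQLiteDatabase;
import android.database.sqlite.SQLiteOpenHelper;

public class SyncDbHelper extends SQLiteOpenHelper {

    public SyncDbHelper(Context context) {
        super(context, "sync.db", null, 1);
    }

    @Override
    public void onCreate(SQLiteDatabase db) {
        db.execSQL("CREATE TABLE Items ("
                + "Id TEXT PRIMARY KEY, "       // id shared between client and server
                + "Name TEXT, "                 // example business column
                + "Ts INTEGER, "                // update sequence number assigned by the server
                + "Deleted INTEGER DEFAULT 0, " // soft-delete flag
                + "Dirty INTEGER DEFAULT 0)");  // set when the record is changed locally
    }

    @Override
    public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
        // schema migrations are out of scope for this sketch
    }
}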

The sync process is initiated by the client either manually (pressing some Sync button), periodically, or by a combination of push notifications about server changes and client change triggers.
The sync process first downloads the server changes and then uploads the client changes.
The sync procedure processes the table changes sequentially and in the correct order (where parent–child relationships exist) to avoid referential integrity conflicts.

Below is the sync algorithm:
  1. Initialize the list of tables required to sync
  2. /******* Download server changes **********/
    For each sync table:
  3. Get the latest Ts, e.g. using the sqlite query “select ifnull(max(ts), 0) from Table”
  4. Call the relevant RESTful service method to get server changes as collection for the table based on the provided latest timestamp (Ts)
  5. /* Insert or update the received changes into local storage */
  6. Iterate through each record in the server changes collection
    • Check if the server record with the same id already exists in the local storage
      • If so then check if the existing local record is dirty
        • If so then call some conflict resolution function
        • Reset dirty flag
      • Else replace the local record with the server record and reset the dirty flag
    • If the local storage doesn’t have the server record, then create it
    /******* Upload client changes **********/
    For each sync table:
  7. Get the list of local storage changes for inserts on the server
  8. Iterate through each record in the local changes for inserts
    • Insert the record on the server (using POST)
    • Get the Ts for the inserted record from the response
    • Update the record in the local storage with new Ts
    • Reset the dirty flag
  9. Get the list of local storage changes for updates on the server
  10. Iterate through each record in the local changes for updates
    • Update the record on the server (using PUT)
    • Get the updated Ts for the updated record from the response
    • Update the record in the local storage with new Ts
    • Reset the dirty flag
    /******* Post sync operation ************/
    For each sync table:
  11. Delete records where Deleted=true and Dirty=false

For conflict resolution, the easiest approach is to update the records with the server version (the server always wins) and show a conflict log at the end of the sync.
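
Putting the steps above together, here is a rough Java sketch of the client-side sync loop, in the spirit of the Android examples elsewhere on this blog. The RestClient, LocalStore and Record types are hypothetical placeholders for the REST layer and the SQLite access code, not something defined in this post, and all method names are illustrative.

import java.util.List;

interface RestClient {
    List<Record> getChanges(String table, long sinceTs); // GET changes with Ts > sinceTs
    long insert(String table, Record r);                  // POST, returns the new Ts
    long update(String table, Record r);                  // PUT, returns the updated Ts
}

interface LocalStore {
    long getLatestTs(String table);               // select ifnull(max(ts), 0) from <table>
    Record findById(String table, String id);
    void insert(String table, Record r);
    void update(String table, Record r);
    void resetDirty(String table, String id);
    List<Record> getDirtyWithoutTs(String table); // new local records (no Ts yet)
    List<Record> getDirtyWithTs(String table);    // locally modified records
    void setTsAndResetDirty(String table, String id, long ts);
    void purgeDeleted(String table);              // delete where Deleted = 1 and Dirty = 0
}

interface Record {
    String getId();
    boolean isDirty();
}

public class SyncEngine {

    private final RestClient rest;
    private final LocalStore store;

    public SyncEngine(RestClient rest, LocalStore store) {
        this.rest = rest;
        this.store = store;
    }

    public void sync(List<String> tables) {
        // Steps 2-6: download server changes
        for (String table : tables) {
            long latestTs = store.getLatestTs(table);
            for (Record serverRecord : rest.getChanges(table, latestTs)) {
                Record local = store.findById(table, serverRecord.getId());
                if (local == null) {
                    store.insert(table, serverRecord);           // new server record
                } else if (local.isDirty()) {
                    resolveConflict(table, local, serverRecord); // changed on both sides
                    store.resetDirty(table, local.getId());
                } else {
                    store.update(table, serverRecord);           // plain server update
                }
            }
        }

        // Steps 7-10: upload client changes
        for (String table : tables) {
            for (Record created : store.getDirtyWithoutTs(table)) {
                long newTs = rest.insert(table, created);
                store.setTsAndResetDirty(table, created.getId(), newTs);
            }
            for (Record changed : store.getDirtyWithTs(table)) {
                long newTs = rest.update(table, changed);
                store.setTsAndResetDirty(table, changed.getId(), newTs);
            }
        }

        // Step 11: post-sync clean-up
        for (String table : tables) {
            store.purgeDeleted(table);
        }
    }

    private void resolveConflict(String table, Record local, Record server) {
        // Simplest policy from the post: the server always wins; the conflict
        // would also be written to a log shown to the user after the sync.
        store.update(table, server);
    }
}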

Monday, 22 July 2013

Managing hiring process in tech-sector


Recently we used a five-stage hiring process to fill two openings for Senior .Net Developers. The process was based on Jeff Atwood’s article "How to hire a programmer", aiming to minimize the time wasted on unqualified candidates and find a good fit.

The idea was to dedicate our time only to candidates who passed an online technical test. We didn’t bother studying the received CVs, as the recruitment agencies had already filtered applicants to match our job specs (well, maybe just a short glance to ensure the candidate claimed the relevant experience).

Each candidate had to go through five stages listed below:
  • Online technical test (15-30 minutes) – we used Interview Zen where we created the test with three programming questions (SQL, C# collections, JavaScript).
  • Portfolio check - successful candidates had to share some code they wrote (open-source / personal projects, blog posts, etc.)
  • Phone interview (20-30 minutes) – a few programming related questions such as: What is SOLID? When do you know your code is ready for production? Etc.
  • Audition project (a few hours to complete) – we asked candidates whether they were willing to solve a real business problem in their spare time.
  • Face-to-face interview (1 - 1.5 hours) – a final interview to confirm the candidate is the right person.
This process worked quite well for us and we hired interesting and skilled people. The problem was in preparing and managing the hiring process. We had to use some system (a SharePoint website) to store and update the list of candidate details, interview results, and team feedback. Searching for the right test questions and preparing the audition project was not easy, and we had to manually send email notifications to candidates at every stage to let them know whether they had succeeded or been rejected.
We could definitely have saved time and been more efficient with a system to organize and manage the hiring process. After some research I came to the conclusion that we needed a web-based hiring management system that would help manage the entire hiring process in one place.

So, let me introduce ManageHiring.com – a cloud-based Hiring Management System that will help manage the hiring process in the tech sector. The system will allow you to:
  • Manage job advertisements
  • Store candidate details and interview history in systematic order
  • Track all candidate's activities
  • Keep Interview configurations for the future
  • Customize the Hiring Process
  • Create on-line technical tests and audition projects
  • Prepare questionnaires for interviews
  • Manage interview stages
  • Remind about scheduled interviews
  • Automatically send email alerts and notifications
  • Collaborate on candidate performance with team members
I have set up a landing page here where you can find a more detailed description of the features with some screenshots. Please subscribe to the mailing list if you think the system would be useful, or leave your comments!

Thursday, 16 May 2013

Send emails with SQL Server using Html files


Sending emails from MS SQL Server using Database Mail is easy. Once Database Mail is active and configured (see this blog for a quick configuration), emails can be sent using the SQL script below:
USE msdb
GO
EXEC sp_send_dbmail @profile_name='EmailProfile',
@recipients='test@example.com',
@subject='Test message',
@body='This is a test message.'
In cases where emails need to be sent in HTML format, creating the content inside a SQL script is not so convenient.
With the help of the OPENROWSET function we can read the content of an HTML file which contains properly formatted and tested HTML.

For example, below is an HTML file that we can easily open in a browser to validate that it displays exactly the way we want. It contains the html and body tags which wrap the content that will be inserted into the email’s body.

<html><body>
Test Email
This is a test message
</body></html>
Below is the SQL script which sends the email by reading the designated HTML file. The script removes the leading and trailing html and body tags and assigns the HTML content to the email’s body.
DECLARE @SmtpProfile VARCHAR(50) = 'EmailProfile';
DECLARE @html_file NVARCHAR(100) = 'c:\test-email.html';
DECLARE @Subject NVARCHAR(255) = 'Test Message';
DECLARE @Recipients VARCHAR(255) = 'test@example.com';

DECLARE @Body NVARCHAR(MAX);
DECLARE @ParmDefinition NVARCHAR(MAX) = N'@Body NVARCHAR(MAX) OUTPUT';
DECLARE @sql NVARCHAR(MAX) = 'SET @Body = (SELECT REPLACE(REPLACE(f.BulkColumn, ''<html><body>'', ''''), ''</body></html>'', '''') FROM OPENROWSET ( BULK ''' + @html_file + ''', SINGLE_NCLOB ) f)';

EXEC sp_executesql @sql, @ParmDefinition, @Body OUTPUT;

EXEC msdb..sp_send_dbmail @profile_name=@SmtpProfile,
                          @recipients=@Recipients,
                          @subject=@Subject,
                          @body_format='HTML',
                          @body=@Body;

Please note that the OPENROWSET function in the example above takes the SINGLE_NCLOB flag, which reads the file as nvarchar for Unicode support. For varchar use SINGLE_CLOB instead.

Pretty straightforward, isn’t it?

Tuesday, 14 May 2013

Create Google Charts in Asp.Net MVC by generating JSON data on the SQL SERVER


The purpose of this article is to share an implementation of Google Charts with data retrieved from SQL Server, where the JSON is generated on the SQL Server itself.

Google Chart Tools give web developers a rapid way of creating charts from data. All charts are populated with data stored in a DataTable object, and the DataTable object can be filled with data using the JSON format.
There are several different ways of creating and populating a DataTable, each with its own advantages and disadvantages (more about it here).
The application I needed to add Google Charts to was an Asp.Net MVC application using nHibernate with MS SQL Server.
The idea is to create all the required data on the SQL Server using stored procedures to gain performance benefits, minimize the effort of creating JSON for each chart, and follow the DRY principle.
Once the required SQL stored procedure is created, we convert the result table data into JSON and pass it into our DataTable constructor.

Let’s say we need to create a chart showing the number of items by location.
Our stored procedure will look like the one below: we build the string of columns, populate the temp table “#chart_data” with data, and call another stored procedure to convert the content of the temp table into JSON.
create procedure [dbo].[GetItemsByLocationAsJson]
as
begin
set nocount on;
/* Set your columns data here */
/* The result string should look like the one below */
/* "cols": [{"id":"","label":"Topping","type":"string"},{"id":"","label":"Slices","type":"number"}] */
declare @columns_list varchar(max) = '"cols":[' +
'{"id":"Alocation","label":"Location","type":"string"}' +
',{"id":"ItemCount","label":"Items Count","type":"number"}' +
']';

select * into #chart_data from ( 
 select 'Location 1' as Alocation, 5 as ItemCount
 union all
 select 'Location 2' as Alocation, 10 as ItemCount
 union all
 select 'Location 3' as Alocation, 15 as ItemCount
 union all
 select 'Location 4' as Alocation, 20 as ItemCount
 ) as t

exec dbo.ConvertTableToGoogleChartJson @columns_list;

drop table #chart_data;  
 
end
If we run this stored procedure we’ll get the following result:
{
  "cols":[{"id":"Alocation","label":"Location","type":"string"},{"id":"ItemCount","label":"Items Count","type":"number"}],
  "rows":[{"c":[{"v":"Location 1"},{"v":5}]} ,{"c":[{"v":"Location 2"},{"v":10}]} ,{"c":[{"v":"Location 3"},{"v":15}]} ,{"c":[{"v":"Location 4"},{"v":20}]}]
}
The JSON data is generated by sorting the column names in ascending order.
Note that the order of columns and rows should be in sync, i.e. the generated JSON data in the rows should relate to the correct columns.
The correct order between columns and rows can be achieved by giving the columns names that sort in the desired order when populating the #chart_data table. In our example we name the location field “Alocation” instead of “Location” to make it come first, which ensures the data for “Alocation” and “ItemCount” is generated in the correct order.

Below is the code for the ConvertTableToGoogleChartJson stored procedure.
This generic procedure generates the appropriate JSON based on the data in the #chart_data table and prepends the column list passed as a parameter. Most of the code is based on an answer from StackOverflow (thanks to pyrospade) and uses metaprogramming (dynamic SQL) to boost performance.
create procedure [dbo].[ConvertTableToGoogleChartJson] 
 @columns_list varchar(max)
as
begin
set nocount on;

    declare @columns table (
    id int identity primary key,
    name sysname,
    datatype sysname,
    is_number bit,
    is_date bit);

    insert into @columns(name, datatype, is_number, is_date)
 select columns.name, types.name,
     case when number_types.name is not NULL
    then 1 else 0
     end as is_number,
     case when date_types.name is not NULL
    then 1 else 0
     end as is_date
 from tempdb.sys.columns
 join tempdb.sys.types
  on (columns.system_type_id = types.system_type_id)
 left join (values ('int'), ('real'), ('numeric'),
       ('decimal'), ('bigint'), ('tinyint')) as number_types(name)
  on (types.name = number_types.name)
 left join (values ('date'), ('datetime'), ('datetime2'),
       ('smalldatetime'), ('time'), ('datetimeoffset')) as date_types(name)
  on (types.name = date_types.name)
 where object_id = OBJECT_ID('tempdb..#chart_data') and types.name != 'sysname';

 declare @row_list varchar(max) = STUFF((select '+'',''+''{"v":''+' + 
  case when is_number = 1 then 'COALESCE(LTRIM(' + QUOTENAME(name) + '),''null'')'
    when is_date = 1 then 'COALESCE(QUOTENAME(LTRIM(convert(varchar(max),' + QUOTENAME(name) + ', 126)),''"''),''null'')'
    else 'COALESCE(QUOTENAME(' + QUOTENAME(name) + ',''"''),''null'')'
    end + '+''}'''
  from @columns order by name  for xml path('')), 1, 5, '');

     create table #json_result (id int identity primary key, line varchar(max));

     declare @sql varchar(max) = REPLACE('insert into #json_result '  + 'select '',{"c":[''+{f}+'']}''' + 'from #chart_data', '{f}', @row_list);

     exec(@sql);

     update #json_result set line = STUFF(line, 1, 1, '') where id = 1;

     declare @row_json Varchar(max) = (select line + '' as 'data()' from #json_result for xml path(''));

     select ('{' + @columns_list + ',"rows":[' + @row_json + ']}') as Result;

     drop table #json_result;
end
The point here is that the ConvertTableToGoogleChartJson stored procedure can be reused by other stored procedures to speed up the development of Google Charts.

Here is the sample of the controller calling our stored procedure and returning the JSON result:
public ActionResult GetItemsByLocationData()
{
    var jsonText = this.dbService.GetItemsByLocationData();
    return this.Content(jsonText, "application/json");
}
Our view can use a jQuery ajax call to pass the JSON into the DataTable constructor:
function drawPieChart() {            
    var jsonData = $.ajax({
        url: 'MyController/GetItemsByLocationData',
        dataType: 'json',
        async: false
        }).responseText;
            
    var data = new google.visualization.DataTable(jsonData);
            
    // Set chart options      
    var options = {title: 'Items by Location'};
    // Instantiate and draw our chart, passing in some options.      
    var chart = new google.visualization.PieChart(document.getElementById('pie_chart_div'));            
    chart.draw(data, options);
}

That’s it. Happy coding!

Friday, 19 April 2013

Managing configuration differences between environments in Visual Studio 2010/2012

Letting every developer on a team keep their own custom config file can be achieved using the Visual Studio built-in configuration transformation feature (here is more about configuration transformation).
By default the configuration transformation is applied during the project package/publish process. To enable the transformation on project build we can follow the steps described by Tim in his answer.
Additionally there is an extension tool, SlowCheetah - XMLTransforms, which can be added to the project as a NuGet package. Once it is installed, the Preview Transform context menu becomes available on the transform files, allowing you to preview the web.config after transformation. Really cool!

Below is a summary of how to enable web configuration transformation on build for a Web Application Project in VS2012:

  • Create a new XML file with the following content in your project and name it "[YourProjectName].wpp.targets". This file will be automatically imported into the project file.

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <UsingTask TaskName="TransformXml" AssemblyFile="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\v11.0\Web\Microsoft.Web.Publishing.Tasks.dll"/>
    <Target Name="CustomTarget" BeforeTargets="BeforeBuild">
        <Message Text="Transforming: Web.$(Configuration).config" Importance="high" />
        <TransformXml Source="Web.base.config"
                      Transform="Web.$(Configuration).config"
                      Destination="Web.config" />
    </Target>
  <!-- Exclude the config template files from the created package -->
  <Target Name="ExcludeCustomConfigTransformFiles" BeforeTargets="ExcludeFilesFromPackage">
    <ItemGroup>
      <ExcludeFromPackageFiles Include="web.base.config"/>
    </ItemGroup>
    <Message Text="ExcludeFromPackageFiles: @(ExcludeFromPackageFiles)" Importance="high"/>
  </Target>
</Project>


  • Create the web.base.config file based on the original web.config, and remove the original web.config from the source repository (see the provided link above). This prevents the local web.config from being included in commits by the source control system.
  • Create the required solution configurations for the different environments using Visual Studio’s Configuration Manager. In my case: Production, Staging, Developer1, Developer2.
  • Add transformation actions to each transform file created for the corresponding solution configuration.

Now everything is in one place and the different configurations can be managed without hassle, both for development and for publishing needs.

Friday, 5 April 2013

This is my new blog now



Here is my new blog. The S4SME blog will be taken down, as I have finished my experiment with the S4SME.com website and see no need to keep it running. All of those blog posts will be moved to this new blog.