Saturday, 7 March 2015

Inspecting HTTP traffic using OWASP Zed Attack Proxy tool


When I develop or debug a mobile application for Android or iOS with networking functionality, I often need to inspect its HTTP traffic. Even in web development there can be a need to intercept and modify HTTP requests.

There are a number of tools available to intercept HTTP/HTTPS traffic. One of the most popular and best documented is probably Fiddler. That's what I used on the Windows platform.
If you are on a Mac, you have to run Fiddler in a virtual machine or use an alternative tool.

This post is about the second option, specifically the OWASP Zed Attack Proxy (ZAP): a free, open source, easy-to-install-and-use penetration testing tool for finding vulnerabilities in web applications.

This tool provides a lot of functionality, but here I am only going to cover how to configure and use it as an intercepting proxy on a Mac. I also include the steps to configure Android and iPhone devices in order to intercept their HTTP traffic. The ZAP documentation already explains how to configure your web browser's proxy, so I am not going to repeat that.


First, download and install ZAP, then open it. We have to generate the SSL certificate which we are going to install on our mobile device (or web browser).
Go to Tools -> Options -> Dynamic SSL Certificates. Press the "Generate" button and then press "Save" to save the generated cert file to your hard drive.

We also need to set the address and port on which ZAP will accept incoming connections. Go to Tools -> Options -> Local Proxy.
Set the address to the IP address of your Mac and the port to, let's say, 8080.

ZAP is configured now.


In order to use an Android device with ZAP, we have to install the generated SSL certificate (see the previous step) on the device. I assume we are using Android v4.x+.
  • Copy the cert to your device (there is an Android File Transfer app available for Mac).
  • Go to Settings > Security > Install from device storage, and select the copied cert (installing the cert requires that a password/PIN screen lock be enabled).

Now enable the proxy for your WiFi connection and enter the same IP and port on which ZAP is listening (i.e. ZAP's Local Proxy settings).
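If you want to route requests through ZAP from code rather than via the device-wide WiFi settings, a Java HttpURLConnection can be pointed at the proxy explicitly. This is just an illustrative sketch: the class name, host, and port below are my own placeholders, and the host/port should match your Mac's IP address and ZAP's Local Proxy settings.

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.net.URL;

public class ZapProxyExample {

    // Opens a connection to targetUrl routed through the ZAP proxy.
    // zapHost/zapPort are placeholders - use your Mac's IP address
    // and the port configured in ZAP's Local Proxy options (e.g. 8080).
    static HttpURLConnection openViaZap(String targetUrl, String zapHost, int zapPort)
            throws IOException {
        Proxy zap = new Proxy(Proxy.Type.HTTP, new InetSocketAddress(zapHost, zapPort));
        // No network traffic happens until connect()/getInputStream() is called.
        return (HttpURLConnection) new URL(targetUrl).openConnection(zap);
    }
}
```

Any request made through the returned connection will then show up in ZAP's History tab.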


Using an iPhone / iPad device requires installing the SSL certificate as well.
  • Install Apple Configurator on your Mac from the Mac App Store and connect your iPhone.
  • In the Apple Configurator app open Prepare/Settings and press Install Profiles.

  • On the "Choose or create a profile" screen press New… and enter the name of the profile in the General tab.

  • Then go to Certificates and add the generated cert.

  • Press Save, then select the newly created profile and install it on the device by following the wizard.

Then enable the proxy for your WiFi connection, similar to what we did for Android.

Now you should be able to see all of the requests a mobile or web app makes and all of the responses it receives.

You can also set breakpoints, which allow you to change requests and responses on the fly.

That's it!


Sunday, 26 January 2014

Inspeko - new app for inspectors


Today marks the first release of my app "Inspeko", a cloud-based, mobile-ready inspection system.

This is the version 0.1 release, which contains a minimum set of features and runs only on up-to-date modern browsers.

The system is designed for inspectors across various industries who need to collect information through visual observation during walk-down inspections.

Inspeko will empower inspectors to do their inspections efficiently and effectively while saving the time and effort of capturing the required information.
Every inspection is different and requires different data input. Classically, most inspection results are written into some pre-designed paper form and later communicated to the interested parties.

Inspeko allows creating custom inspection forms that match those ready-made paper forms. Inspectors will be able to carry out most inspection types, e.g. property inspections, room inspections, safety inspections, vehicle inspections, road inspections, etc.

Inspeko can be accessed from any web-enabled device and supports any screen size, from phones to desktop displays. The form designer is not yet available, but a number of pre-built forms can be used to create inspections.

More information is available here on the supporting website.


Thursday, 31 October 2013

User authentication on Android against hashed passwords created with Asp.Net Identity

Microsoft shipped a new membership system called ASP.NET Identity with Visual Studio 2013 and .NET 4.5.1. It allows us to add login features to applications built for the web, phone, or store.
By default the ASP.NET Identity system stores all user information in a database, using Entity Framework Code First to implement its persistence mechanism.

What if we are building an Android app with offline capabilities, where users sync the stored credentials created by ASP.NET Identity on the server in order to log in? We would have to verify the provided password against the locally stored hashed password to authenticate the user.
Below is the code that can be used for this task:

import android.util.Base64;
import java.security.NoSuchAlgorithmException;
import java.security.spec.InvalidKeySpecException;
import java.security.spec.KeySpec;
import java.util.Arrays;
import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

/**
 * Verifies the provided plain text password against a hash
 * @param hashedPass Base64-encoded hash created by ASP.NET Identity
 * @param password plain text password to verify
 * @return true if the password matches, false otherwise
 */
private boolean verifyPassword(String hashedPass, String password) {
    if (hashedPass == null) {
        return false;
    }
    byte[] numArray = Base64.decode(hashedPass.getBytes(), Base64.DEFAULT);
    // Expected format: 1 marker byte (0x00) + 16-byte salt + 32-byte subkey = 49 bytes
    if (numArray.length != 49 || (int) numArray[0] != 0) {
        return false;
    }
    byte[] salt = new byte[16];
    System.arraycopy(numArray, 1, salt, 0, 16);
    byte[] a = new byte[32];
    System.arraycopy(numArray, 17, a, 0, 32);
    byte[] b = generateHash(password, salt);
    return Arrays.equals(a, b);
}

/**
 * Generates a hash using the PBKDF2WithHmacSHA1 algorithm
 * (1000 iterations, 256-bit key, as used by ASP.NET Identity)
 * @param password plain text password
 * @param salt 16-byte salt
 * @return hash as byte array, or null if hashing fails
 */
private byte[] generateHash(String password, byte[] salt) {
    try {
        SecretKeyFactory secretKeyFactory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
        KeySpec keySpec = new PBEKeySpec(password.toCharArray(), salt, 1000, 256);
        SecretKey secretKey = secretKeyFactory.generateSecret(keySpec);
        return secretKey.getEncoded();
    } catch (NoSuchAlgorithmException e) {
        return null;
    } catch (InvalidKeySpecException e) {
        return null;
    }
}

This can then be called from a login activity:

boolean result = verifyPassword("AL09HpS8X96sH4rQjeBqrZJ4Daw+Fr4yFdLjNRLNlFZsrjxsvoRGTlICO0wnBg5N7Q==", "mypass");

Tuesday, 27 August 2013

Synchronization algorithm for exchanging data in the “Client – Server” model via REST API

Many mobile applications that operate in the client–server model need to sync data with a server in order to exchange data with the central repository.

If the server serves up resources through a REST API, then all sync logic can be handled on the client side.
The sync logic handles bi-directional sync between a central server and multiple clients, where only incremental changes are applied on both sides, with some conflict detection.

Each table (on both server and client) participating in the sync process should include two additional fields: "Ts" as a number and "Deleted" as a boolean.

The Ts field is maintained by the server side; for example, SQL Server can automatically generate a unique incremental value on each insert or update. The Ts field is used to determine whether one record in the table was modified more recently than another, so that only incremental changes are downloaded. It also helps identify new records created on the client, as they will have no Ts value.
Please note: Ts in this algorithm is not an instant in time, typically expressed as a date and time of day. Rather, Ts should be thought of as an Update Sequence Number, i.e. a number that identifies the order in which a data element was last modified. The Ts values within a table start at 1 and increase monotonically every time a record is created, modified, or deleted.

The Deleted field is used to mark records for deletion, so that clients remove such records from their local repositories, whereas clients performing an initial sync simply skip them during download.
Additionally, the client data storage should include a "Dirty" field as a boolean.
The Dirty field designates records changed locally. Once those records are sent successfully to the server, the Dirty flag is reset.

The sync process is initiated by the client, either manually (pressing a Sync button), periodically, or by a combination of push notifications about server changes and client change triggers.
The sync process first downloads the server changes, then uploads the client changes.
The sync procedure processes the table changes sequentially and in the correct order (where parent-child relationships exist) to avoid referential integrity conflicts.

Below is the sync algorithm:
  1. Initialize the list of tables required to sync
  2. /******* Download server changes **********/
    For each sync table:
  3. Get the latest Ts, e.g. using the SQLite query "select ifnull(max(ts), 0) from Table"
  4. Call the relevant RESTful service method to get the server changes for the table as a collection, based on the provided latest Ts
  5. /* Insert or update the received changes into local storage */
  6. Iterate through each record in the server changes collection
    • Check if the server record with the same id already exists in the local storage
      • If so then check if the existing local record is dirty
        • If so then call some conflict resolution function
        • Reset dirty flag
      • Else replace the local record with the server record and reset the dirty flag
    • If the local storage doesn’t have the server record, then create it
    /******* Upload client changes **********/
    For each sync table:
  7. Get the list of local storage changes for inserts on the server
  8. Iterate through each record in the local changes for inserts
    • Insert the record on the server (using POST)
    • Get the Ts for the inserted record from the response
    • Update the record in the local storage with new Ts
    • Reset the dirty flag
  9. Get the list of local storage changes for updates on the server
  10. Iterate through each record in the local changes for updates
    • Update the record on the server (using PUT)
    • Get the updated Ts for the updated record from the response
    • Update the record in the local storage with new Ts
    • Reset the dirty flag
    /******* Post sync operation ************/
    For each sync table:
  11. Delete records where Deleted=true and Dirty=false
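To make the download phase concrete, the merge logic of step 6 can be sketched in Java along the following lines. Record and LocalStore are simplified stand-ins of my own (not a real data layer), and the conflict resolution shown is the simple "server wins" strategy:

```java
import java.util.*;

// Simplified record carrying the sync bookkeeping fields described above.
class Record {
    String id;
    long ts;          // Update Sequence Number assigned by the server
    boolean deleted;  // tombstone flag
    boolean dirty;    // set on local changes, reset after upload
    String payload;

    Record(String id, long ts, boolean deleted, boolean dirty, String payload) {
        this.id = id; this.ts = ts; this.deleted = deleted;
        this.dirty = dirty; this.payload = payload;
    }
}

// Stand-in for the client's local storage for one sync table.
class LocalStore {
    final Map<String, Record> rows = new HashMap<>();

    // Step 3: the latest Ts seen locally, sent to the server
    // to request only incremental changes.
    long latestTs() {
        long max = 0;
        for (Record r : rows.values()) max = Math.max(max, r.ts);
        return max;
    }

    // Step 6: insert or update the received server changes.
    void applyServerChanges(List<Record> serverChanges) {
        for (Record server : serverChanges) {
            Record local = rows.get(server.id);
            if (local == null) {
                // New to this client; skip tombstones (nothing to delete locally).
                if (!server.deleted) rows.put(server.id, server);
            } else if (local.dirty) {
                // Changed on both sides: resolve the conflict
                // ("server wins" here) and reset the dirty flag.
                server.dirty = false;
                rows.put(server.id, server);
            } else {
                rows.put(server.id, server);
            }
        }
    }
}
```

The upload phase (steps 7-10) would then walk the dirty records, POST or PUT them, and copy the Ts returned by the server back into the local rows before clearing the dirty flags.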

For conflict resolution, the easiest approach is to update the records with the server version (server always wins) and show a conflict log at the end of the sync.

Monday, 22 July 2013

Managing hiring process in tech-sector


Recently we used a five-stage hiring process to fill two openings for Senior .NET Developers. The process was based on Jeff Atwood's article "How to hire a programmer", aiming to minimize the time wasted on unqualified candidates and find a good fit.

The idea was to dedicate our time only to candidates who passed an online technical test. We didn't bother studying the received CVs, as the recruitment agencies already filtered applicants to match our job specs (well, maybe just a short glance to ensure the candidate claimed the relevant experience).

Each candidate had to go through five stages listed below:
  • Online technical test (15-30 minutes) – we used Interview Zen where we created the test with three programming questions (SQL, C# collections, JavaScript).
  • Portfolio check - successful candidates had to share some code they wrote (open-source / personal projects, blog posts, etc.)
  • Phone interview (20-30 minutes) – a few programming related questions such as: What is SOLID? When do you know your code is ready for production? Etc.
  • Audition project (a few hours to complete) – we asked candidates whether they were willing to solve a real business problem in their spare time.
  • Face-to-face interview (1 - 1.5 hours) – an in-person interview to confirm the candidate is the right person.
This process worked quite well for us, and we hired interesting and skilled people. The problem was in preparing and managing the hiring process. We had to use a system (a SharePoint website) to store and update the list of candidate details, interview results, and team feedback. Searching for the right test questions and preparing the audition project was not easy, and we had to manually send email notifications to the candidates to ensure they were notified at every stage whether they had succeeded or been rejected.
We could definitely have saved time and been more efficient if we'd had a system to organize and manage the hiring process. After some research I came to the conclusion that we needed a web-based hiring management system to manage the entire hiring process in one place.

So, let me introduce a cloud-based Hiring Management System that will help manage the hiring process in the tech sector. The system will allow you to:
  • Manage job advertisements
  • Store candidate details and interview history in systematic order
  • Track all candidate's activities
  • Keep Interview configurations for the future
  • Customize the Hiring Process
  • Create on-line technical tests and audition projects
  • Prepare questionnaires for interviews
  • Manage interview stages
  • Remind about scheduled interviews
  • Automatically send email alerts and notifications
  • Collaborate on candidate performance with team members
I have set up a landing page here where you can find a more detailed description of the features with some screenshots. Please subscribe to the mailing list if you think the system would be useful, or leave your comments!

Thursday, 16 May 2013

Send emails with SQL Server using Html files


Sending emails from MS SQL Server using Database Mail is easy. Once Database Mail is enabled and configured (see this blog for a quick configuration), emails can be sent using the SQL script below. Note that sp_send_dbmail requires at least one recipient; the address here is a placeholder:
USE msdb
EXEC sp_send_dbmail @profile_name='EmailProfile',
@recipients='recipient@example.com',
@subject='Test message',
@body='This is a test message.'
In cases where there is a need to send emails in HTML format, creating the content inside a SQL script is not so convenient.
With the help of the OPENROWSET function we can read the content of an HTML file which contains properly formatted and tested HTML.

For example, below is the HTML file, which we can easily open in a browser to validate that it displays the way we want. It contains the html and body tags which wrap the content that will be inserted into the email's body.

<html>
<body>
Test Email

This is a test message
</body>
</html>
Below is the SQL script which sends the email by reading the designated HTML file. The script removes the leading and trailing html and body tags and assigns the HTML content to the email's body.
DECLARE @SmtpProfile VARCHAR(50) = 'EmailProfile';
DECLARE @html_file NVARCHAR(100) = 'c:\test-email.html';
DECLARE @Subject NVARCHAR(255) = 'Test Message';
DECLARE @Recipients VARCHAR(255) = 'recipient@example.com';
DECLARE @Body NVARCHAR(MAX);

DECLARE @ParmDefinition NVARCHAR(MAX) = N'@Body NVARCHAR(MAX) OUTPUT';
DECLARE @sql NVARCHAR(MAX) = 'SET @Body = (SELECT REPLACE(REPLACE(f.BulkColumn, ''<html><body>'', ''''), ''</body></html>'', '''') FROM OPENROWSET ( BULK ''' + @html_file + ''', SINGLE_NCLOB ) f)';

EXEC sp_executesql @sql, @ParmDefinition, @Body=@Body OUTPUT;

EXEC msdb..sp_send_dbmail @profile_name=@SmtpProfile,
                          @recipients=@Recipients,
                          @subject=@Subject,
                          @body=@Body,
                          @body_format='HTML';

Please note that the OPENROWSET function in the example above uses the SINGLE_NCLOB flag, which reads the file as nvarchar for Unicode support. For varchar, use SINGLE_CLOB.

Pretty straightforward, isn't it?

Tuesday, 14 May 2013

Create Google Charts in Asp.Net MVC by generating JSON data on the SQL SERVER


The purpose of this article is to share an implementation of Google Charts where the data is retrieved from SQL Server and the JSON is created on the SQL Server itself.

Google chart tools provide web developers with a rapid way of creating charts from data. All charts are populated with data stored in a DataTable object. The DataTable object can be filled with data using the JSON format.
There are several different ways of creating and populating a DataTable, and they all have some advantages and disadvantages (more about it here).
The application I needed to add Google Charts to was an ASP.NET MVC app using NHibernate with MS SQL Server.
The idea is to create all the required data on the SQL Server using stored procedures to gain performance benefits, minimize the effort of creating JSON for each chart, and follow the DRY principle.
Once the required SQL stored procedure is created, we convert the result table data into JSON and pass it into our DataTable constructor.

Let’s say we need to create the bar chart showing the number of items by location.
Our stored procedure will look like the one below: we create the string of columns, populate the temp table "#chart_data" with data, and call another stored procedure to convert the content of the temp table into JSON.
create procedure [dbo].[GetItemsByLocationAsJson]
as
set nocount on;
/* Set your columns data here */
/* The result string should look like the one below */
/* "cols": [{"id":"","label":"Topping","type":"string"},{"id":"","label":"Slices","type":"number"}] */
declare @columns_list varchar(max) = '"cols":[' +
'{"id":"Alocation","label":"Location","type":"string"}' +
',{"id":"ItemCount","label":"Items Count","type":"number"}' +
']';

select * into #chart_data from ( 
 select 'Location 1' as Alocation, 5 as ItemCount
 union all
 select 'Location 2' as Alocation, 10 as ItemCount
 union all
 select 'Location 3' as Alocation, 15 as ItemCount
 union all
 select 'Location 4' as Alocation, 20 as ItemCount
 ) as t

exec dbo.ConvertTableToGoogleChartJson @columns_list;

drop table #chart_data;  
If we run this stored procedure we’ll get the following result:
{"cols":[{"id":"Alocation","label":"Location","type":"string"},{"id":"ItemCount","label":"Items Count","type":"number"}],
"rows":[{"c":[{"v":"Location 1"},{"v":5}]},{"c":[{"v":"Location 2"},{"v":10}]},{"c":[{"v":"Location 3"},{"v":15}]},{"c":[{"v":"Location 4"},{"v":20}]}]}
The JSON data is generated by sorting the column names in ascending order.
Note that the order of columns and rows should be in sync, i.e. the generated JSON data in the rows should relate to the correct columns.
The correct ordering between columns and rows can be achieved by choosing names that sort correctly when populating the #chart_data table. In our example we name the location field "Alocation" instead of "Location" to make it sort first. Thus we ensure the data for "Alocation" and "ItemCount" will be generated in the correct order.

Below is the code for the ConvertTableToGoogleChartJson stored procedure.
This generic procedure generates the appropriate JSON based on the data in the #chart_data table and prepends the column list passed as a parameter. Most of the code is based on this answer from StackOverflow (thanks to pyrospade) and uses metaprogramming to boost performance.
create procedure [dbo].[ConvertTableToGoogleChartJson]
 @columns_list varchar(max)
as
set nocount on;

    declare @columns table (
    id int identity primary key,
    name sysname,
    datatype sysname,
    is_number bit,
    is_date bit);

    insert into @columns(name, datatype, is_number, is_date)
    select, as datatype,
     case when is not null
    then 1 else 0
     end as is_number,
     case when is not null
    then 1 else 0
     end as is_date
 from tempdb.sys.columns
 join tempdb.sys.types
  on (columns.system_type_id = types.system_type_id)
 left join (values ('int'), ('real'), ('numeric'),
       ('decimal'), ('bigint'), ('tinyint')) as number_types(name)
  on ( =
 left join (values ('date'), ('datetime'), ('datetime2'),
       ('smalldatetime'), ('time'), ('datetimeoffset')) as date_types(name)
  on ( =
 where columns.object_id = OBJECT_ID('tempdb..#chart_data') and != 'sysname';

 declare @row_list varchar(max) = STUFF((select '+'',''+''{"v":''+' + 
  case when is_number = 1 then 'COALESCE(LTRIM(' + QUOTENAME(name) + '),''null'')'
    when is_date = 1 then 'COALESCE(QUOTENAME(LTRIM(convert(varchar(max),' + QUOTENAME(name) + ', 126)),''"''),''null'')'
    else 'COALESCE(QUOTENAME(' + QUOTENAME(name) + ',''"''),''null'')'
    end + '+''}'''
  from @columns order by name  for xml path('')), 1, 5, '');

     create table #json_result (id int identity primary key, line varchar(max));

     declare @sql varchar(max) = REPLACE('insert into #json_result ' + 'select '',{"c":[''+{f}+'']}'' ' + 'from #chart_data', '{f}', @row_list);

     exec(@sql);

     update #json_result set line = STUFF(line, 1, 1, '') where id = 1;

     declare @row_json varchar(max) = (select line as [data()] from #json_result for xml path(''));

     select ('{' + @columns_list + ',"rows":[' + @row_json + ']}') as Result;

     drop table #json_result;
The point here is that the ConvertTableToGoogleChartJson stored procedure can be reused by other calls to speed up the development of Google charts.

Here is a sample controller action calling our stored procedure and returning the JSON result:
public ActionResult GetItemsByLocationData()
{
    var jsonText = this.dbService.GetItemsByLocationData();
    return this.Content(jsonText, "application/json");
}
Our view can make a jQuery ajax call to pass the JSON into the DataTable constructor:
function drawPieChart() {
    var jsonData = $.ajax({
        url: 'MyController/GetItemsByLocationData',
        dataType: 'json',
        async: false
    }).responseText;

    var data = new google.visualization.DataTable(jsonData);
    // Set chart options
    var options = {title: 'Items by Location'};
    // Instantiate and draw our chart, passing in some options.
    var chart = new google.visualization.PieChart(document.getElementById('pie_chart_div'));
    chart.draw(data, options);
}

That’s it. Happy coding!