Posted by: Vikas Sahni | February 8, 2011

Managing Clients – A Different Approach

Been a while since I last blogged, so here is a non-technical one about managing your clients in these difficult times. This is based on my talk at BizCamp Newry yesterday and the feedback from the audience. Special thanks to Chris McCabe, Eve Earley and Nichola Bates.

Unpaid Initial Meetings

Until you get a Purchase Order, the client is still a prospect. During this period, you may have to attend a number of meetings, which can be broadly classified into three categories:

  1. Fishing for information
  2. Genuinely looking to gauge competence
  3. Planning for Kick-off

There is no guarantee that you will get the work; treat these meetings primarily as marketing activities.


Delay in POs

The next stage where you need to change your approach is when the client has given a verbal go ahead. Times are different, and anyone can go under for no fault of their own these days. Therefore, keep in mind that:

  1. A verbal confirmation is NOT good enough
  2. Start work if you have time, but DO NOT tell the client
  3. Insist on milestone-based agreements / statements of work
  4. Push delivery dates back if the PO is delayed


Things to watch out for

Now you have the PO…but times are still different. Therefore, watch out for the following incoming missiles from your client:

  1. Change Requests
  2. Unplanned Demos
  3. Pushing Delivery dates forward
  4. Management changes


Scope Creep

This is a double-edged sword – handled well, it can be your best source of revenue; at the other extreme, it can result in significant losses. Keep an eye on areas where scope can expand if you are not careful, such as:

  1. Expanding Grey Areas – pleading common sense
  2. New requirements raised as bug reports
  3. Related, but Out of Scope


Delay in Payments

In order to avoid hassles, consider the following…

  1. Now you know why the PO…
  2. and the Milestones!
  3. Think before going legal – do you have the time and money?


Posted by: Vikas Sahni | December 11, 2010

Avoiding the Recession

I was invited to speak to the Dundalk/Newry Business Club last week about avoiding the recession, and so this post…

Acknowledge the Recession, and decide not to participate.  After that, read on…

The new mantras to beat the slowdown are:

  1. Core Competency – Stick to what you do best, do not branch out into unrelated areas, do not spread yourself thin
  2. Must Have Product / Service – people are being cautious, are afraid to spend.  So make sure that whatever product or service you are offering is something people need.  The days of spending on optionals and luxuries will be back, but it may be a long time…and you need to pay bills every month…
  3. Customer Return on Investment – make sure that your customers can clearly see how they will get their money back.  If they can see the RoI, they will spend.  Most businesses (except building/construction) still have funds, but are not willing to spend.  If you can demonstrate that paying you a dollar today will give them / save them two dollars tomorrow, you will get the order!
  4. Value for Money – if it takes 100 to produce a good or service, do not charge 1000…order-of-magnitude margins are a thing of the past…unless you have something that will reverse aging…
  5. Sales Focus – sell, sell, sell…you may have a must-have offering that gives a return within a month and costs very little…but if the prospective buyers do not hear about it, there are plenty of other people out to get the same spending money…

and the things to watch out for are:

  1. Is there an elephant in your space? A big player can crush you without even noticing…someone I knew spent a year developing a service he was going to offer to the big sellers on eBay…one day eBay announced new features including this service…the idea of the year became worthless…
  2. Are you ahead of the pack?  Is there a market?  Are people ready to buy?  Creating an entirely new market was tough even in the boom days, and in these times…
  3. Can anyone catch up quickly?  There are companies who specialise in being second – you do the R&D, create market awareness, and these guys come along with a better, cheaper product by cloning and improving your offering…
  4. Have you done your homework?  Search the web, ask friends and colleagues…a lot of people are trying to float new ventures these days…and you are not unique…
  5. Will people pay???  The crunch!  Do not give away free samples….you may discover too late that people are not willing to pay…
Posted by: Vikas Sahni | November 17, 2010

Windows Azure is not just about Roles


Mainly based on the ‘Windows Azure Programming Model’ white paper by David Chappell.

The changes mainly concern how role instances interact in three areas:

  1. Operating system
  2. Persistent storage
  3. Other role instances.


In Windows Azure, the administrator of all of the servers is the fabric controller. It decides when VMs or machines should be rebooted, and for Web and Worker roles (although not for VM roles), the fabric controller also installs patches and other updates to the system software in every instance. This is very different from a normal Windows machine, where the administrator(s) of that machine have control. S/he can reboot VMs or the machine they run on, install Windows patches, and must do whatever else is required to keep it available.

This approach creates restrictions.  As the fabric controller can modify the operating system at will, there is no guarantee that changes a role instance makes to the system it’s running on won’t be overwritten. Besides, the specific virtual (and physical) machines an application runs in change over time. This implies that any changes made to the default local environment must be made each time a role instance starts running.  Anybody creating a Windows Azure application needs to understand what the fabric controller is doing, then design applications accordingly.


Applications use data, and the way data is stored and accessed must also change in order to make applications more available and more scalable. The big changes are these:

  • Storage must be external to role instances. Even though each instance is its own VM with its own file system, data stored in those file systems isn’t automatically made persistent. If an instance fails, any data it contains may be lost. This implies that for applications to work correctly in the face of failures, data must be stored persistently outside role instances. Another role instance can now access data that otherwise would have been lost if that data had been stored locally on a failed instance.
  • Storage must be replicated. Just as a Windows Azure application runs multiple role instances to allow for failures, Windows Azure storage must provide multiple copies of data. Without this, a single failure would make data unavailable, something that’s not acceptable for highly available applications.
  • Storage must be able to handle very large amounts of data. Traditional relational systems aren’t necessarily the best choice for very large data sets. Since Windows Azure is designed in part for massively scalable applications, it must provide storage mechanisms for handling data at this scale.

To allow this, Azure has blobs for storing binary data along with a non-SQL approach called tables for storing large structured data sets.

While applications see a single copy, Windows Azure storage replicates all blobs and tables three times.

This improves the application’s availability, since data is still accessible even when some copies are unavailable. And because persistent data is stored outside any of the application’s role instances, an instance failure loses only whatever data it was using at the moment it failed.
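To make the idea concrete, here is a toy sketch (illustrative only, not how Azure storage is actually implemented) of a store that keeps three copies of every value, so a read still succeeds after one copy is lost:

```python
class ReplicatedStore:
    """Toy store that keeps REPLICAS copies of every value,
    mimicking how Windows Azure storage replicates blobs and tables."""
    REPLICAS = 3

    def __init__(self):
        # One dict per replica; a real system would use separate machines.
        self.copies = [dict() for _ in range(self.REPLICAS)]

    def put(self, key, value):
        for copy in self.copies:        # a write goes to all replicas
            copy[key] = value

    def get(self, key):
        for copy in self.copies:        # read from the first live replica
            if key in copy:
                return copy[key]
        raise KeyError(key)

    def fail_replica(self, i):
        self.copies[i] = {}             # simulate losing one copy

store = ReplicatedStore()
store.put("order:42", "pending")
store.fail_replica(0)                   # one replica dies...
print(store.get("order:42"))            # ...data is still readable: "pending"
```

The application only ever sees the single logical copy; the fan-out on writes and the fall-through on reads happen below the API, which is the point of the paragraph above.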

The Windows Azure programming model requires an application to behave correctly when a role instance fails. To do this, every instance in an application must store all persistent data in Windows Azure storage or another external storage mechanism (such as SQL Azure, Microsoft’s cloud-based service for relational data).

There is one more option introduced recently, Windows Azure drives.  Normally, any data an application writes to the local file system of its own VM can be lost when that VM stops running. Windows Azure drives change this, using a blob to provide persistent storage for the file system of a particular instance. These drives have some limitations—only one instance at a time is allowed to both read from and write to a particular Windows Azure drive, for example, with all other instances in this application allowed only read access—but they can be useful in some situations.


When an application is divided into multiple parts, those parts commonly need to interact with one another. In a Windows Azure application, this is expressed as communication between role instances. For example, a Web role instance might accept requests from users, and then pass those requests to a Worker role instance for further processing.

The way this interaction happens isn’t identical to how it’s done with ordinary Windows applications. Once again, a key fact to keep in mind is that, most often, all instances of a particular role are equivalent—they’re interchangeable. This means that when, say, a Web role instance passes work to a Worker role instance, it shouldn’t care which particular instance gets the work. In fact, the Web role instance shouldn’t rely on instance-specific things like a Worker role instance’s IP address to communicate with that instance. More generic mechanisms are required.

The most common way for role instances to communicate in Windows Azure applications is through Windows Azure queues.

Windows Azure queues don’t support transactional reads, and so they don’t guarantee exactly-once, in-order delivery.
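Because delivery is at-least-once rather than exactly-once, queue consumers should be written to be idempotent. A minimal sketch (plain Python, not the actual Azure queue API) of a consumer that tolerates redelivered messages:

```python
from collections import deque

# Toy at-least-once queue: a message may be delivered more than once,
# just as a cloud queue may redeliver after a visibility timeout.
queue = deque(["msg-1", "msg-2", "msg-1"])  # "msg-1" is redelivered

processed = set()
results = []

def handle(msg):
    if msg in processed:    # idempotency check: ignore duplicates
        return
    processed.add(msg)
    results.append("processed " + msg)

while queue:
    handle(queue.popleft())

print(results)  # each message takes effect exactly once
```

The duplicate delivery does no harm because the handler records what it has already seen; in a real Worker role the `processed` set would itself live in durable storage, not in instance memory.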

Most of the time, queues are the best way for role instances within an application to communicate. It’s also possible for instances to interact directly, however, without going through a queue. To allow this, Windows Azure provides an API that lets an instance discover all other instances in the same application that meet specific requirements, then send a request directly to one of those instances. In the most common case, where all instances of a particular role are equivalent, the caller should choose a target instance randomly from the set the API returns. This isn’t always true—maybe a Worker role implements an in-memory cache with each role instance holding specific data, and so the caller must access a particular one. Most often, though, the right approach is to treat all instances of a role as interchangeable.

Next post on Azure will again be based on David Chappell’s paper, with some bits from my own experience – some tips on how to port existing Windows Server applications to Azure.

Posted by: Vikas Sahni | November 16, 2010

Why Tech Start-ups Fail – Lessons from personal experience

This is something I was planning to write about later, but have been encouraged by the response I got at my talk on this topic at BizCamp Belfast.  Therefore, I started writing it on the train back home…

We failed twice!

The original intent was to provide a multimedia content development service, which never really took off.  The second attempt was a multimedia content development product that was released on time, as per specs, to excellent feedback.  However, it was a nice-to-have product…and people are no longer spending money on luxuries.  Now we are a software development service provider with a solid client base.

The usual challenges

Like any new venture, a tech start-up faces challenges due to:

  • Lower Cost Competitors – and you may not be aware of them till too late
  • Budget Constraints – unable to raise as much money as planned
  • Specifications – over/under specified product or service, you will spend too much time and money building it in the first case, and people won’t pay for it in the second one
  • End-user expectations – people want value for money.

The Comfort Zones

Technology start-ups are usually founded by technical people, who very easily drift into their comfort zone – development.  Most ventures go through the steps listed below:

  1. Market Research / Feedback
  2. Fund Raising
  3. Development
  4. Beta testing
  5. Instead of selling, loop back to 1!

You keep looping back till you run out of money or your investors run out of patience.  Go, start selling as soon as possible.  Do not go back to development over and over again, get out of your comfort zone.  Do not try to build in every suggested feature, and do not hesitate to cut your losses if there is no market.

Things to watch out for:

  • Is there an elephant in your space?  A big player, who may already be building what you are thinking of…go talk, elephants are friendly.
  • Are you ahead of the pack?  But not too far ahead, or there may be no demand!
  • Can anyone catch up quickly?  If so, there are many companies out there specialising in being the second…they will spend ten times the money that you can afford on development and marketing, add extra features, give away their version etc. etc. and overtake you
  • Have you done your homework?  There may be others who have the same idea, and they may get to market around the same time…no harm in asking your network if anyone has heard about anything similar
  • Will people pay???  The biggest question of them all…

Human Expertise

You need to know what works, and equally important – what does not work.  Learning is an evolutionary process, so ask for advice…and listen to it…

The barriers to this are well known: a lot of people are not willing to share knowledge, and many are not willing to receive it. Admitting failure is not in human nature, so we tend not to ask for help when we most need it.

In a start-up, a lot of knowledge is in the head of the core people.  When one of them leaves, the loss of knowledge can cause significant damage.


Managing a start-up is complex.  During your first venture, it is difficult to realise that all the specialist support roles in a large enterprise are no longer available and you have to do everything yourself.  ‘Start’ is the keyword in ‘start-up’: you have to start everything yourself.

The challenge is that over-simplification leads to failures.  Controlled experiments are not possible and generalisations are not applicable…you have to try to get things right the first time.

What worked for us

Focussing on our Core Competency was the key.  However, that alone was not enough.  We learnt through a number of near-failures that it was equally important to:

  • Communicate continuously with all stakeholders (clients, employees, partners etc.) using all possible means – emails,  voice, chat, blogs and of course in-person meetings
  • Leverage Connections – use Facebook, Twitter, Linked In, attend conferences, seminars etc.

Hope you find this useful…as always, comments are most welcome!

Posted by: Vikas Sahni | November 14, 2010

The Windows Azure Programming Model

This post is based primarily on David Chappell’s recent white paper

Windows Azure requires a DIFFERENT programming model

This is because Azure is a PaaS (Platform as a Service) in the cloud, not just VMs / IaaS (Infrastructure as a Service) in the cloud.  Most of a Windows developer’s skills still apply; however, there are clear differences.


To get the benefits it promises, the Windows Azure programming model imposes three rules on applications:

Rule 1: Built from one or more roles. Web roles, Worker roles and coming soon…VM roles

Rule 2: Runs multiple instances of each role.

Rule 3: Behaves correctly when any role instance fails.

Windows Azure can run applications that don’t follow some or all of these rules—it doesn’t actually enforce them. Instead, the platform simply assumes that every application obeys all three. If you do not understand and follow the model’s rules, your application may not get the benefits Azure promises.

A role includes a specific set of code, such as a .NET assembly, and it defines the environment in which that code runs. Windows Azure today lets developers create two different kinds of roles:

  1. Web role: As the name suggests, Web roles are largely intended for logic that interacts with the outside world via HTTP. Code written as a Web role typically gets its input through Internet Information Services (IIS), and it can be created using various technologies, including ASP.NET, Windows Communication Foundation (WCF), PHP, and Java.
  2. Worker role: Logic written as a Worker role can interact with the outside world in various ways—it’s not limited to HTTP. For example, a Worker role might contain code that converts videos into a standard format or calculates the risk of an investment portfolio or performs some data analysis.

And coming soon…

  1. Virtual Machine (VM) role: A VM role runs an image—a virtual hard disk (VHD)—of a Windows Server 2008 R2 virtual machine. This VHD is created using an on-premises Windows Server machine, then uploaded to Windows Azure. The VHD can then be loaded on demand into a VM role and executed.

All three roles are useful. The VM role was announced at PDC 2010 and is not yet available as of this writing.

A typical application should use Web roles to accept HTTP requests from users, then hand off the work requested by users to a Worker role. The primary reason for this two-part breakdown is that dividing tasks in this way can make an application easier to scale.  It’s also fine for a Windows Azure application to consist of just a single Web role or a single Worker role—you don’t have to use both. A single application can even contain different kinds of Web and Worker roles. For example, an application might have one Web role that implements a browser interface, perhaps built using ASP.NET, and another Web role that exposes a Web services interface implemented using WCF. Similarly, a Windows Azure application that performed two different kinds of data analysis might define a distinct Worker role for each one.

A developer can tell Windows Azure how many instances of each role to run through a service configuration file.  Every instance of a particular role runs the exact same code. In fact, with most Windows Azure applications, each instance is just like all of the other instances of that role—they’re interchangeable.   Windows Azure automatically load balances HTTP requests across an application’s Web role instances. This load balancing doesn’t support sticky sessions, so there’s no way to direct all of a client’s requests to the same Web role instance. Storing client-specific state, such as a shopping cart, in a particular Web role instance won’t work, because Windows Azure provides no way to guarantee that all of a client’s requests will be handled by that instance. Instead, this kind of state must be stored externally.
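A toy sketch of why external state works where instance-local state does not (the shared dictionary stands in for Windows Azure storage or SQL Azure; the class and names are invented for illustration):

```python
# Toy shared session store: because Windows Azure's load balancer does not
# support sticky sessions, state like a shopping cart must live outside
# any one Web role instance.
shared_store = {}   # stands in for Windows Azure tables/blobs or SQL Azure

class WebRoleInstance:
    def __init__(self, name, store):
        self.name = name
        self.store = store      # every instance talks to the same store

    def add_to_cart(self, client_id, item):
        self.store.setdefault(client_id, []).append(item)

    def view_cart(self, client_id):
        return self.store.get(client_id, [])

# Two interchangeable instances behind the load balancer
a = WebRoleInstance("IN_0", shared_store)
b = WebRoleInstance("IN_1", shared_store)

a.add_to_cart("alice", "book")   # first request happens to hit IN_0
print(b.view_cart("alice"))      # next request hits IN_1; the cart survives
```

Had each instance kept the cart in its own memory, the second request would have found an empty cart; keying the state by client in a shared store is what makes the instances truly interchangeable.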

An application that follows the Windows Azure programming model must be built using roles, and it must run two or more instances of each of those roles. It must also behave correctly when any of those role instances fails.  A role instance can fail if the computer it is running on fails, if the physical network connection to that machine fails, and so on. The failure might cause the application to run more slowly, but as seen by a user, it still behaves correctly.  This requirement to work correctly during partial failures is fundamental to the Windows Azure programming model.  If all instances of a particular role fail, an application will stop behaving as it should—this can’t be helped. In fact, the service level agreement (SLA) for Windows Azure requires running at least two instances of each role. Applications that run only one instance of any role can’t get the guarantees this SLA provides.

One more important point to keep in mind is that all of these rules also apply to applications that use VM roles. Just like the others, every VM role must run at least two instances to qualify for the Windows Azure SLA, and the application must continue to work correctly if one of these instances fails. Even with VM roles, Windows Azure still provides a form of PaaS—it’s not traditional IaaS.


Applications built using the Windows Azure programming model can be easier to administer, more available, and more scalable than those built on traditional Windows servers.

The administrative benefits of Windows Azure flow largely from the fabric controller. Like every operating system, Windows must be patched, as must other system software. In on-premises environments, doing this typically requires some human effort. In Windows Azure, however, the process is entirely automated: The fabric controller handles updates for Web and Worker role instances (although not for VM role instances). When necessary, it also updates the underlying Windows servers those VMs run on.

The Windows Azure programming model helps improve application availability in the following ways:

  1. Protection against hardware failures. Because every application is made up of multiple instances of each role, hardware failures—a disk crash, a network fault, or the death of a server machine—won’t take down the application.
  2. Protection against software failures. Along with hardware failures, the fabric controller can also detect failures caused by software. If the code in an instance crashes or the VM in which it’s running goes down, the fabric controller will start either just the code or, if necessary, a new VM for that role.
  3. The ability to update applications with no application downtime.  An application built using the Windows Azure programming model can be updated while it’s running—there’s no need to take it down. To allow this, different instances for each of an application’s roles are placed in different update domains. When a new version of the application needs to be deployed, the fabric controller can shut down the instances in just one update domain, update the code for these, then create new instances from that new code. Once those instances are running, it can do the same thing to instances in the next update domain, and so on. While users might see different versions of the application during this process, depending on which instance they happen to interact with, the application as a whole remains continuously available.
  4. The ability to update Windows and other supporting software with no application downtime. The fabric controller assumes that every Windows Azure application follows the three rules listed earlier, and so it knows that it can shut down some of an application’s instances whenever it likes, update the underlying system software, then start new instances. By doing this in chunks, never shutting down all of a role’s instances at the same time, Windows and other software can be updated beneath a continuously running application.  
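The update-domain mechanics in point 3 can be simulated in a few lines (illustrative only; the domain assignments and version numbers are hypothetical):

```python
# Toy rolling update across update domains: at no point are all
# instances of the role down at once.
instances = {
    "IN_0": {"domain": 0, "version": 1},
    "IN_1": {"domain": 1, "version": 1},
    "IN_2": {"domain": 0, "version": 1},
    "IN_3": {"domain": 1, "version": 1},
}

def rolling_update(instances, new_version):
    domains = sorted({i["domain"] for i in instances.values()})
    for d in domains:
        # Only this domain's instances go down; the rest keep serving.
        serving = [n for n, i in instances.items() if i["domain"] != d]
        assert serving, "never take every instance down at once"
        for inst in instances.values():
            if inst["domain"] == d:
                inst["version"] = new_version  # restart on the new code

rolling_update(instances, 2)
print(all(i["version"] == 2 for i in instances.values()))
```

During the loop, instances in the untouched domain keep serving, which is why users may briefly see a mix of versions but never see the application down.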

 Availability is important for most applications—software isn’t useful if it’s not running when you need it—but scalability can also matter. The Windows Azure programming model helps developers build more scalable applications in two main ways:

  1. Automatically creating and maintaining a specified number of role instances. As already described, a developer tells Windows Azure how many instances of each role to run, and the fabric controller creates and monitors the requested instances. This makes application scalability quite straightforward: Just tell Windows Azure what you need.
  2. Providing a way to modify the number of executing role instances for a running application: For applications whose load varies, scalability is more complicated. Setting the number of instances just once isn’t a good solution, since different loads can make the ideal instance count go up or down significantly. To handle this situation, Windows Azure provides both a Web portal for people and an API for applications to allow changing the desired number of instances for each role while an application is running.
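A hypothetical scaling rule, illustrating the kind of logic an application might run against the instance-count API (the thresholds and the function itself are invented for this sketch):

```python
def desired_instance_count(queue_length, per_instance=100,
                           floor=2, ceiling=20):
    """Toy scaling rule: size the role to the current backlog, but never
    go below the two instances the Windows Azure SLA requires."""
    needed = -(-queue_length // per_instance)   # ceiling division
    return max(floor, min(ceiling, needed))

print(desired_instance_count(50))      # light load -> the SLA floor of 2
print(desired_instance_count(550))     # heavier load -> 6 instances
print(desired_instance_count(99999))   # capped at the ceiling of 20
```

The computed number would then be submitted through the management portal or API mentioned above; the floor of two keeps the application within the SLA even when the queue is empty.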

 Getting all of the benefits that Windows Azure offers requires conforming to the rules of its programming model. Moving existing applications from Windows Server to Windows Azure can require some work, a topic I will address in more detail in a later post. For new applications, however, the argument for using the Windows Azure model is clear. Why not build an application that costs less to administer? Why not build an application that need never go down? Why not build an application that can easily scale up and down?
