Monday, July 28, 2008

Mean, Lean and the Almighty!

Is 6-Sigma the answer to managing the Cost of Poor Quality for IT projects?
That's a mean sales gimmick... merely staring at the UCL and LCL for metrics, determining the means, and reducing the variability does not reduce the CoPQ.


Is Toyota's Production System, abstracted as Lean management, a panacea for IT services productivity?
It is a pretty interesting question, and its answer depends on whom, when, how and with what intent you are asking.


Can applying any of the above methods lead to a 30% gain in IT productivity?
Maybe the Almighty can answer this!


Is it better than using Agile methodologies for Software development?
The answer to the above, in my opinion, is fairly easy for anybody who is seasoned in the IT industry and has dealt with a typical SDLC/V-process model for their software products' development.

Analytical framework for decision criteria: Onsite-Offshore mix in outsourcing engagement

Having used AHP and a simple linear graph to translate the AHP score into an onsite-offshore mix for an IT outsourcing solution (some 4 years back), I realized the futility of a model premised on MECE criteria for the factors. On reading about ANP (http://chern.ie.nthu.edu.tw/IEEM7103/937805-paper-1-may6.pdf) and weighted graphs, I was curious which of these techniques is most suitable for determining the onshore-offshore mix for an outsourcing solution.

ANP: This technique is better suited than AHP as it considers the internal dependence and cross-dependence between factors. The issue, however, is that it still does a pairwise comparison within each cluster, as we did in the AHP model, while in practice some factors can appear in more than one cluster and can have interdependencies.

Weighted graph: The weighted graph model is better suited than AHP and ANP, as it retains the benefits of ANP without ANP's issues (factor interrelationships across clusters can be suitably represented).

The scores we get from ANP or the weighted graph can then be transformed into an onshore-offshore mix.
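For readers unfamiliar with the AHP scoring step mentioned above, here is a minimal sketch of how the factor weights fall out of a pairwise comparison matrix via the principal eigenvector (power iteration). The three factors and their underlying weights are hypothetical, and the matrix is built to be perfectly consistent for illustration:

```python
# A minimal sketch of the AHP priority-derivation step: given a pairwise
# comparison matrix, the principal eigenvector gives the factor weights.

def ahp_priorities(matrix, iters=100):
    """Approximate the principal eigenvector by power iteration."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(v)
        w = [x / total for x in v]
    return w

# Hypothetical judgments for three factors (say cost, quality, risk),
# built from underlying weights 0.5, 0.3, 0.2 so the answer is known.
true_w = [0.5, 0.3, 0.2]
matrix = [[wi / wj for wj in true_w] for wi in true_w]

weights = ahp_priorities(matrix)
print([round(x, 3) for x in weights])  # → [0.5, 0.3, 0.2]
```

A real AHP matrix holds human judgments and is rarely consistent, which is where a consistency-ratio check would come in before trusting the weights.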

Wednesday, July 23, 2008

Deciphering software complexity - Cohesion, Coupling, threading and concurrent engineering

I was curious to understand software complexity and concurrent programming, increasingly so as I was encountering several deals with a requirement for concurrent software engineering to reduce cycle time from requirements to production deployment, especially in AD engagements. It reminded me of complexity assessment through dependency matrices and the application of the DSM technique expounded by my former colleague Navneet. It also prompted me to revisit the cohesion, coupling, concurrent engineering and cyclomatic complexity topics in software engineering that I learnt long back in college. I never got the time to look at them until now, as I was thinking of a novel approach to handling this problem and writing a paper!

Cohesion is defined as the closeness of the relationship between a component's parts [Ian Sommerville on Software Engineering]. There are 8 different levels of cohesion, in order of increasing strength: coincidental, logical association, temporal, procedural, communicational, sequential, functional and object cohesion.

Coupling is the strength of interconnectedness between the components during design.

In general, reduced software complexity requires high cohesion and loose coupling.

For a concurrent engineering perspective, one can look at how multi-core processors handle concurrent execution. This takes me back to the basics of concurrent execution and synchronization [Terrence W. Pratt, Programming Languages] that I read while at college. Concurrent execution is facilitated through synchronization techniques such as interrupts, semaphores, guarded statements, multi-tasking and so on. The paradigm for concurrent execution has now shifted to multi-threading, boosted by multi-processor and multi-core systems.
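To make the semaphore idea concrete, here is a small sketch (the worker function and thread counts are invented for illustration): several threads update a shared counter, and a semaphore with one permit serializes the read-modify-write so no update is lost.

```python
# Semaphore-based synchronization: four worker threads increment a shared
# counter; the single-permit semaphore acts as a mutex around the
# non-atomic read-modify-write.
import threading

counter = 0
sem = threading.Semaphore(1)  # one permit => mutual exclusion

def worker(increments):
    global counter
    for _ in range(increments):
        with sem:            # acquire before the critical section
            counter += 1     # read-modify-write, now safe

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # → 40000
```

Without the semaphore the interleaved updates could lose increments, which is exactly the hazard the synchronization primitives listed above exist to prevent.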

Which of the above techniques can we use to address software complexity and concurrent software development?

Friday, July 18, 2008

Breaking down the cost-to-serve parameter for service management optimization

The objective of managing a service (any service: Consulting, System Integration, Application Outsourcing, etc.) through the application of innovation techniques is to reduce the cost to serve. It is critical to understand the constituent factors that influence the cost-to-serve goal. The following diagram crisply captures the factor trees that influence the cost-to-serve goal.



Now we can capture the difference between the estimated/forecasted cost for each of the above parameters and the actuals measured during execution, and drive management and control to minimize the differential.
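As a toy sketch of that estimate-vs-actual tracking (the parameter names and figures are entirely hypothetical, since the factor-tree diagram is not reproduced here):

```python
# Track the forecast-vs-actual differential per cost-to-serve parameter
# and surface the largest gap as the first candidate for control.

forecast = {"labour": 120_000, "tools": 15_000, "governance": 8_000}
actual = {"labour": 131_000, "tools": 14_200, "governance": 9_500}

variance = {k: actual[k] - forecast[k] for k in forecast}
worst = max(variance, key=lambda k: abs(variance[k]))

print(variance)  # parameter-wise differential to manage down
print(worst)     # → 'labour' (largest absolute gap)
```

In practice each leaf of the factor tree would get its own line item, and the management cadence would drive the variances toward zero.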

Our transformation initiatives can be focused on reducing the estimated/forecast cost in the first place... we'll delve more into this topic later...

Any thoughts on what factors I might've missed?

Monday, July 14, 2008

Does too much focus on core competence lead to slow decadence...

I came across a very interesting interview of Richard Rumelt (strategy's strategist - http://www.mckinseyquarterly.com/Strategy/Strategic_Thinking/Strategys_strategist_An_interview_with_Richard_Rumelt_2039_abstract) on how misconceived the strategy plans of organizations are. According to the interview, they are nothing but 3-5 year rolling plans focusing on market share, lobbying for resource allocations and financial projections. Seldom does one see a concrete actionable to handle a changed scenario in the environment, innovation, or the tapping of an unmet customer value in an industry that could lead to incremental revenue for an organization.

Interestingly, he also believes that organizations, by focusing their strategy-plan discussions on core competence, often miss innovation opportunities that would sustain the organization in the long run. The only things that eventually sustain such large organizations are their captive customers and the strong networks in the business ecosystem, preempting the entry of innovative/entrepreneurial companies for the short run, and eventually yielding to them. This is not new to anyone who has read Clayton Christensen's "The Innovator's Dilemma". One can see this more prominently in the Telecom space. Imagine Telecom organizations leapfrogging to 4G without going through the arduous 3G journey... To do this, one needs to consider the following challenges:

1. Will the regulators allow?
2. Will the communication technology vendors right-price the 4G technologies, foregoing their massive investments in 3G technologies (EDGE, UMTS, etc.), leading to quicker adoption?
3. Will the consumer electronics companies (read: the handset providers) mass-produce their handsets at a reasonable price to consumers to facilitate adoption (of course, foregoing huge investments in the 3G network)?

All of the above remains to be seen... At least some of my neighbours in my apartment and my colleagues at work do not think 4G really has any scope unless one passes through the drudgery/decadence of 3G... Blame it all on core competence...

What do you think?

Wednesday, July 9, 2008

Illustration of a constraint model for optimal outsourcing decision

Let's assume the client has shared the following volumetrics and requested the service provider to bid for the application maintenance deal:

Guiding factors:

  Utilization %       60%
  Call Data Period    1 Month

Call Characteristics:

  Domain                Technology   Sev 1   Sev 2   Sev 3
  Billing Application   Java           3      15      13
  Customer Care         .Net           5      10      28

Required SLA (in minutes):

          Availability   Response   Resolution   Actual Resolution   Actual Resolution
                         Time       Time         (Billing)           (Customer Care)
  Sev 1   24/7             5           60            45                   60
  Sev 2   8/5             15          240           160                  173
  Sev 3   8/5            120          480           300                  390

Based on the above details we can apply the following step-wise approach to shape the deal:

Step 1: Assuming that the demand is even and the incoming calls follow a Poisson distribution, determine the arrival rate λ.

Since Sev 1 calls require 24/7 availability, we assume the pattern is evenly spread over 24 hours, 30 days and 60 minutes, so λ for Sev 1 is (number of calls)/(24*30*60). For the 8/5 severities, the divisor is the working minutes in the month (the figures below imply 8 hours a day over 20 working days).

  Arrival rate λ        Sev 1     Sev 2     Sev 3
  Billing Application   0.0001    0.0016    0.0014
  Customer Care         0.0001    0.0010    0.0029

Step 2: Assuming the service time follows an exponential distribution, determine the service rate μ.

Service rate μ = 1/(actual resolution time in minutes)

  Service rate μ        Sev 1     Sev 2     Sev 3
  Billing Application   0.0222    0.0063    0.0033
  Customer Care         0.0167    0.0058    0.0026

Step 3: Determine the number of resources r for each domain, where r = λ/(μ × utilization %):

                        Sev 1     Sev 2     Sev 3     Total     # of resources
  Billing Application   0.0052    0.4167    0.6771    1.0990         2
  Customer Care         0.0116    0.3003    1.8958    2.2078         3
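Steps 1-3 can be checked with a few lines of arithmetic. (The 20-working-day month for the 8/5 severities is an assumption implied by the published figures; utilization is the 60% guiding factor.)

```python
# Sanity check of Steps 1-3, reproducing the tables above.
import math

WINDOW = [24 * 30 * 60, 8 * 20 * 60, 8 * 20 * 60]  # minutes/month per severity
UTILIZATION = 0.60

calls = {"Billing Application": [3, 15, 13], "Customer Care": [5, 10, 28]}
resolution = {"Billing Application": [45, 160, 300],   # actual minutes
              "Customer Care": [60, 173, 390]}

resources = {}
for domain in calls:
    total = 0.0
    for n, t, window in zip(calls[domain], resolution[domain], WINDOW):
        lam = n / window                  # Step 1: arrival rate
        mu = 1 / t                        # Step 2: service rate
        total += lam / mu / UTILIZATION   # Step 3: r = lambda/(mu * util)
    resources[domain] = total
    print(domain, round(total, 4), "->", math.ceil(total))
# → Billing Application 1.099 -> 2
# → Customer Care 2.2078 -> 3
```

The fractional totals round up to the 2 and 3 resources shown in the table.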

Step 4: The deal optimization based on the above characteristics can be illustrated as follows:

Let's make the following assumptions:

1. We consider 2 locations, US and India, for this deal

2. We assume there are no shift requirements and the support will be on an on-call basis

3. There are only 2 levels in the workforce: Software Engineer and System Analyst

4. The cost for onshore-offshore is as identified in the following table:

  All figures in USD per hour   India    US
  System Analyst                  21     65
  Software Engineer               19     60

5. Let's assume the pyramid definition is as follows:

                      India    US    Engagement Pyramid
  System Analyst        5%     95%         10%
  Software Engineer    95%      5%         90%

The objective function can be laid out as follows:

Min X_onshore·C_onshore,SystemAnalyst·R_onshore,SystemAnalyst + X_onshore·C_onshore,SoftwareEngineer·R_onshore,SoftwareEngineer + X_offshore·C_offshore,SystemAnalyst·R_offshore,SystemAnalyst + X_offshore·C_offshore,SoftwareEngineer·R_offshore,SoftwareEngineer

Substituting the rates from the table above:

Min X_onshore·65·R_onshore,SystemAnalyst + X_onshore·60·R_onshore,SoftwareEngineer + X_offshore·21·R_offshore,SystemAnalyst + X_offshore·19·R_offshore,SoftwareEngineer

The constraints are defined as follows:

R_onshore,SystemAnalyst + R_onshore,SoftwareEngineer <= 5·X_onshore

R_offshore,SystemAnalyst + R_offshore,SoftwareEngineer <= 5·X_offshore

R_onshore,SystemAnalyst + R_offshore,SystemAnalyst <= 0.5

R_onshore,SoftwareEngineer + R_offshore,SoftwareEngineer <= 4.5

X_onshore + X_offshore = 1

X_offshore - X_onshore >= 0

Any optimization engineer will be able to solve the above using a tool to arrive at the optimal deal parameters. We did the above and identified the optimal solution.
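In place of an LP tool, a rough brute-force sketch of the model is below. The 5%-step grid and treating the 0.5/4.5 caps as exact headcounts are my assumptions; the rates and the offshore-majority constraint come from the formulation above.

```python
# Enumerate the onshore fraction of each role on a 5% grid, keep
# offshore-majority mixes, and pick the cheapest hourly cost.
from itertools import product

RATES = {("onshore", "SA"): 65, ("onshore", "SE"): 60,
         ("offshore", "SA"): 21, ("offshore", "SE"): 19}
HEADCOUNT = {"SA": 0.5, "SE": 4.5}   # pyramid caps from the constraints

best_cost, best_mix = None, None
grid = [i / 20 for i in range(21)]   # onshore fraction per role
for f_sa, f_se in product(grid, repeat=2):
    r = {("onshore", "SA"): f_sa * HEADCOUNT["SA"],
         ("offshore", "SA"): (1 - f_sa) * HEADCOUNT["SA"],
         ("onshore", "SE"): f_se * HEADCOUNT["SE"],
         ("offshore", "SE"): (1 - f_se) * HEADCOUNT["SE"]}
    onshore = r[("onshore", "SA")] + r[("onshore", "SE")]
    offshore = r[("offshore", "SA")] + r[("offshore", "SE")]
    if offshore < onshore:           # X_offshore - X_onshore >= 0
        continue
    cost = sum(RATES[k] * v for k, v in r.items())
    if best_cost is None or cost < best_cost:
        best_cost, best_mix = cost, r

print(best_cost)  # → 96.0 (fully offshore mix)
```

Unsurprisingly the search degenerates to an all-offshore mix, since the rate differential dominates; in a real deal, extra constraints (minimum onsite presence, shift coverage) would pull work onshore.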

The above is approach 1 for a constraint model for an outsourcing deal... how do we do this with approach 2? What are the limitations of the above model?

We'll revisit these issues later... any ideas and recommendations?

Tuesday, July 8, 2008

Cost optimization objective for service management/delivery in an outsourcing engagement - A challenge

As in any optimization problem, the goal or objective of an outsourcing function is to minimize the cost of providing the service to the lines of business. Evolving an objective function is riddled with multiple factors, making it increasingly difficult to establish one. Typical factors of an objective function are:

1. Onshore-Offshore percentage

2. Choice of delivery location to perform the services in an outsourced environment

3. Composition of workforce to perform service delivery (managers, Analysts, Software engineers, etc.) and their cost

4. The number of shifts designed to perform service delivery

5. And several more (not sure what I'm missing here)

Modelling all of the above factors makes devising an objective function next to impossible. In this context, I was considering two approaches:

Approach 1: Why not consider all the above factors and design one single objective function? For example:

Min Σ_d Σ_a Σ_b X_d · C_a · R_b

where X is each location where service delivery is performed and d = {India, US, China, …}

C is the cost per shift per resource-pyramid level and a = {(Shift A, cost of Executives in location d), (Shift A, cost of Managers in location d), …, (Shift A, cost of Software Engineers in location d), (Shift B, cost of Executives in location d), …, (Shift B, cost of Software Engineers in location d), (Shift C, cost of Executives in location d), …, (Shift C, cost of Software Engineers in location d)}

R is the number of resources per resource-pyramid level in each shift and b = {(Shift A, # of Executives in location d), (Shift A, # of Managers in location d), …, (Shift A, # of Software Engineers in location d), (Shift B, # of Executives in location d), …, (Shift B, # of Software Engineers in location d), (Shift C, # of Executives in location d), …, (Shift C, # of Software Engineers in location d)}
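To make the triple sum concrete, here is a tiny sketch of evaluating the Approach 1 objective for one candidate staffing. The locations, shifts, grades and figures are all hypothetical:

```python
# Evaluate Min sum_d sum_a sum_b X_d * C_a * R_b for one candidate solution.

locations = ["India", "US"]

# C[d][(shift, grade)]: hourly cost; R[d][(shift, grade)]: headcount
C = {"India": {("A", "Manager"): 30, ("A", "Software Engineer"): 19,
               ("B", "Manager"): 33, ("B", "Software Engineer"): 21},
     "US":    {("A", "Manager"): 80, ("A", "Software Engineer"): 60,
               ("B", "Manager"): 85, ("B", "Software Engineer"): 65}}
R = {"India": {("A", "Manager"): 1, ("A", "Software Engineer"): 6,
               ("B", "Manager"): 1, ("B", "Software Engineer"): 4},
     "US":    {("A", "Manager"): 1, ("A", "Software Engineer"): 2,
               ("B", "Manager"): 0, ("B", "Software Engineer"): 0}}
X = {"India": 1, "US": 1}  # 1 if location d is used, else 0

cost = sum(X[d] * C[d][ab] * R[d][ab] for d in locations for ab in C[d])
print(cost)  # → 461
```

An optimizer would search over R (and X) subject to the constraints; the objective itself is just this sum.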

Approach 2: We peel the optimizing functions for each of the key parameters and individually optimize them.

I'm yet to figure out which is the best approach for arriving at an optimizing function. If I choose the former, the function becomes too complex, and modelling the constraints equally so. Will such a function even be solvable?

If I adopt Approach 2, then I'm not looking at it from a systemic perspective, and hence may inherently end up building a suboptimal objective function even while the individual parameters themselves are optimized.

Any thoughts on what could be the right approach?

Monday, July 7, 2008

Service management - Applying simple queue model to address service management challenge

A simple server-queue model can be applied to address some of the challenges identified in the post http://insightful-journey.blogspot.com/2008/07/service-management-issues-with-using.html:

The following step-wise approach can be adopted:

  1. Demand characterization is nothing but the λ for a unique combination of the service factors (service type, service scope, service domain and service category). The best way to represent such a set of demand characterizations is the mathematical notation below:

λ = λ_ijklm

i ∈ {Incident management, Problem management, Change requests/enhancements, …}

j ∈ {Level 1, Level 2, Level 3, Level 4}

k ∈ {Business, Infrastructure, Application, …}

l ∈ {Sev 1, Sev 2, Sev 3, Sev 4, …}

m ∈ {Java, .Net, Tibco, Oracle, …}

  2. Capacity for providing the service can be determined based on the service rate for each combination of service factors:

μ = μ_ijklm

where i, j, k, l and m range over the same sets as above.

  3. The optimal service level objective can be set via the following factors:
    1. Utilization percentage (as indicated above) of 70-90%, but not 100%. (Note that a model which is 100% utilized is unstable.)
    2. A healthy backlog of tickets to smoothen the demand-capacity gaps. This can be determined as the number of requests in queue for each combination of service factors.
    3. Balanced service time for service tickets and a cap on the improvement target. An uncapped service-improvement target leads to instability in the system and a disproportionate cost to maintain such a model.
    4. Assuming a typical utilization of 90%, we can determine the number of resources for each demand:

r_ijklm (number of resources) = λ_ijklm / μ_ijklm / 90%

Total number of resources R = Σ r_ijklm

This is a classical optimization problem to which one can apply constraint theory. (We will cover this in detail later.)

  4. Designing the optimal demand-supply model can be done by applying constraint theory for an optimal engagement.
    1. The objective function is the minimum resources to manage the engagement (note we are not using a function to minimize cost, due to the complexity introduced into the system by the workforce, the levels, the delivery centers utilized and so on).
    2. Constraints are defined by caps on service requests, backlogs and service time for each combination of service factors.
  5. Execution and sustenance of the service model requires one to have the right value stream map (a.k.a. service process); waste elimination by continuously removing non-value-added activities (for example, reducing the infused management team for a transition engagement); improving turnaround time for service tickets by measuring and optimizing the time for value-added activities (one can also apply other engineering techniques such as DSM, concurrent engineering, etc.); multi-skilling of resources through ongoing training; and so on.
  6. Continuous improvement involves driving transformation initiatives similar to the ones identified in point 5 above to reach the ideal state as determined by the server-queue modelling in step 2.

Your comments & thoughts welcome!

Service Management - Trivializing real-world complexity with a simple model!

The entire service management model can be based on simple server-queue modelling concepts.

Model Factors
The minimum factors that one needs to understand about the client's service management context for an item in the service catalogue are:

1. Service type and the associated processes: the activities that contribute to transforming the inputs (a.k.a. problems/incidents) into acceptable/predictable outcomes (a.k.a. resolutions/work-arounds)
2. Service scope (Level 1, Level 2, Level 3)
3. Service domain (Business, Application, Infrastructure, Tools, etc.)
4. Service locale (process such as supply chain, HR; functionality such as order management; and technology such as Java, .Net etc.)
5. Severity and Priority within each level (such as Sev 1, Sev 2, Sev 3, etc.)

For the above service context one needs to get the following measures to model the service:

1. Total number of service requests (classified across the above dimensions) over an interval of time. (Let's define this as λ.)
2. Number of service requests resolved and closed (classified across the above dimensions) over the same interval of time as in point 1. (Let's define this as μ.)
3. Number of service requests in backlog. (Let's define this as Lq.)

Simple Server-Queue Model concepts
Based on the above factors, a simple server-queue model as identified in the figure can be applied:



The simplistic model is premised on the following:

1. The mean arrival rate has a Poisson distribution (established statistically through a goodness-of-fit test such as the Kolmogorov-Smirnov test),
2. The service rate is exponential in nature (likewise established through a goodness-of-fit test),
3. All requests are homogeneous in nature,
4. All service is homogeneous in nature,
5. Service is rendered on a First Come, First Served (FCFS) basis.

But note that this is seldom the case and is a gross oversimplification of real-life scenarios. However, one can best understand the characteristic behaviour of a model in its simple form.
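The simple form referred to above is the single-server M/M/1 queue, whose steady-state behaviour has a closed form under exactly the Poisson-arrival / exponential-service / FCFS assumptions listed. A minimal sketch (the arrival and service rates below are hypothetical):

```python
# Closed-form steady-state metrics for an M/M/1 queue.

def mm1_metrics(lam, mu):
    """Utilization, mean backlog Lq, and mean wait Wq (requires lam < mu)."""
    if lam >= mu:
        raise ValueError("utilization >= 100%: the queue is unstable")
    rho = lam / mu                 # utilization
    lq = rho ** 2 / (1 - rho)      # mean number waiting (the backlog)
    wq = lq / lam                  # mean wait in queue, via Little's law
    return rho, lq, wq

# Hypothetical figures: 0.9 tickets/hour arriving, 1 ticket/hour served.
rho, lq, wq = mm1_metrics(0.9, 1.0)
print(rho, round(lq, 2), round(wq, 2))  # → 0.9 8.1 9.0
```

Note how steeply the backlog grows as utilization approaches 100%, which is why the earlier post caps utilization well below it.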


What say you?

Service management - Issues with using Analytical model

The service provider often finds it difficult to agree to an SLA right away due to several constraints:

1. Lack of a clear baseline/benchmark for the service prior to outsourcing, due to several factors:
a. Absence of management of the service through metrics
b. Absence of structure, processes and organization
c. Absence of systems and measurements
d. Absence of the right data in the systems (even where the systems exist)
2. High client expectations on improvement targets during outsourcing. This results in defining the target metrics at a level which the existing service organization has failed to achieve.

If a client's services are mature and the above constraints are not a bottleneck, then the challenge shifts to the following:

1. Characterization of demand (read as service requests)
2. Characterization of service (read as resolved service requests)
3. Planning the right capacity for providing service
4. Defining the optimal service level objectives for the lines of business and hence the service providers
5. Designing the right capacity-demand model to serve the clients
6. Executing and sustaining the performance of the organization to the defined service level objective, and
7. Continuous improvement on services to the clients and their lines of business

Do you agree? What other problems have you faced?

Service management - Do we have all the answers?

Clients often outsource service management to vendors to increase value without assuming ownership of costs and risks. The value captured by clients in outsourcing service management can be understood from several perspectives: cost, improved service to lines of business and so on (we will not delve into the details of value management here). The only way the client can ensure the utility and warranty (scalability, consistency, reliability, security, accuracy, etc.) of the service from an outsourcing supplier is by imposing strict service level measures through a Service Level Agreement (SLA). Non-compliance with the SLA often results in business impacts on the lines of business, and the client often imposes strict penalties for non-compliance.

The intent of a service provider is to sign up for an SLA limiting itself to controllable risk and manageable cost while making a sustainable margin. But it is not as easy as it sounds, for several reasons. To understand these reasons, every service provider should start by asking itself the following questions:

1. Do the client and the service provider understand all the problems faced in operationalizing an SLA?
2. What analytical models/techniques can one use to ensure the SLA problem and its solution are well understood and factored into the overall solution?
3. When, why and how should such a model be applied (and not applied) to an SLA?


Any thoughts...