Software Engineering Services Blog

Engineering Effectiveness: Analyzing Your Code Base

by Neil Fox, VP Strategic Consulting

In our past posts, we have discussed the three dimensions of engineering effectiveness: Process, Technology and Quality.

Ensuring that you have a defined process (I prefer Adaptive Agile) and that your tools and metrics are configured intelligently to support, and improve, that process is the easiest and most significant step you can take toward improving your overall software development effectiveness.

Start by making sure your tools are properly configured. Only then can you get accurate metrics that identify where you should focus your energy. Once you have “optimized” your process, tools and metrics to get the most out of your development process, then what? The answer is a little more complicated, because it depends on the nature of your development and your business objectives.

If yours is like many organizations we work with, you are focused on adding new functionality to existing applications. Two approaches will help you allocate more of your effort toward achieving this goal:

Examine Your Architecture

Start by examining your architecture. If you have an outdated and fragile design, a refactoring effort could pay huge dividends. It usually takes time and a high degree of expertise to do this right, but skipping it can waste large amounts of your time and your company's money.

I recommend finding your best architect and carving out a couple of weeks to examine the current architecture and recommend potential improvements. The decision to refactor, purchase new technology or completely rewrite your application is a big one. 

Making this decision requires dedicated analysis of the current code base and of available applications and platforms, as well as drafting a potential new architecture. Be sure your architect considers new technologies; depending on your business strategy, you could even consider rewriting the application for the cloud, with access from mobile devices such as smartphones and tablets.

Use High-quality Code Analysis Tools To Help

If you are having trouble with performance and/or fragile code, I would suggest using a high-quality code analysis tool such as CAST (www.castsoftware.com), PMD (http://pmd.sourceforge.net) or Clover (http://www.atlassian.com/software/clover/).


Figure 1: CAST Approach to Code Analysis

These tools, some of which integrate into your build process, provide specific information about areas of high complexity, duplicated code, potential defects, and automated test code coverage. Integrating tools like these into your development process will save significant time and effort later in the cycle, when such issues are more difficult and expensive to address.


Figure 2: Sample Clover Analysis

Several of these tools will also integrate into your development environment, providing real-time code analysis. Reducing errors and poor code during initial development saves considerable effort later, when the application is largely complete or even deployed in the field.
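To make the build-integration idea concrete, here is a minimal sketch of a quality gate that reads a static-analysis report and fails the build when blocker findings appear. It assumes a PMD XML report named pmd-report.xml whose <violation> elements carry a priority attribute (this matches PMD's standard XML output, but verify against your version); the threshold policy is illustrative.

```java
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;
import javax.xml.parsers.DocumentBuilderFactory;

import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

/** Reads a static-analysis XML report and fails the build when blocker violations appear. */
public class AnalysisGate {
    public static void main(String[] args) throws Exception {
        // Assumed report path; PMD's XML output nests <violation priority="..."> under <file>.
        Path report = Path.of(args.length > 0 ? args[0] : "pmd-report.xml");
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(report.toFile());

        NodeList violations = doc.getElementsByTagName("violation");
        Map<String, Integer> byPriority = new HashMap<>();
        for (int i = 0; i < violations.getLength(); i++) {
            Element v = (Element) violations.item(i);
            byPriority.merge(v.getAttribute("priority"), 1, Integer::sum);
        }
        byPriority.forEach((p, n) -> System.out.println("priority " + p + ": " + n + " violation(s)"));

        // Illustrative policy: any priority-1 finding fails the build.
        if (byPriority.getOrDefault("1", 0) > 0) {
            System.err.println("Blocker violations found - failing the build");
            System.exit(1);
        }
    }
}
```

Wired into a continuous integration job, a gate like this turns the analysis report from a document someone might read into a policy the build actually enforces.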

So, if you have been following along and taking some of my advice, you should now have a robust development process that follows best practices for your methodology, systems configured to support your process, metrics that identify improvement opportunities, a solid architecture that supports your business objectives, and clean code that delivers a high-performing experience to your users.

See Neil Fox's previous posts:

Software Effectiveness: How and What to Measure?

How Effective is Your Software Development?

Optimizing Your Development Process

Software Effectiveness: How and What to Measure

By Neil Fox, VP Strategic Consulting

Perhaps the most critical step in optimizing your software development effectiveness is gaining agreement on what exactly you will use as a measure of effectiveness. Additionally, several metrics can provide leading indicators for your primary measure(s). Once you have internal agreement on meaningful metrics, you must find the best approach to automating their collection and reporting so that they are valid, normalized and trusted. Without a simple metrics lexicon that maps to your roles and systems, the data you collect and present will be marginalized.

Let’s examine the most popular scenarios and implementations.

Early stage product development: Focus on time to market and maximizing feature delivery

Most startups and companies bringing new products to market have a laser focus on getting the “core functionality” — those features that they consider disruptive — to their target market as quickly as possible, because delays in delivery or missing any of the core functions can have implications far beyond cost. While few startups bother to actually measure the effectiveness of their team, taking even a simple approach to understanding where the team's effort goes can materially improve their chances of success.

In this case, I would suggest taking at least one page from Agile practices: force-rank the desired features and assign each a business value. Features may take the form of requirements, epics, themes or stories — it's really up to you — because the key thing is that assigning a priority and a value to each feature gives the team clarity about which are the most critical and valuable to the company. At least weekly, the development team should review progress and priorities with the key business stakeholders to ensure alignment. A simple value burn-up chart helps to communicate progress.

Figure 1: Example Value Chart from Jira
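The data behind such a chart is simple to produce. Here is a minimal sketch, with a hypothetical Feature record and hard-coded sample data standing in for whatever your backlog tool exports:

```java
import java.time.DayOfWeek;
import java.time.LocalDate;
import java.time.temporal.TemporalAdjusters;
import java.util.List;
import java.util.TreeMap;

/** Computes cumulative business value delivered per week - the data behind a value burn-up chart. */
public class ValueBurnUp {
    // Hypothetical backlog item: rank and business value assigned up front; acceptedOn is null until delivered.
    record Feature(String name, int rank, int businessValue, LocalDate acceptedOn) {}

    public static void main(String[] args) {
        List<Feature> backlog = List.of(
                new Feature("Checkout flow", 1, 40, LocalDate.of(2024, 3, 6)),
                new Feature("Search", 2, 25, LocalDate.of(2024, 3, 14)),
                new Feature("Wishlist", 3, 10, null)); // prioritized but not yet delivered

        // Bucket delivered value by the Monday of the week it was accepted.
        TreeMap<LocalDate, Integer> valueByWeek = new TreeMap<>();
        for (Feature f : backlog) {
            if (f.acceptedOn() != null) {
                LocalDate week = f.acceptedOn().with(TemporalAdjusters.previousOrSame(DayOfWeek.MONDAY));
                valueByWeek.merge(week, f.businessValue(), Integer::sum);
            }
        }

        // Accumulate week over week - each point is one step on the burn-up chart.
        int cumulative = 0;
        for (var e : valueByWeek.entrySet()) {
            cumulative += e.getValue();
            System.out.println("week of " + e.getKey() + ": value delivered to date = " + cumulative);
        }
    }
}
```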

Mature companies: Focus on lowering the cost of sustaining products

Installed or legacy systems can be a steady source of income for mature and maturing companies, but in order to keep customers happy while maintaining reasonable margins, the cost of sustaining these products needs to be kept under control. Understanding the effort/cost of fixing existing defects is one of the most common approaches to measuring product sustenance effectiveness, but it is not the only option. As an alternative, I would recommend looking at cycle time for update releases of various priorities. Depending on the nature of your application, customers may want more or less frequent delivery of updates, and they may want defect fixes and updates either isolated or bundled together.

In this scenario, I would suggest calculating your cost per defect fix (in perhaps three categories of complexity) as well as defect reopen rates, defect leakage and release cycle time. These are fairly standard metrics that most defect tracking systems can produce with ease, provided that the data is captured and standardized effectively.
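As a sketch of what those calculations might look like, the following assumes defect records exported from your tracker; the Defect fields and sample numbers are illustrative, not any real tracker's schema:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

/** Sustenance metrics from closed defect records: cost per fix by complexity, reopen rate, leakage. */
public class SustenanceMetrics {
    // Hypothetical export from a defect tracker; the fields are assumptions, not a real schema.
    record Defect(String id, String complexity, double fixCostHours, boolean reopened, boolean foundByCustomer) {}

    public static void main(String[] args) {
        List<Defect> closed = List.of(
                new Defect("D-101", "simple", 2.0, false, false),
                new Defect("D-102", "medium", 8.5, true, false),
                new Defect("D-103", "complex", 30.0, false, true),
                new Defect("D-104", "simple", 1.5, false, false));

        // Average fix cost in each of the three complexity categories suggested above.
        Map<String, Double> avgHoursPerFix = closed.stream().collect(Collectors.groupingBy(
                Defect::complexity, Collectors.averagingDouble(Defect::fixCostHours)));
        System.out.println("avg hours per fix: " + avgHoursPerFix);

        // Reopen rate and leakage (share of defects found by customers) across all closed defects.
        double reopenRate = closed.stream().filter(Defect::reopened).count() / (double) closed.size();
        double leakage = closed.stream().filter(Defect::foundByCustomer).count() / (double) closed.size();
        System.out.printf("reopen rate: %.0f%%, defect leakage: %.0f%%%n", reopenRate * 100, leakage * 100);
    }
}
```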

One obvious way to lower the cost of supporting your existing applications is to leverage a lower-cost partner or partners to assume responsibility for this work. However, in order to realize the benefits of this kind of cost arbitrage, you have to select a partner who is capable of doing the work effectively and at the level of quality that your existing clients expect. One advantage of measuring your current effectiveness is that it provides a benchmark against which to measure the effectiveness of potential or current partners, who may be in a position to deliver, for example, a 30-50% reduction in your cost per defect fix.

Mature companies: Focus on the Introduction of New Products

As Clayton Christensen outlined in his book The Innovator's Dilemma, many companies become so bogged down in servicing their existing customers that they cannot apply enough resources to drive new innovations. If they want to grow, mature software companies generally can't rely on selling their legacy products to new customers; they have to introduce new, innovative products, which actually puts them in a situation similar to that of startups. Accordingly, the broad measurement they can use to assess the effectiveness of their development process is, as we mentioned above, time to market and feature delivery. There is also a measure growing in popularity called the new product Vitality Index. Simply stated, this is the share of revenue attributed to new products introduced in the past 12 months — not product enhancements, but newly created IP. Understanding how much of the innovator's dilemma you are experiencing annually is a good starting point for choosing the measures that address it.
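A toy calculation of the Vitality Index, assuming you can attribute trailing-twelve-month revenue to each product (the product names and figures are invented):

```java
import java.time.LocalDate;
import java.util.List;

/** Toy Vitality Index: share of trailing-twelve-month revenue from products introduced in that window. */
public class VitalityIndex {
    // Invented portfolio; in practice these figures come from your revenue reporting.
    record Product(String name, LocalDate introduced, double trailingYearRevenue) {}

    public static void main(String[] args) {
        LocalDate cutoff = LocalDate.now().minusMonths(12);
        List<Product> portfolio = List.of(
                new Product("Legacy suite", LocalDate.of(2005, 6, 1), 42_000_000),
                new Product("Analytics add-on", LocalDate.now().minusMonths(7), 6_000_000),
                new Product("Mobile app", LocalDate.now().minusMonths(3), 2_000_000));

        double total = portfolio.stream().mapToDouble(Product::trailingYearRevenue).sum();
        double fromNewProducts = portfolio.stream()
                .filter(p -> p.introduced().isAfter(cutoff))
                .mapToDouble(Product::trailingYearRevenue)
                .sum();
        // Here: 8M of 50M, a Vitality Index of 16%.
        System.out.printf("Vitality Index: %.1f%%%n", 100 * fromNewProducts / total);
    }
}
```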

Process Maps

A simple process map is a good tool for understanding, and then communicating, the metrics you are looking to collect and the systems and roles that impact those measures. I like to diagram the contributors to each artifact and metric using a vertical chart like the one below.

Current Development Process

Measuring Business Agility

As attendees will undoubtedly hear at our upcoming Executive Networking Event, "business agility" is the leading reason IT decision makers give for cloud adoption.

However, as I discussed a few weeks back, no technology can by itself make an organization more agile. Certainly, technology can be an enabling factor, but its adoption must be accompanied by an equally enabling organizational change if true business agility is to be attained.

One aspect of organizational change that we have consistently advocated involves improved engineering effectiveness. Of course, improving an engineering organization's effectiveness requires first that one define "effective" and then that one put appropriate measurements in place so that actual improvement can be demonstrated.

While it strikes me as intuitively obvious to assert that more effective engineering practices will necessarily make a business more agile, such an assertion inevitably raises a very important question: How does one actually measure business agility?

Here are a few ideas:

  • Time to Market - This is one metric that Kamesh Pemmaraju suggested when I asked him this selfsame question recently. Showing improvement in this area would require establishing benchmarks that accurately determine both the starting point of product development as well as its end point (release 1.0? 1.1? etc.). It would also call for calculating the benefits of getting to market earlier, benefits that would naturally vary based on the competitive landscape, the maturity of the market, and other factors.
  • Profit - Michael Hugos defined business agility as "the ability to consistently earn profits that are 2 – 4% higher than the market average" (he refers to this as "the agility dividend"). I agree that, to be meaningful, business agility should have an impact on profitability (not to mention revenue). I also agree with Hugos' distinction between "self-sustaining" agility (i.e., agility that pays for itself through increased efficiency and productivity) and the "self-consuming" sort (agility purchased solely via cost-cutting). However, I'm not convinced that comparisons to market averages are that useful. More importantly, I don't know that profitability is an appropriate measure of other aspects of business agility such as the ability to diversify product mix or, as mentioned above, speed time to market.
  • Real Options - Agility should mean that you can do new things more quickly as well as adapt to changing market circumstances more efficiently. In order to measure this kind of agility, the focus needs to fall on something intangible: Real Options for Future Action. In other words, the true value of agility resides in what it allows you to do. The way to capture that, as detailed in this Oracle white paper, is to focus not only on things like cost of implementation and TCO but also on things like "cost of change." For example, it may be less expensive to implement a particular hardware solution than to invest in a robust service oriented architecture. However, when change inevitably comes, it may be more expensive to modify the former solution than the latter. Thinking this way begins to allow you to place a value on the real options afforded by a particular solution. And if agility is about anything, it's about having options. (A toy cost comparison follows this list.)
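As a toy illustration of pricing in the cost of change (all names and figures below are assumptions, not from the white paper):

```java
/** Toy comparison of two solutions once an expected cost of change is priced in (all figures invented). */
public class CostOfChange {
    record Option(String name, double implementationCost, double costPerChange) {}

    public static void main(String[] args) {
        Option pointSolution = new Option("Point solution", 200_000, 150_000);       // cheap now, rigid later
        Option serviceArchitecture = new Option("Service architecture", 500_000, 20_000); // dearer now, flexible later
        int expectedChanges = 5; // business-driven changes expected over the planning horizon

        for (Option o : new Option[] {pointSolution, serviceArchitecture}) {
            double lifetimeCost = o.implementationCost() + expectedChanges * o.costPerChange();
            System.out.printf("%-20s up-front: %,10.0f  with %d changes: %,10.0f%n",
                    o.name(), o.implementationCost(), expectedChanges, lifetimeCost);
        }
        // The option that was cheaper to implement ends up costing more once change is priced in.
    }
}
```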
So, how would or do you measure business agility?

Measure the Impact of Your Tech R&D Spend, Not the Amount

by Glenn Gruber, AVP Travel Technologies

(This post originally appeared on Glenn's blog, Software Industry Insights.)

Just the other week, IDC star analyst Mike Fauscette wrote a post on a topic near and dear to my heart: What are the right measures of your R&D spend? I submit this is an extremely important topic for any software company, and no less so for companies in the travel space. Even if you’re not in the business of selling software, technology is increasingly important for travel companies looking to increase bookings and margins and to deliver a great experience to their customers.

The context for Mike's post was a presentation he and several other analysts received at the Oracle OpenWorld financial analysts summit.  Oracle was trying to demonstrate their commitment to innovation and keeping their technology at the forefront of the industry (and specifically ahead of SAP and HP).

Fauscette noted that instead of using a common metric – percent of revenue – Oracle used raw spend. While both are commonly cited, I don’t think either is a measure companies should use to evaluate the effectiveness of their spend. Mike found both measures wanting, as they have no direct linkage to the performance of the underlying business:

“I could spend bunches of $$ and research and develop lots of things that nobody wanted and while my spend as a % of revenue would be very high (and probably increasing as my revenues fell through the floor, at least for a little while), I could never call that success.”

We need to come up with ways to measure the effectiveness of your R&D spend, not just the amount. Specifically, I’d like to help you better pinpoint whether the activities you’re pursuing are helping meet the objectives of your business.

What I’m going to do is talk a little about types of R&D and then discuss what metrics you ought to be using to evaluate what you spent on them.

Why Measure R&D Spend at All?

Good question. If you’re a public company it may be a reporting requirement or a metric that financial analysts use to forecast your stock price and set their ratings.

Some may say that it’s not even worth measuring, or at least not very deeply. They’ll say it’s hard to do. They may trot out the old line about advertising expenditure: “Half of our budget is working great. I just don’t know which half.”

Or they may just say that they’re staying within their budget, so leave them alone. This is the single most important reason why I think that Percent of Revenue is the worst possible metric. Many companies “set” their budgets based on a percent of revenue. No other rationale. Now I’ll say this — % of revenue is easy to do and easy to measure, but it doesn’t tell you anything.  Helluva way to run a railroad.

But you’re not one of “them”, right? You’re smarter than that.

Big R, Little R

As I stated before, the metrics you use must support the business objectives you’re trying to achieve. And so you must first understand how your R&D expenditures support those aims.

R&D is a term that is often misused and misunderstood. In the classical sense, Research (what I call “Big R”) is an effort to explore and create advanced technology which may or may not have a direct impact on today’s business, while Development is the industrialization of new technology into products for sale. However, many companies mistakenly conflate the two terms. Thus when many companies refer to R&D, they’re talking mostly about development activities, which I’d call “Little R”.

Similarly, many companies misuse the term "innovation." Clayton Christensen segments innovation into “disruptive” and “incremental." Disruptive innovations alter the status quo in the industry – think: iPod, iPhone, Software-as-a-Service (e.g. Salesforce.com), and Cloud Computing.  Incremental innovations are just what they sound like; they move the ball forward, but not dramatically (e.g. Microsoft Office 2010).

The truth is that most companies spend the majority of their resources on bug fixes and feature enhancements, simply trying to hold on to customers and revenues via a traditional upgrade cycle, while trying to convince others (and maybe themselves) that the new versions incorporate many innovations (“New and Improved!” “Your shirts will be 10% whiter!”). But in most cases these are merely features masquerading as innovation.

How much you spend on Big R vs. Little R, or on disruptive vs. incremental innovation, is a strategic decision you must make first.

And there aren’t any hard-and-fast rules for how much you should be spending, either in the aggregate or on specific products. Much of that depends on:

  • Organizational Maturity: For example, startups should spend much more, proportional to revenues, than established companies.
  • Scale: You can’t simply benchmark your % of revenues against Oracle if you’re a $100M company. You may want to compete with larger companies in the marketplace, but don’t enjoy the economies of scale that these competitors have. So don’t try to benchmark blindly against them.
  • Business models: This is the “apples to oranges” discussion. Different companies have different revenue models. A company that pins growth on new license sales should look at investment rates differently than a company that’s dependent on software maintenance. And different still are companies with long-tail, recurring revenue, primarily SaaS companies, who use a subscription or usage-based model.

Once you have your strategies and objectives in place – and it’s critical that the objectives are tied to achieving over-arching business goals and not merely to pursuing technology for technology’s sake – it’s important to measure the progress you’re making, which leads us to our last section.

What Are the Right Metrics?

There are of course many metrics which can be used in evaluating the effectiveness of your R&D expenditures. Let me name a few, some of which I’ll debunk, others I’ll suggest you add to your list if you don’t already use them:

  • Often Used, Marginally Valuable
    • % of revenue: As noted at the top, not really valuable except as a gross and inaccurate way to compare one company’s spend with another’s. Or simply a way to build a top-down budget.
    • # of patents: Another often used metric, yet mostly directional in value. Many companies use this metric to try to gauge how “innovative” they are. But the question is really how many of these patents actually impact the business.  Do they help drive revenue or control costs? A patent, or any new feature, that isn’t monetized doesn’t have any intrinsic value and falls into the category of an invention (cool new thing) rather than an innovation (cool new thing that customers want and are willing to pay for).
  • Revenue- and Margin-based. This is where it actually gets interesting. Are the fruits of your labor actually improving the health of the business?
    • Revenue: Pretty basic. Is it going up, down, or stagnant? If it’s either of the latter two, you’re probably not spending your resources on building products or services that meet your target customers' needs.
    • Vitality Index: Revenue is a very gross measure and there are many factors that impact it beyond R&D spend, thus making it less valuable. So let me introduce you to a concept you likely haven’t heard before, the Vitality Index (VI). VI is a measure of how much of your revenues are driven by products or services introduced in the past year (which are more likely a product of your current R&D spend). The higher your VI score, the greater the direct impact your R&D is having on business growth. The other benefit of this measure is that new products generally return higher margins than older products, so the higher the VI, the better the long term profit prospects of the company.
    • Customer Retention/Churn Rate: This is extremely important as it’s far more costly to acquire a new customer than keep one. It’s also more indicative of energy spent on bug fixes and new feature introduction than disruptive innovations.
  • Cost-Based.
    • Cost of Rework as a % of Total Budget: This is a great one because it highlights wasted energy. By definition this activity adds no value to the organization. It may help reduce attrition from angry customers, but it will not add a single customer to the business. Expecting rework to be zero is not reasonable, but like golf, the lower your score, the better. So watch this for trends and use it to identify inefficiencies in your development organization. (A minimal sketch of this metric and of schedule variance follows this list.)
    • Defect Injection Rate: The number of total known defects discovered during a product development cycle. This is the flip side to re-work as each of the defects ought to be fixed, although many are often not because they don’t rise to a level of importance (i.e. impact on sales) that merits the effort. But it is an important indicator of your engineering effectiveness and is what generates the high cost and wasted effort of re-work, as noted above. Then there’s the matter of where those defects came from (bad requirements? bad coding?), but that’s a whole ‘nother post.
    • Defect Leakage: Worse than the number of defects that you find is the number of defects your customers find. That is, as long as they are still customers. If this happens too frequently you can expect real (negative) impacts on customer satisfaction, customer retention, and your corporate reputation as a reliable provider of technology.
    • Variance to Budget by Product/Initiative: Self-explanatory, but it’s important to look at the performance at the detail level rather than in the aggregate. It will help you identify underperforming teams.
    • Variance to Release Schedule: Extremely important as missed release dates provide a black eye for the organization and represent lost revenue opportunities that can’t be recovered.  It’s not a strict financial measure but has a direct financial impact on the company.
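Here is a minimal sketch of two of the metrics above, with invented inputs standing in for your time-tracking and release data:

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

/** Two cost-based metrics from the list above, computed from invented time-tracking and release data. */
public class CostMetrics {
    public static void main(String[] args) {
        // Rework as a % of total budget: hours spent re-fixing, re-testing, and re-documenting.
        double totalHours = 12_000;
        double reworkHours = 1_800;
        System.out.printf("Rework: %.1f%% of total budget%n", 100 * reworkHours / totalHours);

        // Variance to release schedule, in days (positive means late).
        LocalDate plannedRelease = LocalDate.of(2024, 9, 1);
        LocalDate actualRelease = LocalDate.of(2024, 9, 19);
        long slipDays = ChronoUnit.DAYS.between(plannedRelease, actualRelease);
        System.out.println("Schedule variance: " + slipDays + " days late");
    }
}
```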

What’s your POV? Are you using these metrics? Do you have others that you’d like to share? Will you do anything different tomorrow than you did today?

NB: Hat tip to Dr. Jerry Smith, a former colleague who helped me develop some of my thoughts around R&D metrics and introduced me to the concept of the Vitality Index.

Glenn Gruber is AVP of Market Development for Travel Technologies at Ness. He can be contacted at: glenn.gruber@ness.com.

How Do You Grow without Hiring People or Investing in R&D?

Sarah Lacy over at TechCrunch wrote a post called "Global Economy=Good; Venture Economy=Not so much," in which she highlighted the results of a recent survey, conducted by DLA Piper, stating that 85% of surveyed tech execs are optimistic about the current state of the global economy. This optimism was reinforced, in part, by 72% saying that they expect sales to grow in the coming year.

What was puzzling about this "rosy" outlook was that only 47% said that they were planning on hiring any time soon and, what's more, 62% expect no change or even a decrease in R&D spending.

"It seems like people are hoping that just treading water with respect to product enhancements will somehow lead to higher sales. Like there’s a massive wave of buying that they’re going to ride. It’s nice to be optimistic about the economy, but we’re not likely to see any massive spending sprees," Glenn Gruber told me while talking this over.

"If someone is looking to really grow, they can't let their product atrophy or just muddle along because, frankly, there's any number of companies out there –startups or established companies – who are going to try to eat their lunch. ‘Good enough’ no longer is.

"Technology companies are going to have to work as hard as ever to earn their customers’ business. Buyers are not lowering their expectations of product features and quality just because the economy is rebounding. They’re going to be as unforgiving in their evaluations as they were a year ago, they just may be in a position to pull the trigger now.  And technologies like Cloud Computing are putting downward pressure on prices and raising expectations of value. It seems like a wrong-headed approach if you want to take advantage of an economic upturn."

Cutting back on R&D is a particularly dangerous path to choose if you happen to be in the software industry where you need constant R&D just to maintain your existing installed base. One Forrester report stated bluntly that, "Customers will cancel maintenance completely if they aren't getting value-for-money because the vendor has cut R&D in line with falling revenue." (To put the impact of such a loss in perspective, a 10% loss in maintenance margin requires a 60% increase in license sales to be made right again!)
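One way to see how a figure like that could arise (the margins below are illustrative assumptions, not from the Forrester report): maintenance is typically a very high-margin revenue stream, while net margin on new license sales is far lower, so a small maintenance loss takes a large license gain to replace.

```java
/** Back-of-the-envelope check on the maintenance-vs-license trade (all margins are assumptions). */
public class MaintenanceMath {
    public static void main(String[] args) {
        double maintenanceRevenue = 10_000_000; // assumed annual maintenance base
        double licenseRevenue = 10_000_000;     // assumed comparable license base
        double maintenanceMargin = 0.90;        // maintenance is typically very high margin
        double licenseMargin = 0.15;            // assumed net margin on new license sales

        double lostProfit = 0.10 * maintenanceRevenue * maintenanceMargin; // losing 10% of maintenance
        double requiredLicenseGrowth = lostProfit / (licenseRevenue * licenseMargin);
        // With these inputs: 900,000 / 1,500,000 = 0.60, i.e. a 60% increase in license sales.
        System.out.printf("Required license sales increase: %.0f%%%n", requiredLicenseGrowth * 100);
    }
}
```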

In other words: You have to invest in R&D. Which means that, even when holding budgets flat or contracting rather than expanding, you have to maximize the return on any money you are actually spending.

There are basically two ways to do that:

1) Increase engineering effectiveness so that you can get the best output from your existing engineering resources, which usually starts with having the right measures in the first place, as Neil Fox has written on this blog. "Companies in this economy are forced to do more with less," Neil says. "If their budgets are flat and they do not find ways to increase their effectiveness, then they are circling the drain."

2) Get more for your money by leveraging offshore resources (or a combination of offshore and onshore). The intriguing possibility here is that, rather than trying to do more with the same spend, you could simply do the same for less money, thus increasing your operating margin.

So, if you are optimistic about the future and looking forward to increasing sales, how are you going to handle the need for R&D in order to sustain growth?

 

Image Source: speckham.

Video: Rally's Todd Olson on the new Enterprise Integration Framework

Last week Rally Software announced that they were extending their popular Agile Application Lifecycle Management (ALM) platform with features that allow for more customization and deeper integration with an enterprise's existing systems and processes.

The intrepid Phil Marshall interviewed Rally's Todd Olson about this announcement and its implications for software development organizations at Agile 2010. In this video, Todd emphasizes the following:

  • Companies that are undertaking Agile initiatives want to be able to integrate artifacts resident in their existing systems (for quality management, defect management, CRM, etc.) with Rally's tools. The platform's bi-directional synchronization capabilities allow them to do just that. This is particularly valuable when we're talking about distributed environments where these systems may actually exist in far-flung locations.
  • "There is no 'one size fits all' Agile." This means that the ALM platform needs to be tailored to fit the specific needs of the organization. The recently updated Application SDK enables organizations to create, or enlist the services of others to create, apps that extend the platform via enterprise dashboards or in support of hybrid (Agile alongside Waterfall) process models, for example.
  • Organizations want to be able to manage and measure engineering effectiveness through visualization of their work at increasingly granular levels, which the filtering features of the AgileZen Kanban board make possible.
  • The biggest change in the field that Rally has been seeing is "larger scale Agile." As Todd says, "I think people have figured out how to do Agile on the one to two team basis. But now when you have 100 teams doing coordinated development of both hardware and software products, how do you do that in a meaningful way? How do you manage that? That's a hard problem to solve."

If you'd like to see the interview in its entirety, you may do so now:

Universal Agreement: Development Centers Need to Demonstrate Value

In addition to interviewing Vamsee Tirukkala, Phil Marshall also had time at Zinnov's Congruence 2010 conference to interview Ness' Neil Fox on the topic of "Measuring Engineering Effectiveness" (which Neil has addressed on this blog here and here).

At the conference, Neil says he encountered universal agreement that the era of outsourcing software development to India in order to reduce costs has plateaued. In its place, he says, is an ongoing discussion around global development and the ability for development partners in India and elsewhere to demonstrate the value they can provide.

In spite of this universal agreement, however, Neil found nothing but discussion and debate surrounding the best way to actually measure said value.

What Ness brings to the fray, according to Neil, is a methodology and a framework for measuring this value across three different dimensions: process optimization, quality optimization, and technology optimization. After presenting this methodology at the conference, Neil got a lot of very positive feedback because, frankly, no one yet has been able to articulate a framework for measuring effectiveness in this way.

Neil sees the beauty of Ness' approach in the fact that it does not offer one prescriptive model of value for every organization. As he says, "Some companies value innovation, some companies value time to market, some value quality or number of features delivered, and some value cost savings."

"Regardless of their particular model of value," he continues, "Ness' engineering effectiveness framework really allows them to align their business objectives—growth, savings, output—with a model that measures those elements and helps them achieve their goals."

Here are Neil's remarks in their entirety:

Do you have specific questions about Ness' approach to engineering effectiveness? Leave a comment!

The Challenge of Measuring Software Development Effectiveness

by Neil Fox

Is software development really different from any other business function?

Think about it for a moment. Companies continually work to improve sales effectiveness, manufacturing effectiveness, operational effectiveness, etc., and as a result nearly every aspect of an organization can measure just how effective it is at performing its function. So why not software development?

Why we don’t measure development effectiveness

There are three primary reasons that our industry has been unable to address this challenge:

  1. Effectiveness depends on business context. There are times when companies must focus on getting their product out the door (time to market). As they mature, they become more focused on feature development or product enhancement. Later in their lifecycle, priorities shift to maintaining legacy applications and driving innovation. All of these activities are inherently different and require different approaches to be performed effectively but we tend to measure them with the same yardstick: Did we meet the objective or not? The alternative here would be defining effectiveness for each business context or phase in the lifecycle and then measuring performance accordingly.
  2. Lack of a standard measure of output. It is quite difficult to understand how much work a team has produced if there is no real way to measure its output. For a brief period, organizations used KLOC, or thousands of lines of code produced. It wasn’t long before people realized that this encouraged poor development practices and really bad code. Again, I recommend finding a reasonably objective and implementable standard that applies to the work at hand. Depending on the nature of the work, you might consider using a standardized scale of complexity points, story points, ideal hours, function points, or even the number of defects fixed. Ultimately, to measure effectiveness in a meaningful way, you need a standardized unit of work that is relevant to the goal. (A minimal sketch follows this list.)
  3. Measurement drives behavior. It’s been demonstrated again and again that people will align their behavior to factors that are measured. If you measure the number of story points delivered, then your team will knock out story points, nearly to the exclusion of all else. The same holds true if you measure defects, estimation accuracy, or anything else. The trick to measuring effectiveness is choosing metrics that will encourage the right behavior but not distract from the ultimate outcome: a successful, timely, and bug-free release. After all, the metrics are a guide and an early warning system, not the goal of the development process.
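As a small sketch of such a standardized measure, here is a velocity trend normalized by working days, using invented iteration data:

```java
import java.util.List;

/** Velocity in standardized points, normalized by working days so short iterations stay comparable. */
public class VelocityTrend {
    record Iteration(String name, int pointsCompleted, int workingDays) {}

    public static void main(String[] args) {
        List<Iteration> history = List.of(
                new Iteration("Sprint 14", 34, 10),
                new Iteration("Sprint 15", 29, 9), // short sprint (holiday week)
                new Iteration("Sprint 16", 38, 10));

        for (Iteration it : history) {
            System.out.printf("%s: %.2f points per working day%n",
                    it.name(), (double) it.pointsCompleted() / it.workingDays());
        }
    }
}
```

Normalizing by working days is one way to keep the unit of output stable across iterations of different lengths; whatever unit you choose, the point is that it stays consistent over time.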

The goal of software development

Generally, we can agree that the goal of software development is to create new, high quality products and features for end users (though sometimes those end users are other systems). To achieve that goal most businesses want to understand how quickly they can deliver a set of functionality to their target market with the highest total quality. Measuring effectiveness in a relevant way means focusing on those activities that have the greatest impact on time to market and quality.

Software quality may be judged on a range of attributes. To ensure and promote effectiveness, the attributes of quality we want to measure should include, but not be limited to: technical defects; functional defects; security; reliability; performance; scalability; usability; and even something as elusive as “innovation.” Of course, to serve as standards of measure, each of these attributes needs to be translated back into the practices and activities that influence them.

And making the connection between the desired outcome (quality software) and the activities most likely to produce it is where the real work comes in.

We’ll dig deeper into the details of what to measure and how in the next entry.

Neil Fox heads the strategic consulting unit at Ness. His team partners with clients to maximize return on technology investments. He was an early member of Macromedia’s team, a pioneer of Internet technology while at TRW in the 90’s (now part of Northrop Grumman), and has led large software development efforts for most of his 25+ year career. He can be contacted at Neil.Fox@ness.com.

Software Development Ineffectiveness Costs Companies Millions

By Neil Fox, VP, Strategic Consulting

Why measure the software development process?

There are those who stand firm in their opinion that the process of software development cannot and should not be measured. Why not? Because, so the argument goes, the results are sufficient evidence of the productivity of the team and the efficiency of the process itself.

As you might expect, these views are generally held by technology professionals who just do not want to be scrutinized and would like the business to focus on the end result rather than on how that result was achieved or whether it was possible to deliver more, faster or better.

You must be able to demonstrate the effectiveness of the development process

The software development industry is fairly young (only a few decades old), and for much of that time it was considered something of a black art by those on the business side. As the industry has matured, however, this “please ignore the man behind the curtain” approach no longer works. Business managers investing in technology want to know what they’re getting for their money. In order to demonstrate serious discipline and rigor to the business, we need to be able to discuss, both qualitatively AND quantitatively, our current effectiveness, as well as describe our concrete plans to improve effectiveness in the future.

Increasing competition in the software industry drives business interest in ROI and software development effectiveness

The software development landscape has become radically more competitive than even during the dot-com bubble. Software development groups now compete with established and emerging companies as well as a growing community of global partners and providers. If development can be done elsewhere better and for less money, or a company can buy a solution rather than incur the risk and expense of building applications, it makes the most sense for them to do so. Therefore, in order to be relevant, and to be considered a viable option, we must be able to articulate our effectiveness.

The conversation starts with measurement

I am convinced that software development leaders who are able to measure, discuss and improve their teams’ effectiveness will have a distinct advantage, provide increasing value, and attract more investment as our industry continues to mature.

In our next post we will discuss the definition and components of software development effectiveness.

Stay tuned!


Image Source: lissalou66 
