Why do we need so much software?

Software is everywhere, but you can’t see it.  You know it’s in your phone, your computer, your home appliances and your electric meter, but do you know why?  This article explores the reasons for the explosion of software.


Computers have taken over many functions that used to be performed by other equipment and by people.  While computers were originally developed to compute, they now control, communicate and manage things that require much more than just “computing.”

Moore’s Law is the term used to describe the geometric increase over the past 50 years of the number of electronic digital circuits that can be placed on a fixed-size piece of silicon.  A corresponding decrease in the cost of those circuits has driven the digital revolution – replacing nearly everything that used electrical or electronic circuits with their digital equivalent.
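To make the arithmetic of that geometric increase concrete, here is a minimal sketch in Python; the starting transistor count and the two-year doubling period are illustrative assumptions, not figures from this article:

```python
# A rough sketch of Moore's-Law growth: circuit counts doubling roughly
# every two years.  The starting count (approximately the Intel 4004,
# 1971) and the doubling period are illustrative assumptions.
def transistors_after(years, start=2300, doubling_period=2):
    """Estimate transistor count after `years` of doubling growth."""
    return start * 2 ** (years / doubling_period)

# Fifty years of doubling every two years is 2**25-fold growth.
growth = transistors_after(50) / transistors_after(0)
print(f"{growth:,.0f}x")  # 33,554,432x
```

Even if the doubling period in practice drifted between 18 months and 3 years, the shape of the curve is the same: growth compounds, which is what drove digital circuits below the cost of the electrical and electronic parts they replaced.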

A “digital equivalent” of course is not really equivalent, because it consists of a computer.  Each computer, no matter how small or large, includes a processor, memory, and ways of moving data in and out.  All of the activity in a processor happens as a result of executing a program – a series of instructions that are stored in the memory.  And programs are software.
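The stored-program idea – a processor executing a series of instructions held in memory – can be sketched as a toy fetch-execute loop.  The three-instruction machine below is entirely invented for illustration; real processors have far richer instruction sets:

```python
# A toy stored-program computer: memory holds both program and data;
# the processor repeatedly fetches the next instruction and executes it.
# The instruction set here is an illustrative invention.
def run(program, data):
    acc = 0          # a single accumulator register
    pc = 0           # program counter: which instruction is next
    while pc < len(program):
        op, arg = program[pc]      # fetch
        if op == "LOAD":           # copy a value from data memory
            acc = data[arg]
        elif op == "ADD":          # add a value from data memory
            acc += data[arg]
        elif op == "STORE":        # write the accumulator back
            data[arg] = acc
        pc += 1                    # move to the next instruction
    return data

memory = [3, 4, 0]
run([("LOAD", 0), ("ADD", 1), ("STORE", 2)], memory)
print(memory[2])  # 7
```

Everything a computer does – no matter how sophisticated – reduces to loops like this one, executed billions of times per second.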

Managing the activities of a computer requires – a computer.  The operating system of a computer is the set of programs that are concerned with managing resources and activities inside the computer.  This is not trivial, because programs are constructed of very simple instructions, and there are a lot of resources and lots of activities inside each computer.  For example, what happens when data is moved in or out of the computer?  Where does it get stored?  How does it get checked and how does it get moved to a more permanent location, such as a disk?  These are all activities an operating system is concerned with.
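As an illustration of that sequence – collect arriving data, check it, move it to a permanent location – here is a minimal sketch.  The checksum scheme, function name, and file path are assumptions made for the example, not a description of any particular operating system:

```python
import zlib

# An illustrative fragment of what system software does when data
# arrives: buffer it in memory, check its integrity, then move it to
# permanent storage.  The CRC-32 check and file path are assumptions.
def receive_and_store(chunks, expected_crc, path="incoming.dat"):
    buffer = b"".join(chunks)                 # 1. collect arriving data in memory
    if zlib.crc32(buffer) != expected_crc:    # 2. check it for corruption
        raise IOError("checksum mismatch - data corrupted in transit")
    with open(path, "wb") as f:               # 3. move it to permanent storage
        f.write(buffer)
    return len(buffer)
```

A real operating system performs variations of these three steps for every network packet, keystroke, and disk block – which is one reason operating systems are among the largest software artifacts ever built.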

Keeping track of stored data usually is done by a file system, which is another part of most operating systems.  Turning power on and off for parts of the system that are not used all of the time is another function of system software on, for example, a mobile phone.  This extends the battery life.

Furthermore, thousands of conditions can occur while the computer is operating, such as errors in moving data or interruptions due to user interaction (like typing on a keyboard or touching a screen icon).  Each condition has to be dealt with in a way that won’t stop the computer.
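That principle can be sketched as a simple event loop in which each condition is handled without halting the machine.  The event names and handlers below are invented for illustration:

```python
# A sketch of condition handling: each event (keystroke, data error,
# screen touch) is processed inside the loop, so one bad event cannot
# stop the whole machine.  Event names are invented for illustration.
def event_loop(events):
    log = []
    for event in events:
        try:
            if event == "keypress":
                log.append("handled keypress")
            elif event == "io_error":
                raise IOError("device read failed")
            else:
                log.append(f"ignored {event}")
        except IOError as err:
            # Record the failure and keep running - never crash the loop.
            log.append(f"recovered from {err}")
    return log

print(event_loop(["keypress", "io_error", "touch"]))
```

The essential property is in the `except` branch: an error is noted and the loop continues, which is what lets a phone keep responding to touches even while a network transfer is failing.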

As computers have become widely used, specialized programs have come to be part of the standard repertoire.  Programs dealing with databases (such as a customer list with all of their purchases), audio and video data (such as YouTube videos and podcasts), and photos (such as your smartphone pictures) have become standard requirements for computers that we use in business and at home.

Communications systems – including the Internet – have incorporated computers to manage delivery of data globally; and services such as Google have developed enormous searchable indexes of everything on the Internet (including things like videos and books).  The hardware of each of these, while massive and widespread, is dwarfed by the effort put into creating software that keeps them running and delivering the latest services.

Competition between the latest start-ups today is mostly in the domain of software.  Delivering new services in the Internet age requires deep understanding of software and how to leverage what was developed by others last week to make something new this week.

Software and the tools for developing it are the context in which the best and brightest of the current generation are expressing their creativity and becoming part of the global economy.  You can expect more software from more software designers to result in a lot of unexpected new products and services.

Software development – not by PERT alone

I have great respect for software developers.  Because software is abstract, invisible and runs at extreme speeds, the people who are good at building it have to possess a particular talent at visualization and a willingness to use complex tools.

When software developers become project managers (PMs), they tend to rely on software tools to monitor, control and report on projects, just as non-technical PMs do.  The problems technologists have in management stem from inexperience in people interaction – handling conflict, fostering collaboration, and the plain old ability to listen well.  If you’re a technologist in management, you can find more ideas on what to do about this in my book Get Out of the Way.

For the rest of PMs, there are lots of good tools, such as PERT and Gantt charts, but simply having good tools will not make your project succeed.  Software development projects frequently fail to produce results that the customer or end-user wants.  Why?

Here are three factors that contribute to the unruliness of software development projects:

  • Estimating the effort and time required to complete a task is difficult.  Even when reasonable-looking requirements and specifications of a software package are provided, understanding the difficulty of development may require architecting multiple layers and investigating interactions with a complex environment.  Since requirements are generally high-level items, and design has to be done at multiple levels, it is difficult to break down the work into “pebble-sized” tasks and then to keep to a schedule with those tasks.
  • Designing an algorithm often takes experimentation.  Engineering a software system requires trying out some things to see if they work, or testing multiple possible ways to implement something to find one with reasonable performance, for example.  This aspect of software engineering is so prevalent that Fred Brooks in The Mythical Man-Month advised us to “plan to throw one away.”  He meant that at the completion of a complex software implementation (such as an operating system), the designers have learned so much that it is often best to start over and re-implement everything.
  • Assuring that a software implementation functions properly under all conditions may take as long as the design phase.  In fact, you may never be able to prove proper functioning, because testing all combinations of conditions is impossible.  At best, using test-automation tools and good intuition about where to look for errors, a software team can reduce the number of bugs at the time of a software release, but almost never to zero.
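The point about test automation in the last bullet can be made concrete with a minimal automated test.  The function under test is invented for illustration – and note that even these assertions only sample the input space, which is exactly why testing can reduce bugs but never prove their absence:

```python
# A minimal illustration of automated testing: a handful of assertions
# that run on every build.  The function under test is invented.  These
# tests sample typical, out-of-range, and boundary inputs - but they
# cannot cover all combinations of conditions.
def clamp(value, low, high):
    """Restrict value to the range [low, high]."""
    return max(low, min(value, high))

def test_clamp():
    assert clamp(5, 0, 10) == 5     # typical case
    assert clamp(-3, 0, 10) == 0    # below the range
    assert clamp(99, 0, 10) == 10   # above the range
    assert clamp(0, 0, 10) == 0     # boundary value

test_clamp()
print("all tests passed")
```

Good intuition about where to look for errors shows up in the choice of cases: the boundary and out-of-range inputs are where bugs hide, not the typical case.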

Scheduling a software project is made more difficult by the fact that additional tasks are always discovered during implementation.  This is so prevalent that I learned long ago always to ask “What remains to be done?” in addition to “What have you completed?”  You can count on the list of tasks to be done growing during the project.

One of the best countermeasures to all of these problems is to use Agile development methods.  Using iterative development with regular demonstrations of working software having incrementally greater functionality will help reduce uncertainty and increase the ability of a development team to adapt to a changing world.  It also shortens the time between the initial charter of the project and the point where the customer says, “but that’s not what I wanted.”

Even Agile will not save all projects.  To learn more about why not, have a look at these slides, “Why Agile Won’t Fix All Your Problems.”

And good luck.  The world needs software, so we all have to keep on trying to deliver it the best we can.

What’s wrong with complexity?

We tend to design things that are complex, and that can be our undoing.


Technologists love intricate mechanisms.  That’s why many of us, as kids, took things apart, and some of us even put them back together again.

In my training as an engineer, I enjoyed learning how mechanical, electrical and chemical things worked.  And the more elaborate the mechanisms, the greater the challenge – and the satisfaction of understanding them.

We tend also to design things that are complex, particularly if we’re in software design, because software is layered into abstractions almost without limit.  Database systems linked via networks to computational engines and on to user-interaction devices are full of opportunities to exercise our power of design in the face of complex interactions.

Yet complexity can also be our undoing.  Consider this from Andrew Zolli’s article about the crash of Air France Flight 447:

It was complexity, as much as any factor, which doomed Flight 447. Prior to the crash, the plane had flown through a series of storms, causing a buildup of ice that disabled several of its airspeed sensors — a moderate, but not catastrophic failure. As a safety precaution, the autopilot automatically disengaged, returning control to the human pilots, while flashing them a cryptic “invalid data” alert that revealed little about the underlying problem.

Confronting this ambiguity, the pilots appear to have reverted to rote training procedures that likely made the situation worse: they banked into a climb designed to avoid further danger, which also slowed the plane’s airspeed and sent it into a stall.

Confusingly, at the height of the danger, a blaring alarm in the cockpit indicating the stall went silent — suggesting exactly the opposite of what was actually happening. The plane’s cockpit voice recorder captured the pilots’ last, bewildered exchange:

     (Pilot 1) Damn it, we’re going to crash… This can’t be happening!

     (Pilot 2) But what’s happening?

Less than two seconds later, they were dead.  …

We rightfully add safety systems to things like planes and oil rigs, and hedge the bets of major banks, in an effort to encourage them to run safely yet ever-more efficiently. Each of these safety features, however, also increases the complexity of the whole. Add enough of them, and soon these otherwise beneficial features become potential sources of risk themselves, as the number of possible interactions — both anticipated and unanticipated — between various components becomes incomprehensibly large.

[Want to Build Resilience? Kill the Complexity, by Andrew Zolli, 9/26/2012]

This is certainly a cautionary tale about messages that don’t convey important meaning.  But it’s also a warning about interactions that were designed but couldn’t be tested or evaluated in all their combinations.  That’s what complexity leads to.

Disasters like Flight 447 nearly always involve a complex system interacting with a human.  Remember the key lessons of the Space Shuttle Columbia disaster: NASA’s safety analyses were not being followed up because of a dual-agenda management system.  The bottom line was that managers relied on the fact that debris damage to the heat-shield tiles had never yet caused a catastrophe.

When you’re responsible for a project that is complex, you need to address that complexity in two ways.

First, you need to be sure that the people doing the analytical and design work know what the possible failure mechanisms are, how to compensate for them without adding a lot more complexity, and have scheduled adequate tests to validate the robustness of the design.

Second – and this is the more difficult – you have to be sure that the people implementing the project and the people managing the project (including yourself) are not harboring private agendas that may undermine the effectiveness of the analysis and design and testing.  Adding ship-date pressure on a team, for example, can cause them to short-change the test plan and declare a product ready to ship when it still has serious faults.

The second area is where your experience with people doing projects will help you most.  Listening a lot to project team members and following up on hints of conflict over goals or processes will help you stay current on the health of your project.

Finally, you can become an advocate for simplicity.  When faced with a choice in a project between a more complex solution and a simpler solution, go for the simpler one.  Often this will allow you to discover sooner whether or not the solution is adequate.

Some projects, of course, become excessively complex no matter what you do.  This may be a time when the most responsible thing you can do is recommend that the project be cancelled.  Better to have no product than one that kills.

Why is it so hard to get good software?

Once we get over our wonder at the broad capabilities of software running on modern computers and devices, we begin to ask why so much of the software we use is of questionable quality.  Between vulnerabilities to malware and constant updates to correct problems, it seems that software is never stable and reliable.  Why?

Software is abstract, invisible and runs at very high speed.  This combination of features makes developing software the domain of a special kind of person who can deal with the abstractions and with the incredibly fine detail of software creation.  Managing a group of such people also requires a special kind of talent, because there are tradeoffs to be made between adding new features and stabilizing the functions that are already built in.

Usually, the pressures of commercial software development lead software marketers to place much more emphasis on new features than on stability, because features are what differentiate one software product from another.  However, the long-term stability of a product also contributes a lot to the product’s reputation.

Software testing is the usual way to verify proper functioning before shipment.  But since software development routinely takes longer than anticipated, it is the testing that gets short shrift in many cases.  The result is premature delivery of software that is not yet ready for commercial use.

A 2002 study commissioned by the National Institute of Standards and Technology found software bugs cost the U.S. economy about $59.5 billion annually. The same study found that more than a third of that cost — about $22.2 billion — could be eliminated by improving testing.     http://on.msnbc.com/Kae58w

In addition, the operating context of software is constantly changing.  As operating system upgrades are released, the software that works under that operating system often must be adapted to the upgrades.  This means that a lot of the work of keeping a software package current goes to merely maintain the capabilities that were already built.

Users continue to ask for new features.  And marketers want the software to be useful in more contexts – such as making an app useful on an Android phone in addition to an iPhone.  These demands guarantee that a software package will always have an endless backlog of potential changes in the “to-do” list.

As more contexts are supported and more features are added, the software inevitably becomes more complex.  And complexity multiplies the difficulty of testing, and also makes each change to the software riskier.  The more interactions that are possible with each part of the code, the more possibilities there are for mistakes.
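The growth described here can be quantified: with n interacting components, the number of possible pairwise interactions is n(n−1)/2, and full combinations of independent on/off features grow as 2^n.  The component counts below are illustrative:

```python
from math import comb

# Why complexity multiplies testing difficulty: pairwise interactions
# between n components grow quadratically, and combinations of n
# independent on/off features grow exponentially.  The component
# counts chosen here are illustrative.
for n in (10, 50, 100):
    print(f"{n} components: {comb(n, 2)} pairwise interactions, "
          f"2**{n} = {2**n} feature combinations")
```

At 100 components there are 4,950 pairs to consider, and the number of full feature combinations (2^100) already exceeds what any test suite could ever enumerate – which is why each added feature makes every subsequent change riskier.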

What can I do to help make software better?

Purchasers of software don’t have very high expectations, because the track record of the software industry does not set a very high bar.  If users demanded better software and rejected poor software, software vendors would provide better packages – or go out of business.  Why don’t we demand better software?

One reason is that it is hard to switch.  Once we have adopted a piece of software for some function, we tend to stay with it.  First of all, it’s what we are familiar with.  This “makes us rather bear those ills we have than fly to others that we know not of.” [Shakespeare – Hamlet]  The cost of learning a new system is high, and we tend to stay with what is familiar, even if it is painful to use.

Second, the vendors of software don’t often make it easy for us to switch to another vendor.  Try taking an Access database and “porting” it to FileMaker Pro.  The conversion process is a barrier that most of us are unwilling to undertake.  In addition, there may be many people who need to be trained in the new system if we switch.  That adds to the cost.

In the future, we should look at the cost of switching before committing to a software vendor or cloud system.  The more “open” the system, the better for us in the long term.

And we should always insist on demonstrably high quality in software.  Keep this in mind the next time you’re making a purchase decision.


Why don’t IT people understand our business?

Executives seem to agree that IT people – technicians and their leaders – do not understand the business very well.  This causes all sorts of trouble when making financial decisions on major IT projects.  Why don’t IT people “get” the business?

1.    They’re too busy studying technology.

We all know that information technology is complex.  It’s not surprising to learn that IT people have to put in a lot of time just keeping up with the changing technologies.

But CFOs and Controllers also have to spend a lot of time keeping up with regulatory and financial standards.  That doesn’t excuse them from acquiring a good working knowledge of the industry and the specifics of the enterprise’s products and markets.  So we shouldn’t let the IT folks off the hook just because they’re “too busy.”

2.    They’re not trained in business

IT people typically come from engineering and technology training backgrounds.  These give them good grounding in quantitative methods, but don’t give them a feel for business tradeoffs.  Case studies in business are not part of a technologist’s training.  And those who have ventured into business for themselves usually have to hire someone else to manage the business aspects of their enterprise.

Maybe there’s something you can do about this.

3.    No one on the business side has invited IT to learn about the business

OK, so the IT people aren’t business-savvy when they come to work here.  Why don’t we invite them to learn about the business?  After all, we expect HR and other departments to have a basic grasp of what we do and for whom.  Why not IT?

Do you have a short self-study course on the nature of the enterprise’s business?  Or at least a summary from the 10K that is provided to every new employee?  This would be a start.  Even better would be a concerted effort to explain not only the basics of the business to IT people, but to outline the key performance indicators and other metrics that drive the business.

4.    No one rewards IT people for being business-savvy

Reward systems in IT typically are based on operational metrics rather than business-specific measures.  If you reward IT people only for achieving 99.9% uptime, then you should not expect them to focus on anything else.

Everyone in the enterprise needs to have a basic grasp of why we’re in business and what we provide, and to whom.  But IT people implement many of the systems that make business processes run, so they should have in-depth understanding of what’s important in the business and the meaning of the executives’ measures.

Bringing IT people out from behind the wall of technology and exposing them to business concepts and measures can only benefit everyone in the company.  And it will make your future conversations with IT a lot easier.

How well do IT people understand business in your enterprise?  Add your comments below.

Software, software everywhere

Software is different from other technical stuff.  It’s abstract, invisible, and runs at extremely high speed.  So the people who are good at working with software tend to be different from “ordinary” engineers.  They have to be good at visualizing the abstract processes and the mathematical algorithms that make up the procedures implemented in software.

Software people are different, so their managers need to be able to deal with the difference.  Effective software managers know what’s critical to a well-functioning software team and those managers get good at providing it, even in the face of obstacles.

Obstacles come from upper managers who don’t understand how software, and software people, are unique.  As a result, they assume that a manager with skills in operations can manage software just as well.

I’ve seen IT shops where the best software people left the company quickly after being treated as if they were call-center operators.  For example, management assumed that the software people could be located anywhere in the building, and that they didn’t need whiteboards or other shared spaces to keep track of their project information.

Why should you care? After all, can’t you just hire the brains you need for software?  Well, not so fast.  You’re competing with every company in the world for the same kind of brains. Unless you’re in an entrepreneurial, fast growing, innovative company, software people will not prefer working for you over going to work in a more exciting environment.

IT is undergoing rapid change, primarily driven by the availability of cloud services.  But the cloud just moves the data centers to somewhere else. If you look closely at internal IT activities, you will realize that IT is itself a software-intensive activity.

This sounds self-evident, but it’s not a joke.  It’s a reality that many financial and operations executives fail to understand.  Everyone, from the business analysts to the website deployment people, is more than just a software user – they all have to understand software principles to do their work.

Business competition will come from new players, and from old players who master software tools and the business possibilities opened up by software.

As software becomes an integral part of business, there is a subtle shift in what management has to do and to know.  You now need staff – or consultants – who are knowledgeable about software and its workings.  And from them you need to learn what software means for the future of your business.

Is there something you’ve learned recently about software?  I welcome your comments.