Aerospace and software engineering have many tales of huge projects that ate incredible amounts of resources and then failed, sometimes completely, sometimes producing something with a fraction of the capability originally desired. There are plenty of case studies on them, sometimes whole books of them. These get blamed on forcing the technology beyond achievable limits, requirements creep, poor management, lack of resources, and so on. I think all of those come down to one problem: too many requirements.
The requirements I'm talking about here are:
1. Top level. They describe what the system must do, not just a part of it. A complicated vehicle can have many, many derived requirements allocating the top level ones to each subsystem, but the derived ones aren't part of this discussion.
2. Top priority. Any requirement that can be sacrificed to meet another one isn't really a requirement, it's a goal, something optional. Of course, how the requirement is designated officially may not match that definition. If a "goal" has to be met or the project loses funding, then it's really a requirement and equal to all the other "top" priority ones.
3. Not just technical. Any constraints on the schedule or budget for a project count as requirements. If the project has to stay under $1.5M/yr, deliver the production version for $3000 each, and be shipping by 6/1/2009, that's three more requirements. Other constraints count too—if a specific development methodology or tool set must be used, that's a requirement. So is a security regime imposed on the project staff (classified, ITAR, proprietary—they all complicate things).
4. Not just formal. The informal requirements on a project can be the most difficult ones. US government projects are infamous for these . . . despite thousands of pages of requirement studies, the design can be driven by an "understanding" that work will be done in a particular Congressional district, or the employment level at a NASA facility will not be cut.
The more requirements there are on a system the more likely it is that two or more of them will directly conflict. Ideally such conflicts will be recognized before the project gets go-ahead, but there are projects that never resolve those conflicts and instead will thrash about trying to square the circle without realizing (or admitting) what the problem is. But fixing them isn't enough to assure success.
Even if all the conflicts are carefully ironed out of the top-level requirements the project's chance of success will depend on the number of requirements:
Every requirement added reduces the chance of success.
Everyone accepts that you can't get everything you want. Lots of engineers have a sign saying "Cheap, Fast, Good: Pick Two" in their cubicles. But even if you "pick two" you haven't capped the number of requirements for your system.
Technical performance—"good"—can go in many directions. The more different tasks the system has to do the less the designers can focus on any one of them. In practice each task gets assigned to a different design team, with the project management and/or systems engineers trying to make sure they don't interfere with each other's performance.
Let's look at an example: the often longed-for flying car. Even if you don't set any limits on the cost or schedule, there's going to be a lot of requirements describing what you want it to do: Speed X in the air. Speed Y on the ground. Road handling characteristics. What kind of fuel it takes. Cargo capacity. How many passengers and how much room they get. Essentially you'll be adding the requirements for a Cessna 150 to those of a Mazda Miata, with only a small percentage being eliminated in the overlap.
Doubling the number of requirements doesn't just double the complexity of your design. When you design a part to meet a particular requirement, you have to look at its effect on every other requirement on the system. The number of interactions increases roughly with the square of the total number of requirements [I = (x^2 - x)/2].
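A quick sketch makes the growth concrete. This is just the pairwise-combination formula from above evaluated at a few illustrative requirement counts (the numbers are arbitrary, not from any real project):

```python
def interactions(x: int) -> int:
    """Pairwise interactions among x requirements: (x^2 - x) / 2."""
    return x * (x - 1) // 2

# Doubling the requirement count roughly quadruples the interactions to check:
for x in (5, 10, 20, 40):
    print(x, interactions(x))
# 5 -> 10, 10 -> 45, 20 -> 190, 40 -> 780
```

Going from 10 requirements to 40 is a 4x increase in requirements but a 17x increase in interactions that somebody has to analyze.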
In the flying car example, look at what the air bag designer has to deal with. For a car it has to meet NHTSA safety requirements, interface with the car's electrical system, and not interfere with the steering wheel's function (various other requirements such as acceleration and range aren't affected but still have to be checked to make sure). Adding the requirements for a flying car forces changes to the design: it must comply with FAA regulations, the airbag must not go off during a rough landing, and its weight must come down to support flying performance.
Some requirements don't directly conflict but push the design in different directions. Take the flying car's wheels. Improving the traction and responsiveness on roads requires wider tires and heavier brakes and suspension. For better performance in the air the tires need to be narrow (to reduce drag) and all components must be low weight. In situations like this the performance allocations to the subsystems must be made carefully to ensure both requirements can be met.
Since engineers tend to advocate for their favorite requirements, allocations and flow-down to lower-level requirements can be contentious. The more top level requirements pull in different directions, the harder it is to find an acceptable compromise. The compromise may be vague wording that postpones the conflict to the next level of detail.
With conflicting agendas among the different development teams, each protecting their pet requirements, engineers have to take time away from their own tasks to check up on other teams. It's easy for someone to make a change to his design that impacts another team's performance. Avoiding that requires studying the whole project's requirements, which annoys supervisors who want their tasks done on time. Frequently Team B won't realize they've been affected by Team A's change until after it's been approved and incorporated into the baseline. This can produce thrashing: A's change forces B to redesign their subsystem, causing a problem for C. If C's proposed fix affects A's subsystem the cycle will repeat until the system-level management recognizes the problem and imposes a common solution (or the system is delivered with flaws intact). This can be averted by each team reviewing proposed changes before they're approved (as required by every project's CM process), but analyzing piles of change requests is always a low-priority task, discarded when the team is behind schedule.
This illustrates what the most precious resource of a project is:
The attention of the workers
If the engineers try to pay attention to too many requirements, they won't be able to focus on creating their designs. So they pick the ones that matter the most to them and ignore the rest. This is where process and security requirements do their damage. Time spent on learning new change forms, adding required markings to drawings, or evaluating data to see if it's classified or ITAR-restricted takes away from designing. Hiring more people can help with the technical workload but adding them for process/security enforcement just adds interruptions without reducing anyone's workload.
Concentrating attention is how the Apollo Program succeeded at its incredibly difficult task. "Man, Moon, decade" was a requirements set that everyone could grasp and apply to the problem in front of them. With a common goal and priorities engineers could trust each other and focus on their own jobs. Apollo proves even a very large project can be at the top of the success curve.
Projects at the bottom of the curve fail in multiple directions. Time spent arguing priorities or struggling with new processes causes schedule delays. The salary hours consumed add to the cost overruns. Thrashing from repeated changes drives up both. Technical performance suffers because no part of the design gets the attention needed to optimize total system performance. Worse, a conflict between requirements or subsystem designs can be missed entirely, producing a product that doesn't meet the original need. Actual flying cars have been built, but they suffer from these requirement conflicts: their performance is inferior to both cars and light planes while they cost more than the two combined. Flying car makers have repeatedly gone bankrupt, while nearly all Cessna owners also own a car.
When managers realize their projects are in trouble they start climbing the success curve. Cost and schedule requirements fall off the list once they've been passed, but the design choices that gave up performance for low cost or quick manufacturing remain part of the design. Process requirements are frequently removed informally, by paying lip service while bypassing the process steps. Deleting a performance requirement can be essential for getting back on track but can threaten the viability of the project: removing a feature drives away the customers who wanted that capability. Government projects have their funding cut as stakeholders lose interest. If that doesn't leave enough money to meet the remaining requirements, the project can go into a "death spiral".
A system can be built with a huge number of requirements if you go about it the right way—get a basic version working and only then add the advanced features. The C-130 Hercules cargo plane has been expanded to land on skis, fly into hurricanes, refuel other planes, infiltrate hostile territory, and be a platform for radio stations and artillery pieces.
Developing all of those capabilities at once would have pulled the engineers in so many different directions they may not have delivered a working system at all.
In the software world this is called Spiral Development. The define-requirements/design/build/test cycle is repeated several times, adding more requirements on each loop. The cycles continue until all requirements are implemented or the project runs out of time.
(This works in other industries; the author used it on an aerospace project.) Introducing a new development process should be done as a "zero" iteration, practicing the new methods on a "build one to throw away".
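The scheduling idea behind spiral development can be sketched in a few lines. This is a minimal illustration, not a real planning tool; the names (`Requirement`, `plan_spirals`) and the example requirements are invented for the sketch, and priority here just means "which loop gets it first":

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    name: str
    priority: int  # 1 = needed for the basic working version

def plan_spirals(reqs, batch_size):
    """Group requirements into spiral iterations, highest priority first."""
    ordered = sorted(reqs, key=lambda r: r.priority)
    return [ordered[i:i + batch_size] for i in range(0, len(ordered), batch_size)]

reqs = [
    Requirement("carry cargo", 1),
    Requirement("land on skis", 3),
    Requirement("refuel other planes", 2),
    Requirement("fly at speed X", 1),
]
for n, batch in enumerate(plan_spirals(reqs, batch_size=2)):
    print(f"Spiral {n}: {[r.name for r in batch]}")
```

The point of the structure is that spiral 0 delivers a basic working system, and every later loop adds features to something that already works, rather than pulling the team in all directions at once.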
Not every project can be a spiral development. Some requirement sets can't be split into useful packages. More resistance comes from corporate (and government) cultures which believe a "Great Leap Forward" is faster and cheaper. They can still succeed if they don't let their requirements list get too long. Deciding on spiral vs. waterfall development is the job of project managers and chief engineers, but all developers can push back on a new requirement. The success curve can be a useful tool for making it clear that someone's "tiny little new feature" can have an out-of-proportion effect on the whole project.
Too many requirements can kill a project.
Kill them first.