Testing by itself does not improve software quality. Test results are an indicator of quality, but in and of themselves, they don't improve it. Trying to improve software quality by increasing the amount of testing is like trying to lose weight by weighing yourself more often. What you eat before you step onto the scale determines how much you will weigh, and the software development techniques you use determine how many errors testing will find. If you want to lose weight, don't buy a new scale; change your diet. If you want to improve your software, don't test more; develop better.
Large projects call for organizational practices that formalize and streamline communication. ...All the ways to streamline communication rely on creating some kind of hierarchy, that is, creating small groups, which function as teams, and then appointing representatives from those groups to interact with each other and with management.
Feature teams have the advantages of empowerment, accountability, and balance. The team can sensibly be empowered because it contains representatives ...from each of the concerned parties. The team will consider all necessary viewpoints in its decisions, and thus there will hardly ever be a basis for overriding its decisions. For the same reason, the team becomes accountable. They have access to all the people they need to make good decisions. If they don’t make good decisions, they have no one to blame but themselves. The team is balanced. You wouldn’t want development, marketing, or quality assurance alone to have ultimate say over a product’s specification, but you can get balanced decisions from a group that includes representatives from each of those categories.
Tools are a very important element of defining a path of least resistance. If I can set up a tool so that it’s easier for a developer to do something the way that I want the developer to do it, and harder for the developer to do it some other way, then I think it’s very likely the developer is going to do it the way I want them to, because it’s easier. It’s the path of least resistance.
Seymour Cray, the designer of the Cray supercomputers, says that he does not attempt to exceed engineering limits in more than two areas at a time because the risk of failure is too high. Many software projects could learn a lesson from Cray. If your project strains the limits of computer science by requiring the creation of new algorithms or new computing practices, you're not doing software development, you're doing software research.
One of the most effective guidelines is not to get stuck on a single approach. If writing the program in PDL isn't working, make a picture. Write it in English. Write a short test program. Try a completely different approach. Think of a brute-force solution. Keep outlining and sketching with your pencil, and your brain will follow. If all else fails, walk away from the problem. Literally go for a walk, or think about something else before returning to the problem. If you've given it your best and are getting nowhere, putting it out of your mind for a time often produces results more quickly than sheer persistence can.
Horst Rittel and Melvin Webber defined a 'wicked' problem as one that could be clearly defined only by solving it, or by solving part of it. This paradox implies, essentially, that you have to 'solve' the problem once in order to clearly define it and then solve it again to create a solution that works. This process is almost motherhood and apple pie in software development.
The source code is often the only accurate description of the software. On many projects, the only documentation available to programmers is the code itself. Requirements specifications and design documents can go out of date, but the source code is always up to date. Consequently, it's imperative that the code be of the highest possible quality.
Computer programs are complex by nature. Even if you could invent a programming language that operated exactly at the level of the problem domain, programming would be complicated because you would still need to precisely define relationships between real-world entities, identify exception cases, anticipate all possible state transitions, and so on. Strip away the accidental work involved in representing these factors in a specific programming language and in a specific computing environment, and you still have the essential difficulty of defining the underlying real-world concepts and debugging your understanding of them.
"If it ain't broke, don't fix it," the saying goes. Common software development practices are seriously broken, and the cost of not fixing them has become extreme. Traditional thinking would have it that change represents the greatest risk. In software's case, the greatest risk lies with not changing: staying mired in unhealthy, profligate development practices instead of switching to practices that were proven more effective many years ago.
A final essential difficulty arises from software's inherent invisibility. Software can't be visualized with 2-D or 3-D geometric models. Attempts to visually represent even simple logic quickly become absurdly complicated, as anyone who has ever tried to draw a flow chart for even a simple program will attest.
Managers of programming projects aren’t always aware that certain programming issues are matters of religion. If you’re a manager and you try to require compliance with certain programming practices, you’re inviting your programmers’ ire. Here’s a list of religious issues:
■ Programming language
■ Indentation style
■ Placing of braces
■ Choice of IDE
■ Commenting style
■ Efficiency vs. readability tradeoffs
■ Choice of methodology—for example, Scrum vs. Extreme Programming vs. evolutionary delivery
■ Programming utilities
■ Naming conventions
■ Use of gotos
■ Use of global variables
■ Measurements, especially productivity measures such as lines of code per day