Professors typically spend their time in meetings about planning, policy, proposals, fund-raising, consulting, interviewing, traveling, and so forth, but spend relatively little time at their drawing boards. As a result, they lose touch with the substance of their rapidly developing subject. They lose the ability to design; they lose sight of what is essential; and they resign themselves to teaching academically challenging puzzles.
The woes of software engineering are not due to lack of tools or proper management, but largely due to lack of sufficient technical competence. A good designer must rely on experience; on precise, logical thinking; and on pedantic exactness. No magic will do. In light of all this, it is particularly sad that, in many informatics curricula, programming in the large is badly neglected. Design has become a non-topic.
We must be careful with terms like readable, user-friendly, and so forth. They are vague at best, and often refer to taste and established habits. But what is conventional need not also be convenient. In the context of programming languages, perhaps "readable" should be replaced by "amenable to formal reasoning." For example, mathematical formulas are hardly what we might praise as easily readable, but they allow the formal derivation of properties that could not be obtained from a vague, fuzzy, informal, user-friendly circumscription.
We know of much better ways to design software than is common practice, but they are rarely followed. I know of a particular, very large software producer that explicitly assumes that design takes 20% of developers' time, and debugging takes 80%. However, advocates of the reverse ratio, 80% design and 20% debugging, would have to prove not only that their ratio is realistic, but also that it would improve the company's tarnished image.
The wealth of features of many languages is indeed a problem rather than a solution. A multitude of features is another consequence of the programmers' belief that the value of a language is proportional to the quantity of its features and facilities, its bells and whistles. However, we know that it is better if each basic concept is represented by a single, designated language construct. Not only does this reduce the effort of learning, but it reduces the volume of the language's description, and thereby the possibilities of inconsistency and of misunderstanding. Keeping a language as simple and as regular as possible has always been a guideline in my work; the description of Pascal took some 50 pages, Modula took 40, and Oberon took a mere 16. This I still consider to have been genuine progress.
I remember a long discussion in an academic seminar in the mid-1970s, when talk of a "software crisis" was in full swing and the notion of correctness proofs of programs was put forward as a possible remedy. Professor C.A.R. [Tony] Hoare, the speaker at the seminar, had eloquently presented the principles and the advantages of correctness proofs replacing testing. After a long discussion about the pros and cons, Jim Morris got up and disarmingly asked: "But Tony, what is your answer if we frankly confess that we dearly love debugging? Do you want us to abandon our most cherished enjoyment?"