Sure, I used exaggeration to make my point more obvious. But there is more: have you looked at the actual developments in recent programming language design? Some well-known Java people propose adding closures. C# gets closures too, and there is this LINQ thing, which is also totally un-OOP. And C++? Look at the recent trends in template meta-programming (Boost), which are heading directly away from OOP as well. And we have those 'new' languages, like Ruby and Python. Sure, they are OOP to a large degree, but they also incorporate lots of functional aspects into their design. The same goes for really new languages like Scala, which is only OOP to a certain degree anymore.
Another trend is meta-programming. While I think that meta-programming is a bad idea, its popularity shows that there are limitations in a language which one can only overcome with meta-programming. In Java, for example, we have annotations and reflection, which allow us to change the semantics of the language. Other languages have other means to accomplish the same. But while meta-programming is quite natural in a language like Lisp, it's hard and cumbersome in a language like Java. So if people really go down that road, there must be some real problems which aren't solvable within the language itself.
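To make the Java case concrete, here is a minimal sketch of how annotations plus reflection let a framework treat methods differently at runtime. The annotation `Audited` and the class names are my own invention, not from any real framework:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

public class MetaDemo {
    // Hypothetical marker annotation, kept at runtime so reflection can see it.
    @Retention(RetentionPolicy.RUNTIME)
    @interface Audited {}

    static class Service {
        @Audited
        public int compute() { return 42; }

        public int helper() { return 7; }
    }

    // A 'framework' discovering annotated methods via reflection --
    // behavior the base language itself cannot express directly.
    public static int countAudited(Class<?> cls) {
        int n = 0;
        for (Method m : cls.getDeclaredMethods()) {
            if (m.isAnnotationPresent(Audited.class)) n++;
        }
        return n;
    }

    public static void main(String[] args) {
        System.out.println(countAudited(Service.class)); // prints 1
    }
}
```

Note how indirect this is compared to a Lisp macro: the meta-level lives entirely outside the normal type-checked flow of the program.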
So why would people make those decisions if OOP were really alive and well? It's not that people abandon languages like C++ or Java, but they start to use them in increasingly non-OO ways. And this means that OOP is starting to lose importance.
And on the other hand we have the ever increasing number of frameworks for Java which try to solve problems that only exist because of the limitations of OOP. Frameworks of sometimes absurd size and complexity, considering the problems they try to solve. OK, maybe that's not a problem with OOP but only with Java. But I really doubt it. I've designed several OOP languages over the last 20 years, and each and every time I wasn't able to solve certain severe problems. Sure, you can make things easier than in Java, but it's always only a gradual improvement, not a dramatic one.
What's the most important point in programming? Creating compact code? Creating easily maintainable programs? Having a very expressive language which allows elegant solutions to common problems? Sure, all those are nice to have, but the paramount point is simply one thing: code reuse. If you can reuse existing code, you don't have to write it, debug it, or maintain it. It's the optimal productivity boost, and always faster than re-implementing something, even in the most elegant, compact and easy way.
So what's the problem with code reuse? Why is it so difficult? And why especially in OOP? Isn't one of the core ideas of OOP to make code reuse much easier? I thought so myself, but for some time now I've doubted the validity of this claim. It simply hasn't worked. But why?
In OOP you have to create lots of unnecessary dependencies just for performance reasons. Think of the cache example from the last article: adding simple caching created lots of new potential problems and pitfalls. And in fact the cache is totally irrelevant to the function of the program; it's only there to improve performance. But because of the cache, you need additional dependencies to notify the cache of changes in the data structures on which the cached value depends. And each dependency destroys opportunities for code reuse, because to make a reusable piece of code you don't only have to capture the 'code itself' but also all its dependencies. That's the reason most frameworks grow so fast: they simply have to provide all those dependencies.
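A tiny sketch of the kind of cache dependency I mean (the class and field names are invented for illustration): a cached total forces every mutating method to know about the cache and invalidate it.

```java
import java.util.ArrayList;
import java.util.List;

// A cached derived value creates a hidden dependency: every mutator
// must remember to invalidate the cache, or the class silently breaks.
public class CachedAccount {
    private final List<Integer> entries = new ArrayList<>();
    private Integer cachedTotal = null;   // the performance tweak

    public void add(int amount) {
        entries.add(amount);
        cachedTotal = null;   // extra coupling, purely for the cache's sake
    }

    public int total() {
        if (cachedTotal == null) {
            int sum = 0;
            for (int e : entries) sum += e;
            cachedTotal = sum;
        }
        return cachedTotal;
    }
}
```

Anyone who later adds a new mutating method, or a subclass, inherits the obligation to invalidate `cachedTotal`, which is exactly the kind of dependency that makes the class harder to reuse as-is.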
Isn't that something where OOP should really shine? With inheritance, which allows the creation of slightly extended classes from existing ones, OOP should provide lots of extensibility. That's the theory. In practice there are several reasons why it often doesn't work:
- The function-composition-problem I mentioned in part one.
- In OOP the type of an object is determined at creation time. So if you have an extended version of some class, you only get its advantages if you also control the creation of objects. If object creation isn't within your reach, you will only get objects of the 'old, uninteresting' base class instead of your 'shiny new' one with enhanced functionality. To overcome that problem the factory pattern was invented, but it creates additional complexity and is of no use if the object you want to extend isn't created by a factory. There's also the idea of making object types changeable at runtime, but this approach has drawbacks which easily outweigh the advantages.
- The limitations of inheritance. Inheritance is a very simple thing: it can only describe 'is-a' relations. While this is perhaps the most important relation, it's still only one. And if we want multiple 'is-a' relations ('multiple inheritance'), we run into lots of difficult specification problems, which is the reason why some languages simply avoided multiple inheritance or limited it to interface inheritance. Inheritance also imposes a simple hierarchical structure on a program, where you can only add leaves to the tree. And you get lots of additional dependencies.
- The limitations of overriding: overriding only works as well as the design of the class allows. To allow flexibility, a class has to consist of very small methods which call each other in such a way that you can add as much functionality as possible by overriding them. But in practice that's much more difficult than in theory. If you create more small methods, the code gets more complicated, even if the added complexity is perhaps never needed, because nobody will ever extend the class in those ways. And if you are too conservative in splitting the class into small methods, you may block extensibility later and have to refactor the base class. While refactoring is always possible, it has a big disadvantage: it can break contracts and thus existing code. It's also against the idea of code reuse, where you simply use existing code without having to modify it.
- Explicit data paths: this is a topic I will look at in detail in a later article. In short: in OOP you have to specify all data access paths explicitly. If you have some data, you always have to commit to a concrete structure for how that data is organized. If this structure changes, lots of code which accesses the data often has to change too. And a concrete structure which is usable for problem A isn't necessarily usable for problem B. So if you want to reuse the solution for problem A on problem B, that won't work, because the data structures have changed, even if A and B otherwise have much in common.
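The creation-time problem from the second point above can be sketched in a few lines of Java. All the class names here are invented for illustration; the point is that only code written against a factory can ever receive the extended subclass:

```java
public class FactoryDemo {
    interface Widget { String render(); }

    static class PlainWidget implements Widget {
        public String render() { return "plain"; }
    }

    // The 'shiny new' extension of the existing class.
    static class FancyWidget extends PlainWidget {
        @Override public String render() { return "fancy"; }
    }

    interface WidgetFactory { Widget create(); }

    // Client code that goes through the factory can be reused with
    // either class. Had it called 'new PlainWidget()' directly, the
    // extension would be unreachable without editing this method.
    static String build(WidgetFactory f) {
        return f.create().render();
    }

    public static void main(String[] args) {
        System.out.println(build(PlainWidget::new));  // prints plain
        System.out.println(build(FancyWidget::new));  // prints fancy
    }
}
```

The price is visible even in this toy: an extra interface and an extra level of indirection, purely to work around the fact that the object's type is fixed at the `new` expression.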
For all those points solutions exist. But in practice they often simply don't work. It's hard to tell why without looking at concrete examples, but I think the prime reason is over-specification: in OOP you have to write much more explicitly than is really useful, often only for performance reasons. And over-specification leads to additional constraints and complexity which make reuse much more improbable, because to fit into a new area, existing code doesn't only have to match the problem but also all those optimizations and tweaks which are necessary in OOP.
I think one of the reasons for that stems from the origins of OOP: simulation. There, each object is some independent 'living thing' which has to be specified on its own. Because of this, OOP is very local, very 'to-the-metal', and it's hard to look at the overall structure of a program. Thus we have to code lots of things explicitly, which then leads to the problems described above. So instead of OOP we need more abstract ways of programming, because abstraction allows the compiler to do automatically many of the things the programmer now has to do by hand. This frees the code from unnecessary complexity, which would not only make programming easier but would also create more opportunities for code reuse.
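As a rough illustration of what 'more abstract' can mean even inside Java itself (the method names are my own), compare an explicit loop with the same computation written declaratively, where the iteration plumbing is left to the library:

```java
import java.util.List;

public class AbstractionDemo {
    // Explicit, 'to-the-metal' style: every step is spelled out by hand.
    static int sumOfEvenSquaresExplicit(List<Integer> xs) {
        int sum = 0;
        for (int x : xs) {
            if (x % 2 == 0) sum += x * x;
        }
        return sum;
    }

    // More abstract style: we declare the intent (filter, map, sum)
    // and leave the mechanics to the stream library.
    static int sumOfEvenSquares(List<Integer> xs) {
        return xs.stream()
                 .filter(x -> x % 2 == 0)
                 .mapToInt(x -> x * x)
                 .sum();
    }

    public static void main(String[] args) {
        List<Integer> xs = List.of(1, 2, 3, 4);
        System.out.println(sumOfEvenSquaresExplicit(xs)); // prints 20
        System.out.println(sumOfEvenSquares(xs));         // prints 20
    }
}
```

The declarative version carries fewer incidental commitments (no loop variable, no accumulator), which is exactly the kind of over-specification the argument above is about.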