Saturday, August 19, 2006

OOP is dead

Sure, that sounds dramatic, but I really think that OOP has reached its peak and is on the decline. Recently we have seen increasing interest in functional programming languages and in concepts from functional programming (like closures, continuations etc.). Many people want to see those concepts available in well-known OOP languages like Java, C#, C++ etc. And the language designers are reacting and starting to integrate those features.

But will this really ease software development? I doubt it. Feature overkill has never been a solution - in nearly every area of life. Merely adding feature after feature to something doesn't make it better. It's much more important that someone creates a homogeneous, useful and practical combination of features. To make programming as easy as possible, the programming language should guide the programmer in how to tackle a certain problem. If you have two different paradigms available in the language, how can you decide which paradigm to use? And if you have to use 3rd-party code, how probable is it that its authors used the same paradigm you used in your application? Fitting together code written in different paradigms creates a so-called 'impedance mismatch' which often leads to error-prone interface code (think of OR mappers used to fit a declarative relational language like SQL to an OOP language like Java).

But if 'multi-paradigm' languages are bad (because they don't guide programmers and make code reuse more difficult), what's the appeal of those features from the world of functional programming? It's simply that OOP has its limitations, and those limitations are becoming more and more obvious.

I will try to look at some of the reasons why OOP failed to live up to its promises and why I think that OOP will be superseded in the future.

One of these limitations is the special status of the 'this'/'self' parameter. A method call has a specific parameter which is used for method dispatch: determining the method to call at runtime. But what if you need two parameters to determine this method? In OOP you can use the visitor pattern, but it requires lots of boilerplate code. And if you have 3 or more parameters, or even want to dispatch on more complex conditions (like value ranges, structural constraints etc.), it's totally useless.
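To make the boilerplate concrete, here is a minimal Python sketch (class and method names are purely illustrative) of double dispatch via the visitor pattern: every participating class needs an accept method, and every new operation needs one visit_* method per class.

```python
import math

# Double dispatch via the visitor pattern: the runtime types of BOTH the
# shape and the visitor determine which visit_* method finally runs.

class Circle:
    def __init__(self, radius):
        self.radius = radius
    def accept(self, visitor):              # boilerplate, repeated per class
        return visitor.visit_circle(self)

class Square:
    def __init__(self, side):
        self.side = side
    def accept(self, visitor):              # the same boilerplate again
        return visitor.visit_square(self)

class AreaVisitor:                          # one visit_* method per shape
    def visit_circle(self, c):
        return math.pi * c.radius ** 2
    def visit_square(self, s):
        return s.side ** 2
```

Adding a new shape means touching every visitor, and adding a new operation means writing one visit_* method per shape - the boilerplate grows multiplicatively.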

In functional programming it's different: the central element is the function, and all parameters are equal. To dispatch between different implementations at runtime, most fp languages support pattern matching, which is much more powerful even than multiple dispatch because it allows dispatching on the deep structure of the data and not only on the type. But even much simpler problems show the trouble with methods:

Let's look at the Java runtime libs. Imagine you want to add a new method to a class from those libs. Maybe you want a method 'trimLeft' for Strings which works like the standard 'trim' but only trims the left side of the String. It's not possible. You can't extend String because it's final (for otherwise good reasons), and even if you could, all the Strings you don't create yourself would still be 'standard strings' without a trimLeft method. So the only way to do it is to invent a new class, maybe 'StringUtils', and put your 'trimLeft' method in it as a static method. So every time you want to trim a string you have to remember that 'String.trimLeft' doesn't exist and that you have to call StringUtils.trimLeft(string) instead.

In a functional (and even procedural) language that's not a problem: Because all operations are implemented via functions, you simply define

function trimLeft(s: String);

and that's it. Totally equal to a function which resides in a lib, and you can import your own extension module wherever you want and use it for every String.
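In Python this might look like the following sketch (Python's built-in str actually already ships an lstrip() method, so this is purely illustrative):

```python
def trim_left(s: str) -> str:
    """Like trim(), but only removes leading whitespace."""
    i = 0
    while i < len(s) and s[i].isspace():
        i += 1
    return s[i:]
```

The function lives in an ordinary module; importing it makes it usable with every string, with no need to touch or subclass the built-in type.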

OK, the Python/Ruby/Smalltalk/etc. user will point out how easy it is in their favorite language to extend a given class later. But even this has its disadvantages, because it only works in dynamically typed languages, and there are some very valid reasons why dynamic (or latent) typing is a bit problematic. Also, it doesn't solve the problem that 'self' isn't equal to the other parameters.

Look at a simple operation like addition. In OOP you write something like v1.add(v2). This is totally asymmetric and doesn't reflect the structure of the addition. Compare this to the procedural/functional solution:

function add(v1: Int, v2: Int):Int;
function add(v1: Float, v2: Int):Float;
function add(v1: Int, v2: Float):Float;
function add(v1: Float, v2: Float):Float;

As easy as it gets, and all the adds are defined together at one point in your program. And if you decide to extend 'add' with your own type, for example a new type 'Complex', you simply add some new definitions in your library. In a typical OOP language you have to modify Int.add instead, and Float.add, and BigInt.add etc. Even if that's possible, it's still lots of work, and it's non-local: the changes are scattered across several classes instead of being bundled in one single module.
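A minimal sketch of this kind of open, symmetric dispatch in Python (the table and helper names are invented for illustration): the implementation is chosen by the types of both arguments, and a new type is supported by registering new entries, without touching any existing class.

```python
# Symmetric dispatch on the types of BOTH arguments via a plain table.

ADD_IMPLS = {}

def defadd(t1, t2, impl):
    ADD_IMPLS[(t1, t2)] = impl

def add(a, b):
    # dispatch on the runtime types of both arguments
    return ADD_IMPLS[(type(a), type(b))](a, b)

defadd(int, int, lambda a, b: a + b)
defadd(int, float, lambda a, b: a + b)
defadd(float, int, lambda a, b: a + b)
defadd(float, float, lambda a, b: a + b)

# Later, in your own library: extend 'add' with a new type, in one place.
class Complex2:
    def __init__(self, re, im):
        self.re, self.im = re, im

defadd(Complex2, Complex2,
       lambda a, b: Complex2(a.re + b.re, a.im + b.im))
```

All the 'add' cases stay bundled in one module, and extending the operation is local to the library defining the new type.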

Because of this, OOP is in fact less extensible than functional and even procedural programming. Sure, there are ways to solve this problem (think of C++'s friend functions, for example), but those quite often turn out to be borrowed from fp-land and often conflict with other parts of the language.


Also have a look at Part 2 and Part 3 of this series.

16 comments:

Anonymous said...

Your article (especially the second part) makes several excellent points. However, there are also several points which are more contentious, or even wrong. The order of the following is (roughly) as in your article.

0) Adding closures does ease software development (slightly), or at least adds a level of convenience. While throwing them into a language without thought is not a good idea, like any other feature, anecdotally, I'm a lot happier programming in languages with closures (Lisp/Haskell/Python/Ruby/Smalltalk/...) than languages without them, though closures are not the only reason why this is the case.

1) Single dispatch (specialized 'self') is not the only way to do things. CLOS, for instance, uses multiple dispatch; multiple dispatch can also be done via libraries in some languages, such as Python.

2) Pattern matching isn't limited to functional languages.

3) The 'add' example speaks of poor design, but it is not a problem of OO. Smalltalk, for instance, is a pure (single-dispatch) OO language, and (at least in VisualWorks, the implementation I've used most) the way this is handled is to send a message to the argument of '+'; i.e., when the receiver is an instance of Integer, "theArg sumFromInteger: self" is then sent; no changes to Integer are needed for this to work for new types of numbers -- they merely need to implement a sumFromInteger: method.
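A rough Python transliteration of that Smalltalk double-dispatch idiom (class and method names invented for illustration): '+' forwards to the argument, which supplies the type-specific equivalent of a sumFromInteger: method.

```python
# Double dispatch: the receiver forwards to the argument, which knows
# how to add itself to an integer.

class MyInt:
    def __init__(self, v):
        self.v = v
    def __add__(self, other):
        # first dispatch chose MyInt.__add__; now dispatch on the argument
        return other.sum_from_int(self)
    def sum_from_int(self, other):
        return MyInt(other.v + self.v)

class MyFraction:
    def __init__(self, num, den):
        self.num, self.den = num, den
    def sum_from_int(self, other):
        # a new number type only implements this; MyInt stays untouched
        return MyFraction(other.v * self.den + self.num, self.den)
```

With this in place, MyInt(1) + MyFraction(1, 2) works without any change to MyInt.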

4) Functional and OO programs tend to be extensible in different ways: functional programming makes adding functions easy, but adding or changing data definitions is often more easily handled with OO.

5) I think there's something to be said for multi-paradigm languages, but discussing this goes too far afield.

Pupeno said...

That functional programming in the Haskell way, or the Erlang or Scheme way, is more powerful than Java, with its so-static-everything, single dispatch and tons of other limitations, is no news (well, at least for me).
But I'd like a comparison of pure functional programming against something more powerful, like CLOS.
I think an important point would be that dispatching according to pattern matching instead of type is more powerful because you can dispatch according to the data itself.
Yet, I am not sure how the lack of internal state in objects would help, or be a problem, in solving problems that are naturally object-oriented, like GUI building.
I am interested though because I'd like to convince myself to use Haskell :)

Karsten Wagner said...

Closures are a very nice thing, but they're far from being the philosopher's stone of programming. Sometimes even the use of iterator objects is more powerful than iteration via closures. Think of iteration over a tree with the usual closure approach: you have a call like forEach(tree, lambda(el) { ... }). But what if you want to compare two trees elementwise? You need parallel iteration, or you have to create a new 'forEach2' function which can be called like forEach2(tree1, tree2, lambda(el1, el2) { ... }). The problem is that writing a forEach2 is much more difficult than writing a forEach, and if you then want a forEachN, things don't get any easier.

But there are ways out of this problem: one is the use of lazy evaluation. Then you simply have to write a transformer which turns a data structure into a lazy list, and then you only need multiple forEach functions over those lazy lists. But having general lazy evaluation (while a nice abstraction) always carries a definite performance penalty. Another way to go is to use non-mutable iterators with yielding. This would limit the amount of required 'laziness' to certain parts of the program and is much easier to optimize.
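Python's generators are exactly such 'non-mutable iterators with yielding'; as a sketch (tree encoding chosen for illustration), a transformer that flattens a tree into a lazy stream makes N-way parallel iteration trivial via zip:

```python
from itertools import zip_longest

def leaves(tree):
    """Lazily yield the leaves of a nested-list tree, left to right."""
    if isinstance(tree, list):
        for child in tree:
            yield from leaves(child)
    else:
        yield tree

def trees_equal(t1, t2):
    """Elementwise comparison via parallel iteration over two lazy streams."""
    missing = object()                      # sentinel exposes unequal lengths
    return all(a == b for a, b in
               zip_longest(leaves(t1), leaves(t2), fillvalue=missing))
```

No forEach2 or forEachN has to be written: zip (or zip_longest) combines any number of independent lazy streams, and laziness stays confined to the generators.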

About multiple dispatch: I mentioned it in my article, but it has two problems. It's less powerful than real pattern matching (because pattern matching can dispatch not only on types but also on values, even 'deep down' in some data structure). And it makes encapsulation problematic, because there is no single 'this' (or 'self') parameter anymore which you can use as a base for access restrictions. In fact I think that multiple dispatch already has a functional flavour; a 'method' call with multiple dispatch even looks like a standard function call.

Yes, you can add pattern matching to OOP too, but is it still OOP then? Or wouldn't this simply be a multi-paradigm language? While pattern matching fits well into functional, relational and declarative programming, it really doesn't have much to do with the central concepts of OOP.

Smalltalk and add: it's still less readable and composable than using functions with some kind of ad-hoc overloading. If you want to add a new data type to 'add', you still need to add lots of sumFromMyNewDatatype: methods to several classes. Also, with functions, only the functions visible in the current namespace (composed of the imported modules) are considered for being called. So one could have different implementations for a certain datatype and use only one of them, depending on which modules you import in the current unit; this is also important to avoid name clashes.

And I really dispute the notion that changing data structures is simpler in OOP than in other paradigms. If you look at the example in part 2 you see what's necessary just to add a simple cache field to a data structure. In fact it's even worse; I will post an article about this topic soon.


CLOS is also not really OOP; they don't call it 'generic functions' for a reason. CLOS is an interesting (but IMO unnecessarily complex) approach to unifying concepts from OOP with concepts from functional programming. If you look at CLOS programs you will see that there is often lots of 'normal Lisp style' programming all over the place, so it's hard to call this really OOP. It's somewhere in between, but I doubt that it's a good idea to mix things up this way, even if it has its advantages.

luke breuer said...

You should look into C# 3.0 some more. Your "this pointer" deal is partially dealt with via Extension Methods, which are new to C# 3.0. They are methods which can be "attached" to any object of one's choice and are bound to at compile time, with an object's own method taking priority if the names and signatures collide.

I'm not sure why you think methods with multiple "this"es of different types belong in any of the "this"es' classes. I would argue that OO methods specific to one type are more readable when called via dot notation, while functions that "equally" encompass more than one type might be more readable without it. I'd like to hear a bit more from you about why you want to eliminate dot notation for methods, or what sort of notation (different from just function calls with no dot) you recommend instead.

Karsten Wagner said...

C#'s extension methods are more a proof of the validity of my claims. And in fact they're not a really good solution. As I said in my article, even real multiple dispatch (which is an extension of the idea behind extension methods) doesn't really solve the mentioned problems, because of the visibility problem.

But because normal functions with pattern matching (which is more powerful than method dispatch) and a good module system can simply solve this problem, I think OOP is simply on the wrong track (and not only with this problem; there are lots of others I have written and will write about).

> I'm not sure why you think methods with multiple "this"es of different types belong in any of the "this"es' classes.

I don't think so. But with OOP you have to specify visibility: the class is a kind of module of its own.

> I'd like to hear a bit more from you about why you want to eliminate dot notation for methods,

The problem is not the notation, it's the special abilities of the 'this' parameter. If the compiler would simply always rewrite x.func(y, z) to func(x, y, z), we could use whichever notation is appropriate in a certain case without having the mentioned problems. I consider x.size + y.size more readable than add(size(x), size(y)), but with some syntactic sugar we could write both while the semantics remain those of the latter.

Troll said...

@Karsten Wagner,

You said in the comments, "Sometimes even the use of iterator objects is more powerful than iteration via closures."

Isn't the phrase "iterator-objects" proof that you think in an object-oriented paradigm? Also, you need to learn that object-orientation is a paradigm, a way of thinking. It is not a language; it is an abstraction. It is a layer up somewhere away from the bottom in bottomupness.

Finally, you definitely don't sound like a linguistics person, so who are you to tell me how a language should be designed? Do yourself a favor and buy Writing Systems: A Linguistic Approach by Henry Rogers. Doing so will free you from this vacuous, pretentious, uber-confident ivory tower notion that the language others prefer to talk in is "obsolete".

Karsten Wagner said...

> Isn't the phrase "iterator-objects" proof that you think in an object-oriented paradigm?

I've written programs in the functional, object-oriented and declarative paradigms, so I'm able to compare all of them. Iterator objects are a typical OOP concept and really do have certain advantages over closure-based iteration (namely arbitrary parallel iteration). So why not point it out?

> Also, you need to learn that object-orientation is a paradigm, a way of thinking.

Thanks for the hint. Maybe using OOP for about 20 years really isn't enough to notice this.

> It is not a language; it is an abstraction.

Of course. It's the abstraction which has the mentioned problems. And because of this, the languages which are customized to support this abstraction have those problems too. So maybe we have to switch to another, better abstraction.

> Finally, you definitely don't sound like a linguistics person, so who are you to tell me how a language should be designed?

I know enough about programming languages to be allowed an opinion about them. I have been programming for more than 25 years now, have used around 50 different programming languages over the years, and know about many more. What's your qualification?

If you can show where I am wrong, please do; I like good counter-arguments. But your ad hominem attack only shows your inability to bring up a real argument. And if you're not interested in what I have to say: just don't listen.

Troll said...

@Karsten Wagner

> I have been programming for more than 25 years now, have used around 50 different programming languages over the years, and know about many more. What's your qualification?

I don't have much in the way of qualifications. I'm a very young man, with a relatively clean slate. I believe it's my best and worst quality. I (have) program(med) in Python, Ruby, Java, C++, CL, OCaml, Perl, and PHP. I read a lot of computer science literature. Some of what I read is very academic, some not so academic -- like blogs. On a given day I read an estimated 200 printed pages of computer science.

I also hang around a lot of experienced programmers, and usually the sense I get from them is that they spend way too much time hassling over language details and not enough time being productive. They also enjoy making sweeping statements without real case analysis, which points to more significant problems.



My main problem with most of your thoughts on this topic is that YOU CONSISTENTLY DO NOT PROVIDE ARGUMENT. Your topic sentences stop an idea as soon as they start one, so it's hard to get a feel for how far your 20 years of experience have brought you. If you want to declare "OOP is dead", then you can't just passively enumerate problems.

As an example, you say, "CLOS is also not really OOP, they don't call it 'generic functions' for a reason." Here, you make a dramatic statement and then beg off backing it up with logic. Where are the detail sentences to support this statement? When you do take the time to step through your thoughts, you make good points: your short discourse on closures and why they are not the philosopher's stone of programming was good.

Karsten Wagner said...

@Troll:

> I also hang around a lot of experienced programmers, and usually the sense I get from them is they spend way too much time hassling over language details and not enough time being productive.

Maybe this should make you think. In programming there is always lots of "hassling over details", because every good programmer wants to do a good job and today's programming languages often make this very hard. Being productive doesn't mean putting out lots of lines of code; it means putting out code which does the job and which is maintainable.

Often a problem can be solved straightforwardly by copy&paste programming. But this makes programs very hard to maintain: if you have a 100,000-line program which should only be 50,000 lines because of extensive copy&paste programming, then the program will be nearly impossible to maintain later, even if the initial write was relatively quick and straightforward. The reason is that now every change to some basic structure (which has been copied&pasted) has to be replicated numerous times - and always with slight variations (because in c&p programming each 'paste' has been changed a bit, or otherwise it could simply have been a function/procedure/method which was called instead of copied). After several of those changes the program will be an incomprehensible chaos.

> My main problem with most of your thoughts on this topic is that YOU CONSISTENTLY DO NOT PROVIDE ARGUMENT.

Hm, I think I provided arguments. But it's hard to provide something very convincing if you don't have the time to write a whole book (and I admit, I'm not a good writer, especially when I have to write in English). So I try to pick out certain aspects and try to show why and where there are problems.

> If you want to declare "OOP is dead", then you can't passively enumerate through problems.

I've also given certain more fundamental reasons for this - sometimes accompanied by a small example problem. The problem here is that some people will now say 'Pah, that's only Java; if I use my favorite language this problem won't exist'. But Java is a rather common language and also very 'pure OO', so it's suited as a common base for those examples (everybody knows Java, but far fewer know Python or even CL). Many other languages which are more multi-paradigm can be used differently, but then the examples wouldn't demonstrate the problems of OOP anymore. This is not about a certain language - it's about a paradigm!

If we look at the Scala language, for example, which solves some of the mentioned problems, we also have to ask whether the solution is really still OOP.

I've written about the following fundamental problems of OOP:

- that having references compromises encapsulation
- that explicit and distributed state leads to additional complexity
- that binding methods to classes is less extensible than using functions over types
- that in OOP a programmer has to create lots of dependencies between his objects for performance reasons alone
- that inheritance as a single is-a relation is too simple to build a paradigm around
- that OOP fixes certain access paths in early stages of design and makes it very hard to allow multiple access paths at the same time (this will be the sole topic of one of the next articles, because it's a really huge problem)

I think that OOP is fundamentally flawed as a paradigm for general programming because of the above fundamental problems. But it's quite possible that I have to make them a bit clearer.

> As an example, you say, "CLOS is also not really OOP, they don't call it 'generic functions' for a reason."
> Where are the detail sentences to support this statement?

That was in a comment where I chose not to elaborate too much on it. But it's not true that I didn't give a reason for this claim. I wrote:

"CLOS is an interesting (but IMO unnecessarily complex) approach to unifying concepts from OOP with concepts from functional programming. If you look at CLOS programs you will see that there is often lots of 'normal Lisp style' programming all over the place, so it's hard to call this really OOP. It's somewhere in between, but I doubt that it's a good idea to mix things up this way, even if it has its advantages."

I thought these were reasons enough, and it's quite obvious (as long as you know a bit about CLOS). CL is a multi-paradigm language, which makes it really hard to tell which paradigm is used where. Most CL code is written in an imperative style with some non-pure functional elements. CLOS puts an object system on top of it which is based on multiple dispatch. But multiple dispatch is in fact somewhere in between the classic OO paradigm (where objects send messages to each other) and the functional paradigm (where everything is a function call). So bringing up CLOS as an example of why OOP can work only shows that you need additional concepts from functional programming to make OOP work. And then I ask: why not take the full step?

I also know that functional programming has its problems too, but those problems are different, because fp doesn't say anything about data representation: while most modern fp languages use something like the ML type system, that's not a fundamental part of fp.

In OOP, on the other hand, the core is the representation of data by individual objects which are connected among themselves, each having a state of its own, and which can invoke operations on each other by sending messages that are chosen by the receiving object based on its type.

This is the basis of the OO paradigm which I consider useless and harmful for general programming (even if it may work for certain kinds of domains), and this is what I tried to point out.

Nathan said...

One thing I hate about Web 2.0 is that anyone who knows a programming language or two is suddenly compelled to write an "X is dead or considered harmful" blog post. It's like a rite of passage.

Karsten Wagner said...

@Nathan: Yeah, I hate this too. But why don't you post that in the blogs of those people who do that?

BTW: if anybody has a problem with a certain statement in this blog, just say so. Bitching around making ad hominem attacks is nothing more than a display of one's inability to find real arguments (and I suppose that's also true in the somethingawful forums).

llinear said...

There is no doubt that OOP is dying. Its death has been seriously announced since 1995. Those who do not see that are way behind the trend, and probably have nothing to do with research. It should be obvious to anyone who has poked at academic work in the field.
The average programmer will probably never understand half of what is stated in Karsten's post.
Troll: I wonder what kind of papers you are reading, since you fail to understand that.
I do not get why Karsten even bothers to answer such idiotic posts...

Eric said...

llinear:

I would venture to say OOP's problems were apparent even before '95. The death of 'logical positivism' at the hands of philosophers such as Popper and Quine pointed out the flaws of any logic system built on strongly defined, immutable terms (such as, oh, let's say "Class" definitions). The very existence of "Aspects" demonstrates the validity of their arguments.

However - OOP languages which are not class-based (JavaScript, ...) or which allow overriding definitions (Ruby, ...) go a long way toward alleviating OOP pains. I think OOP will be around for quite a while - just not in the form of languages like Java or C#. This raises the question: are these new languages even really OOP?

Shawn Regan said...

Where do you see the use of generics as a way to solve some of the problems I have read about in your blog and the other comments?

Imp said...

OOP is not dead, but it is definitely dying... There are a lot of problems which OOP cannot handle... One of the most serious is the lack of an efficient and safe way to utilize multi-threading. Objects with mutable state are a pain in the ass when you write a massively multi-threaded system... Locking is the worst solution; Software Transactional Memory seems much more promising... And pure functional programs can be parallelized much more easily.

Andreas Rumpf said...

Please check out Nimrod: it tries to combine the different paradigms into a coherent and simple language.

(In Nimrod "a.add(b)" is rewritten to "add(a, b)" by the compiler.)