What's the problem with state? Simple: it can change. And if it changes and the change isn't reflected accordingly in other parts of the state, we have a bug. Let's look at a simple example:
int calcSum(List<Integer> data) {
    int n = data.size();
    int sum = 0;
    for (int i = 0; i < n; i++) {
        sum += data.get(i);
    }
    return sum;
}
This code looks perfectly valid, but in fact it contains a hard-to-find bug. Imagine that, while this method is executing, another thread removes one element from 'data'. The result is a runtime error: n is now bigger than data.size(), and for i = n - 1 the call data.get(i) throws an IndexOutOfBoundsException.
Of course every Java programmer knows how to prevent this kind of problem (putting the method body into a synchronized(data) block), but it shows how easy it is to make mistakes which almost never happen - and then strike in a critical moment. The reason is mutable state: the method above depends on the state of 'data', and if this object is changed somewhere else while the method is executing, we have a potential problem.
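For completeness, here is a minimal sketch of that fix. Note that synchronized(data) only helps if every thread that modifies 'data' synchronizes on the same object:

int calcSum(List<Integer> data) {
    synchronized (data) {   // no other thread can change 'data' while we hold its lock
        int n = data.size();
        int sum = 0;
        for (int i = 0; i < n; i++) {
            sum += data.get(i);
        }
        return sum;
    }
}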
But that's not all. Let's look at another example: caching, which is often required to get better runtime performance.
import java.util.ArrayList;
import java.util.List;

class CustomerList {
    private List<Customer> customers = new ArrayList<>();
    private float average_age = -1;   // -1 means: cache is empty

    public void notifyChanged() {
        average_age = -1;             // invalidate the cache
    }

    public float getAverageAge() {
        if (average_age < 0) average_age = calcAverageAge();
        return average_age;
    }

    private float calcAverageAge() {
        if (customers.size() > 0) {
            float sum = 0;
            for (Customer c : customers) sum += c.getAge();
            return sum / customers.size();
        }
        return 0;
    }

    public void addCustomer(Customer c) {
        customers.add(c);
        notifyChanged();
    }
}
The calculation of the customers' average age is cached here in a field. If you add a customer, the cache is cleared and will be recalculated the next time it's required. This idiom is quite useful and common, but it has its dangers. The danger is that the state of the list isn't automatically reflected in the state of the 'average_age' field. If someone changes the age of a customer externally, the whole caching will fail, because this class has no idea that a change happened. There are several methods to avoid this problem; one is to simply add a reference to the containing CustomerList object to every Customer object and then add 'notifyChanged()' calls to each method which changes relevant data of the Customer object, as sketched below.
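A minimal sketch of that back-reference approach might look like this (the 'owner' field and 'setOwner' method are invented here for illustration):

class Customer {
    private int age;
    private CustomerList owner;   // hypothetical back-reference to the container

    void setOwner(CustomerList list) {
        owner = list;             // must be set when the customer joins a list
    }

    public int getAge() {
        return age;
    }

    public void setAge(int age) {
        this.age = age;
        if (owner != null) owner.notifyChanged();   // every mutator needs this call
    }
}

And CustomerList.addCustomer() would additionally have to call c.setOwner(this) - exactly the kind of registration mechanism described next.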
But you see how this seemingly simple optimization increases the complexity of the program. First we only have the idea with the cache, then we discover that we need notifications, then we have to add some registration mechanism so that every Customer object knows its container. And if a Customer can be part of multiple CustomerList objects, we need the full-fledged observer pattern, have to identify each and every position in the code which can change a cached value, etc. Imagine you have to maintain a complex program which uses several of those mechanisms, plus optimizations such as multiple notification messages (so that the average_age cache isn't cleared if only the address of a customer changes). You see how complex our once simple program is getting.
All this trouble has only one reason: mutable state. If you change one part of the state (the age of a customer), all depending parts of the state (here average_age) have to be updated too. In complex programs we can even get circular dependencies and cross-linking. Caches are only one part of the problem; the same thing happens with each and every dependency in the program - for example, consider a GUI view which has to be updated each time the underlying data changes. And if the whole thing has to be multithreaded, you additionally have to sprinkle lots of 'synchronized' blocks all over the program.
But I think most Java programmers have known that for a long time. So have programmers who use Python, Ruby, Smalltalk, etc. It's not a problem of a certain language, it's the programming paradigm: object orientation. Object orientation means that you have to do much 'by hand'. Each object, each dependency has to be maintained explicitly by the programmer. Sure, with clever libraries and frameworks all this is solvable, but you still have to do much by hand. Is this complexity really necessary?
The answer is no. But we need a different paradigm. Most programmers already know one: relational database management systems. Those systems do lots of caching and index updating totally automatically in the background. You only have to give an SQL query and the system gets the answer. It's using caches, it's updating indices, optimizing the order of table accesses and lots more - but the programmer simply has to provide a simple query. Sure, even the best RDBMS has certain limitations, but it shows a way to go.
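To make the contrast concrete, here is a hedged JDBC sketch (the table and column names are made up): the caller states what is wanted; caching, index selection and access order are the database's problem, not the caller's:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

class CustomerQueries {
    // Declarative: we say *what* we want, the RDBMS decides *how* to compute it.
    static float averageAge(Connection db) throws SQLException {
        try (PreparedStatement stmt = db.prepareStatement("SELECT AVG(age) FROM customers");
             ResultSet rs = stmt.executeQuery()) {
            return rs.next() ? rs.getFloat(1) : 0f;
        }
    }
}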
But there are other ways to reduce the dangers of mutable state. One is to simply have none. But is that possible? Can we program without mutable state? Yes, and many already know how: I'm talking about referential transparency (most often found in functional programming). It simply means that there are no 'side effects' in a program, thus there can't be any mutation of data. How does it work? We require that each and every call to a function always gives the same result if we provide the same parameters. A language with referential transparency solves most of the problems mentioned here. Since data isn't mutable, there can't be concurrent modifications, and multithreading is as simple as it gets. And without modification the compiler can do caching on its own: it simply has to look at the input parameters of a function, and if those are the same as before, it can return the same value as before.
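That compiler-level caching is essentially memoization. A minimal library-level sketch of the same idea in Java (the names are illustrative):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

class Memo {
    // Only safe because 'pure' is referentially transparent:
    // the same argument is guaranteed to yield the same result.
    static <A, R> Function<A, R> memoize(Function<A, R> pure) {
        Map<A, R> cache = new ConcurrentHashMap<>();
        return arg -> cache.computeIfAbsent(arg, pure);
    }
}

A call like Memo.memoize(someExpensiveButPureFunction) then never computes the same result twice.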
Of course referential transparency has its disadvantages: it's a bit harder to program with, because you can't change data. So how do you do anything interesting with it? Imagine a simple interactive application which reads some data from the user, calculates something and writes the result to the screen. How could we write this in a language with referential transparency? (I use a Java-like syntax)
function (int, IOState) readData(IOState io);
function IOState writeData(IOState io, int data);
function int calc(int value);

function void myLittleApp(IOState io) {
    (int val, IOState new_io) = readData(io);
    if (val != 0) {
        int res = calc(val);
        IOState new_io2 = writeData(new_io, res);
        myLittleApp(new_io2);
    }
}
The three functions above are defined somewhere else. The interesting part here is the 'IOState' parameter. It holds the state of the IO subsystem which is used to read and write data from and to the terminal. Without this IOState, the 'readData' function would always return the same value, because with the same input each function in a referentially transparent language has to return the same result. The same goes for the 'writeData' function, which uses IOState to chain the writes in the right order. The last call 'myLittleApp(new_io2)' loops the application until someone enters a '0'. Recursion is necessary because we can't change a 'variable' (that would be mutation and could lead to side effects, which are prohibited) - but it's cheap here, because the function is tail recursive and a good compiler can optimize the call away. Each value is only assigned once, so in fact we can only copy something and never change it.
But you see that it's really possible to write interactive programs without a single mutation of data. Everything happens by creating a (sometimes modified) copy of existing data. And if the user enters the same value twice, the compiler could safely return the earlier result, because a call to 'calc' only depends on its single parameter.
This is the real appeal of functional programming, not the existence of closures or continuations. With referential transparency it's not only possible to remove lots of dangerous constructs, avoid concurrency issues and get automatic caching; we also get better ways of testing (if each function only depends on its parameters, testing is much easier, because we don't need to set up a complex background state), and the compiler can make lots of optimizations and even parallelize the program by itself.
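As a small illustration of the testing point, assuming JUnit 5 and an invented stand-in for the 'calc' function above:

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class CalcTest {
    // Hypothetical pure function under test.
    static int calc(int value) {
        return value * value + 1;
    }

    @Test
    void sameInputGivesSameOutput() {
        // No fixtures, no mocks, no background state to set up first.
        assertEquals(calc(21), calc(21));
        assertEquals(442, calc(21));
    }
}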
All those advantages are paid for with a different way of writing programs, and with some new difficulties which in turn have new means of solving them (google for 'Monads Haskell' if you want to know more). But it shows that the intrinsic problems of object oriented programming are in fact solvable if we use a different paradigm. Relational databases and functional programming are only two examples; there are more. But before I delve deeper into this, I will try to list other reasons why OOP will soon be dead.
Also have a look at Part 1 and Part 3 of this series.
Comments:
This explicit IOState isn't useful. It's better to have a main function that's a list of impure functions:
main = [read, write]
This will be executed like: write(read()), but the main function is still pure because it always returns the same list.
Perhaps this has already been pointed out, but OOP doesn't necessarily presuppose mutable state. See, for example, OCaml's support of OOP. You can have objects with slots and update them in a non-mutable way, preserving an applicative style.
Karsten,
I think you are being disingenuous here. "Database programming" works without side effects because of the elaborate frameworks created in C++ to make that happen. You can't really claim that OOP fails because a framework created in OOP does a better job than raw OOP.
Instead of creating a random sample of code where several things are not defined, why not just redesign the java example above with notification in the functional language of your choice? In that way a reader can make an apples to apples side by side comparison of the value of what you are claiming is a different paradigm.
Second, let's actually look at your code sample.
First of all what alters IOState? Something in the OS? Are you again deferring the problem of synchronization to something outside of your programming language?
Second
(int val, IOState new_io) = readData(io);
I guess somehow readData returns two different values to the newly created val and new_io.
if (res != 0) {
res is nowhere defined. Isn't the state of res "mutable"? Isn't this in fact an indication of the existence of state? Worse, this evaluation is the exit of your recursion - why does the result of the calculation determine when the exit should occur?
Essentially your recursion seems to set up a polling operation to some external data source. Why is recursion necessary for this? I find 90% of recursion can be better expressed as a normal iteration, i.e.

while (new_io != exit_val) {
    do io operations
}
First: Sorry for the 'res' mistake, it should of course read 'if (val != 0) {'. I've corrected it.
denis: I know about OCaml, but since OCaml has mutation too, you can program in the standard OOP way and you will get the same problems (in fact most OCaml programs I know really do this to a certain degree). Also, I've described in another post the problems with later extensibility, which is also an inherent problem of OOP and an advantage of a functional/procedural approach.
smacfarl: The point is that from a user perspective an SQL RDBMS is an opaque system which solves many of the described problems. Of course this system has to be implemented somehow. In this regard it's like every compiler for any other language - just as Java is translated to VM code and then executed by a very complex C++ program called the JVM. IOState isn't altered, a new version of it is generated with each call to writeData. Of course deep down in the compiler some mutation of internal structures will happen, but the point is the model the programmer sees. That's called 'high-level programming'. Also, iteration is of no use in side-effect-free programming: imagine a loop without a way to change a value.
I could've written the sample program in a much more compact form by using monads (which encapsulate the chaining of data thru the program without having to write it explicitly for each call). But that would only distract from the core idea here: that it's possible to transform interactive programs into a side-effect- and mutation-free version, and that this gives the compiler lots of optimization opportunities and eliminates several severe sources of potential bugs. I will write more about this topic in the future.
The benefits of FP are obvious. However, there are some limitations. For example, let's say I want to implement a bank account with facilities for withdrawal and deposit. How will I do that in an FP language?
The example you mentioned works fine in FP because the same operation is executed on a shared state. That's why the state can easily be handled as a function parameter or a local variable inside a function.
The moment you have more than one logical operation on the same state (like an account balance), the FP solution becomes very convoluted.
That is why you have impure functions. Impure functions have local state, which should be enough.
But Haskell has no impure functions, I think you have to represent the bank balances by monads. --phr
Haskell is not the end of functional programming; there are other functional languages which are a bit less pure. Haskell has monads, which allow the encapsulation of chaining state thru multiple function calls and also make it possible to evaluate functions in a certain order. It's not a Haskell-only concept (in fact you can use it in every programming language which has closures), but Haskell has some syntactic support to make it easier. The problem is that monads are a bit hard to grok for most people.
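As a rough illustration that the chaining idea really only needs closures, here is a hedged Java sketch of a hand-rolled state-passing combinator (all names are invented; real monad libraries offer far more):

import java.util.function.Function;

// The result of a stateful step: the new state plus a value.
record Result<S, A>(S state, A value) {}

// A computation that takes a state and returns a Result.
interface Stateful<S, A> extends Function<S, Result<S, A>> {
    // Chain two steps: run this one, feed its value and new state into the next.
    default <B> Stateful<S, B> then(Function<A, Stateful<S, B>> next) {
        return s -> {
            Result<S, A> r = this.apply(s);
            return next.apply(r.value()).apply(r.state());
        };
    }
}

With a counter as the state, a step like tick = s -> new Result<>(s + 1, s) can be chained as tick.then(a -> tick), and the state is threaded through without a single mutation.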
And: State is inevitable, you can't totally avoid it, because every non-trivial program has to interact with the world - which isn't state-free.
But it's a big difference whether you have mutable state only in certain parts of your program, carefully isolated - or whether it's splattered all over the place like in OOP.
So one idea is to make all 'interesting' functions fully referentially transparent and have some simple 'driver' functions which can modify state but let all the work be done by pure functions. So you have the advantages of FP combined with the unavoidability of state. Haskell does this with monads, but there are other, less difficult to grasp ways.
In fact you can avoid mutable state in most parts of most real-world programs by the method I wrote about in the article; mutation is only necessary when you have to do I/O.
For example, in the bank-account example you mentioned: the 'database' with all the bank accounts can be an immutable object. If you want to 'change' it, you simply create a changed copy of it and chain it thru your program. Problem: this can become difficult for 'deep' structures, and you may need something like a 'zipper' to avoid time-consuming search & replace.
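A hedged sketch of that copy-on-'change' idea in Java (the Bank class and its method names are invented for illustration):

import java.util.HashMap;
import java.util.Map;

// An immutable account 'database': every update returns a fresh copy,
// the old version stays valid and unchanged.
final class Bank {
    private final Map<String, Long> balances;   // account id -> balance in cents

    Bank(Map<String, Long> balances) {
        this.balances = Map.copyOf(balances);   // immutable snapshot (Java 10+)
    }

    Bank deposit(String account, long amount) {
        Map<String, Long> next = new HashMap<>(balances);
        next.merge(account, amount, Long::sum);
        return new Bank(next);                  // chain the new Bank thru the program
    }

    Bank withdraw(String account, long amount) {
        return deposit(account, -amount);
    }

    long balanceOf(String account) {
        return balances.getOrDefault(account, 0L);
    }
}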
But I believe it's possible to overcome those problems with other approaches to defining data structures instead of the reference-based one.
As someone else pointed out, OOP /=> mutable state. Pierce's book Types and Programming Languages spends some time distinguishing the two. I think this is more than an academic argument. Throwing OOP out because most OOP languages support mutable state is throwing the baby out with the bathwater. OO provides ways of modeling that are useful and worth implementing in other languages, FP included! While I completely agree with the content of your blog, I think the title is thus somewhat misleading.
I'm not sure if you can really talk of OOP (in the sense of how it's being used in the real world) if you have only immutable objects. But even if you can, there are other problems with OOP I've talked about in other articles (part 1 and 3). And I'm just writing a new one about the problems with the OOP way of creating data structures, which I will post here in a few days.
One of the main problems of OOP is that you try to create lots of independent entities called 'objects' which have to be as separate as possible. But this leads to lots of problems, because in reality the boundaries you draw are rather arbitrary - which often leads to code-maintenance problems later.
There are always lots of dependencies between entities, and instead of trying to isolate things which are in fact very dependent on each other, it's better to build solid methods of handling 'bulks' of data instead of trying to cut it into pieces first.
OOP is quite nice in certain areas of simulation: if your problem really consists of independent 'agents' which communicate with 'messages' over relatively small interfaces, then OOP is a good way of representing this kind of problem. But I really doubt that this class of problems is that big and important in real-world programming. I think that the success of OOP is a result of the wish to create software like 'real world things': using discrete elements and joining them together. But software is different.
It depends on your characterization of OO. I think many would agree that if you have class inheritance, encapsulation, and subtype polymorphism then you have OO. None of this implies mutable state. In fact Haskell already supports 2 of the 3, so you could say that Haskell is well on its way to being an OO language.
Without an example, I don't really know what you mean by code maintenance problems. In fact OO was applied to software development precisely because it attacked the maintenance problem. In any case, any approach, be it OO, functional, logic, whatever, will have some arbitrary set of boundaries that it draws in order to abstract (see the Expression Problem for an example of this).
OO was adopted because as the nature of problems being tackled by software changed from numerical and algorithmic problem solving of the 60's and 70's to applying computing power to everyday tasks, an approach was needed for modeling the real world, in terms of things and what you could do to those things. It still provides a very good handle for doing that.
I'm in agreement with srinivas that the title of this series of posts is misleading. As a treatise on the limitations of OO and procedural programming and the advantages of functional, it largely succeeds; as a declaration that these facts imply the death of OO I think it's a bit overheated. After all, functional programming and mutable state are things that have been around for decades, and we've yet to see one fully abandoned in favor of the other. Good article nonetheless.
Your 'simple example' is buggy: you declared the function as void, so it can't return sum.
You are damn right, and I hope people will stop claiming "OOP without state is possible". Please be honest: it's not OOP anymore then, at least not as supported by the major programming languages C++/C#/Java. Typical Java code is full of state. Heck, it is even more stateful than procedural programming: what used to be a procedure call is now:
* creating an object, feeding parts of the data to the constructor,
* calling the method you're interested in, feeding other arbitrary parts of the data to the method,
* reading from the object to get the return value.