I'm glad my school does not have an "Everything is made of Java" mindset. There are FP classes here, and we use C as well as C++ (among others), so even our imperative programming is not all object-oriented.



There should be charities for those who have never even glimpsed outside of the box. Adopt-a-Drone, or the Functions are Data Foundation. It's a terrible illness and it destroys their quality of life.



I would use something like the golf ball example as an interview question, especially with students or fresh grads. Can they, even with some prodding, realize that a GolfBall class is unnecessary in that example? If not, I don't really think I want someone with that much of a conceptual mental block working for me.

But the burning question is: what should GolfBall inherit from?

Ball and SportsEquipment; Ball inherits from Sphere, which inherits from PlatonicSolid, which inherits from... It's classes all the way down!

Well, you make several good points, but as with anything good in life (sex, drugs, rock-n-roll, you name it), it is best in moderation. OOP is just one of the paradigms you should be using in your code. Anyone who tells you different is usually a first-year CS student who just finished their first "Intro to Java" class and can be safely ignored.

Also, first-class functions != functional programming; there is a very long history of first-class functions/blocks in OOP (see Smalltalk). FP is a much deeper mindset which goes beyond just "functions as data".

You should really look into some of the newer thinking around OO, and even some of the older, but less mainstream, thinking as well. For instance, roles (and the original concept of Traits) go a long way towards helping modular decomposition in OO not be so "entity" centric. You might be interested in this talk I just gave on Moose::Role at the Pittsburgh Perl Workshop this weekend. Towards the end of the slides it gives a number of examples of how roles can provide features that a class "does" where an "isa" relationship just wouldn't make any sense.

You should also look into some of the more multi-paradigm languages like Scala and OCaml, both of which provide an excellent hybrid of OO and functional paradigms.

And lastly, OOP != Java/C#/C++. There are some really nice OO systems out there in which modeling is not so "entity" centered. Take CLOS, for instance: it uses generic functions and classes, so that behavior is very clearly separated from state. There are many Scheme OO systems which expand on the CLOS concepts too. There is also prototype-based OO, which also leads to very different modeling approaches.

> However, I'm slightly worried about the approach taken in Perl 6 ...

Fear not: it might be OO under the hood, but this won't stop you from ignoring its OO-ness as much as you want. And as for efficiency, let the compiler writers worry about that :)

-stvn

You forgot to mention Self and its more popular cousin Javascript. JS is more OO than Ruby, but it doesn't have the concept of a class. My criteria for good software: Does it work?



> My criteria for good software: Does it work?

Can someone else come in, make a change, and be reasonably certain no bugs were introduced?

I'm curious. In your view, on what basis is JS more OO than Ruby? If anything, I'd say that it is the other way around. (Note that both are far more OO than Perl.)

For example, in Ruby if you sort an array of numbers, it defaults to sorting them as numbers. In JavaScript it defaults to sorting them alphabetically, no matter what their types are.

For another example, in Ruby I can trivially add a new method to Integer and then call my new method on the integer 5. In JS it works, but much less consistently. This works:

```javascript
Number.prototype.foo = function () {alert("Hi!")};
x = 5;
x.foo();
```

But this doesn't (at least not in Firefox):

```javascript
Number.prototype.foo = function () {alert("Hi!")};
5.foo();
```

That isn't what I expect from a completely object-oriented language!

Furthermore, the OO hooks in Ruby are much more pervasive than in JavaScript. For instance, I once tried to write a fairly efficient factorial function in Ruby. I found that it was more efficient than the routine to convert big integers to strings! So I replaced the routine to convert big integers to strings. I wouldn't even dream of messing around in the internals like that in JavaScript.
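As an aside, the `5.foo()` failure is a parser quirk rather than an object-model limitation: the tokenizer reads `5.` as the start of a decimal literal, so the method call never parses. A small sketch (the `double` method is invented for illustration) shows the workarounds:

```javascript
// Adding a method to every number through the shared prototype.
Number.prototype.double = function () { return this * 2; };

const x = 5;
console.log(x.double());    // 10 -- a variable works fine

// "5.double()" is a SyntaxError: "5." parses as a decimal literal.
// Parenthesizing the literal (or using a second dot) avoids it:
console.log((5).double());  // 10
console.log(5..double());   // 10 -- "5." is the number, ".double" the call
```

So the inconsistency is real, but it lives in the grammar, not in whether numbers are objects.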


> And lastly, OOP != Java/C#/C++

I deliberately tried to steer away from any particular programming language, but if you got the impression that I'm talking about those three, my apologies. The concerns I have about OOP go beyond single versus multiple inheritance, static versus "dynamic" typing, calling methods versus sending messages, etc. The point was not to compare programming languages, but to explain why the means of abstraction and combination in pure object-oriented thinking do not appeal to me in general.

> FP is a much deeper mindset which goes beyond just "functions as data".

I should know, having been an FP fanatic... My "functions are modules" argument doesn't mean first-class functions and doesn't even presuppose functional programming. What I mean by "passing modules as arguments" is meant as a generalization of what you can do in different programming languages. It might be implemented as being able to pass function pointers (C); or function names (ALGOL); or function references (Pascal). It might be implemented as passing closures (e.g. Scheme, Haskell, Perl, and too many languages to list); or passing objects; or doing something exotic. The point is that you can parameterize what code does as well as which state it starts the computation from ("non-module" parameters such as numbers and strings).

> And as for efficiency, let the compiler writers worry about that

But it's not even a concern for me... To me, a programming language is foremost a notation with which and in which to express ideas, usually algorithms. That we have machines that can use text written in the notation to do something is just a bonus. (A rather nice bonus, I must say.) This stand is partially hypocritical, but I can live with it.

If I am worried about pervasive OO thinking in Perl 6, it's because frequently I don't want to think in terms of objects. There are no "efficiency" worries -- I already know there are efficient implementations for message passing, delegation, virtual function tables, and whatever else goes with implementing these things. Just take a look at C++ or OCaml.

The concept of roles resembles Objective-C protocols, though with being able to define not only which functions the implementing class needs to provide but also some common functions that all classes implementing the role "inherit". However, this would again be a much more useful technique to think about if there were no mandatory link to objects and classes! (That's just me.) I'll install Moose::Role some rainy day, I promise.

--
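To make the "passing modules as arguments" idea concrete, here is a minimal sketch in JavaScript (the names `sumBy` and `measure` are invented for illustration): the function argument is the "module" that parameterizes what the code does, not just the data it starts from.

```javascript
// `measure` is the pluggable "module": it decides what the loop
// computes; the array is just an ordinary, non-module parameter.
function sumBy(items, measure) {
  let total = 0;
  for (const item of items) total += measure(item);
  return total;
}

// The same loop body now performs two different computations:
console.log(sumBy(["golf", "ball", "class"], w => w.length)); // 13
console.log(sumBy([1, 2, 3], n => n * n));                    // 14
```

The same shape could be written with C function pointers or Pascal function references; closures just make it convenient.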

print "Just Another Perl Adept\n";

> I deliberately tried to steer away from any particular programming language, but if you got the impression that I'm talking about those three, my apologies.

Well, it just seemed to me (and I may have read between the lines too heavily, and I apologize in advance if that is so) that much of what you were talking about were problems with particular OO implementations and the more idiomatic usage of said OO implementations. For instance, in CLOS programming it is not uncommon to have many plain vanilla functions (and macros) along with the classes and generic functions. The same can be said of much Javascript and C++ programming as well. It is only in languages like Java, which do not allow vanilla functions/subroutines to exist, that the "pure OO" approach tends to win out (mostly because there is no other choice).

> The point was not to compare programming languages, but to explain why the means of abstraction and combination in pure object-oriented thinking do not appeal to me in general.

Well, I think it is very hard to discuss abstract OO thought without at some point coming back down to the language level. Every OO system has its own set of rules and therefore its own set of limitations, and some systems contradict or are in direct conflict with one another. A "pure OO" system which is not tied to any language would need to be defined before it could be discussed. As for discussing the merits of the parts of OO like abstraction, polymorphism, encapsulation, modularization, etc., that discussion too will eventually need to come back down to a particular implementation, for all the same reasons. My point is basically that there is no such thing as "pure object-oriented thinking" which is 100% language agnostic.

> My "functions are modules" argument doesn't mean first-class functions and doesn't even presuppose functional programming.

I am a little confused by what you mean when you say "functions as modules"; this makes me think of the Standard ML module system, and functors in particular: a functor being a function which takes a module as an argument and returns a new module. Is this what you are referring to? If not, then I am totally confused by your use of the word "module". Please expand.

> If I am worried about pervasive OO thinking in Perl 6, it's because frequently I don't want to think in terms of objects.

This is exactly my point: you won't have to think in objects if you don't want to. This is a stated design goal of Perl 6.

> The concept of roles resembles Objective-C protocols, though with being able to define not only which functions the implementing class needs to provide but also some common functions that all classes implementing the role "inherit".

No, that is totally wrong actually. Obj-C protocols are pretty much the same as Java interfaces, and therefore are about as use{ful,less}. Roles (optionally) provide an implementation as well as just abstract interfaces. This makes them more akin to mix-ins or multiple inheritance. However, unlike mixins or MI, roles are actually composed into the class (in Perl we do this with symbol table operations to copy from the role package into the class the role is being composed into), which means there is no inheritance relationship between the role and the class. There is also a strict set of rules by which roles are composed into classes, which makes for highly predictable behavior, whereas MI and mixins are much more difficult to predict behavior-wise.

> However, this would again be a much more useful technique to think about if there was no mandatory link to objects and classes!

Well actually, they don't have to be linked to objects and classes at all. Roles are very similar to the OCaml/SML module systems in that they can be used to compose a non-OO module just as easily as they can be used to compose classes. In fact, I have one specific usage of roles which uses pure roles as "function groups" which can be composed together (following the rules of role composition) and are never actually composed into classes/objects.

> I'll install Moose::Role some rainy day, I promise.

You won't like it, because it comes with the entire Moose object system. And that is deep-down OO to the core, complete with metaclasses and all that fun OO over-abstraction you seem to really dislike. In fact, it is a meta-circular object system, so it is actually OO that is implemented in OO. Which I am sure just makes you cringe :)

So, in conclusion, I think you have many good points, but your anti-OO stance seems to me to be somewhat reactionary to the pro-OO zealots. All good programmers know that silver bullets don't exist, and in the end you just need good tools to get the job done. Those tools may be OO-based, they may be full of FP madness, or they may be a crusty CPAN module from the mid-90s like FindBin that sucks horribly but (for the most part) works when you need it to, so you just use it and move on with your life. The moment you exclude any of those tools on some philosophical, moral, or religious basis, you are really just adding more work for yourself.

-stvn
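The "composed by copying, with no inheritance relationship" idea has a rough JavaScript analogy: flat-copying a bag of methods into a class's prototype, much like the symbol-table copy described above. This is only a sketch with invented names (`Comparable`, `Version`), and unlike real role composition, `Object.assign` has no conflict-detection rules -- the last copy silently wins.

```javascript
// A "role": a plain bag of methods. It assumes the consuming
// class provides compareTo(); it contributes the derived methods.
const Comparable = {
  lessThan(other)    { return this.compareTo(other) < 0; },
  greaterThan(other) { return this.compareTo(other) > 0; },
};

class Version {
  constructor(n) { this.n = n; }
  compareTo(other) { return this.n - other.n; }
}

// Compose the role into the class: a flat copy of methods.
// No "isa" relationship between Comparable and Version is created.
Object.assign(Version.prototype, Comparable);

const a = new Version(1), b = new Version(2);
console.log(a.lessThan(b));    // true
console.log(b.greaterThan(a)); // true
```

The class "does" Comparable without any inheritance link appearing in the prototype chain, which is the role-vs-mixin distinction in miniature.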


Regarding Perl 6: giving you access to everything via objects/classes does not mean requiring you to write everything as objects/classes.

I like OO because it provides a simple and effective way to encapsulate data, even provides for encapsulating code, and provides convenient namespaces to avoid name collisions (encapsulating names).[1] Sure, many (perhaps most) authors trying to teach OO certainly make too big a deal of inheritance (and don't teach the pitfalls of overusing it). And I can see your concern about people trying to make too many or the wrong classes while trying to identify the "objects" they are dealing with.

But I don't think "the Larrys" are newbie OO fanboys, so I don't expect them to make stupid OO design mistakes. And I don't see the downside to having a unifying "framework" for providing convenient access to methods and attributes of internal components. And the Larrys certainly don't appear to have drunk the "OO koolaid" of the Java designers, trying to deny coders the ability to design in paradigms other than OO. Quite the opposite.

[1] Yes, I know about the problems with name collisions in the face of inheritance. I don't use that type of inheritance much, and you shouldn't have to either.

- tye

No programming paradigm can aspire to completeness. All four styles (procedural, OO, declarative, and functional) should have their place. Bad feelings towards one of them are perhaps not the best starting point, but OO evangelists often say that OO is not a silver bullet and then proceed as if they had forgotten it (a figure of style comparable to paralipsis). I don't think that good design should model the world; it needs to serve other purposes, including the ones you mention. Design patterns are a good example. Tabari

I don't know why, but I've always felt uncomfortable speaking of OOP as a programming paradigm (it was taught that way to me); rather, it has always seemed to me that OOD is a manner of abstraction that may apply to more than one programming paradigm. The three main programming paradigms would obviously be "imperative", "logical", and "functional". Am I correct in believing that all three paradigms could host OOD practices? If that's the case, then we can say that the manner of abstraction is somewhat orthogonal to the operational semantics of the language:

O'Haskell is an object-oriented extension of Haskell.





CLOS is an all-encompassing object system for Lisp.





LogTalk and OL(P) are examples of object-oriented abstraction libraries for Prolog.



Languages usually aspire to one paradigm (exceptions include languages like OCaml which implement both "imperative" and "functional" semantics). Within languages, programmers often find ways to express other paradigms within the paradigm of the host language: Monads and do-notation to implement "imperative" behaviour in Haskell ("functional").





A logical programming library for Haskell.





FunctionalJ is a library for "functional" programming in Java ("imperative"). Anyone have any thoughts on this? -David
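A tiny illustration of this kind of paradigm-hosting, sketched in JavaScript rather than Java: first-class functions are enough to embed a "functional" style inside a mostly imperative host language.

```javascript
// Function composition: build new functions from old ones
// instead of sequencing statements.
const compose = (f, g) => x => f(g(x));

const double = x => x * 2;
const inc    = x => x + 1;

const doubleThenInc = compose(inc, double);
console.log(doubleThenInc(5));  // 11

// Collection pipelines in the same style:
console.log([1, 2, 3].map(double).filter(x => x > 2)); // [4, 6]
```

Libraries like FunctionalJ exist mostly to paper over the boilerplate this takes in languages where functions are not first-class values.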

The lazy answer is that these are all Turing-complete programming languages, and can thus (skipping a few implicational steps) emulate each other. So yes, you are correct that given any Turing-complete programming language, we can do object-oriented programming in it. Since all implementations of programming languages run on a computer, ultimately what you are doing is functional/imperative/logical/object-oriented programming in machine language.

However, it's a separate thing to ask if the syntax and semantics of a programming language make one paradigm easier or harder than another. I find programming languages such as Scheme much easier to work with exactly due to minimalistic core features and closures: if I want objects, I'll just wrap the methods in a closure. Doing the reverse in, say, Java -- that is, using classes and objects to emulate closures -- entails creating a new class definition for each closure you use, then instantiating objects from them. Both are possible approaches, and as has probably been discussed in the monastery many times (sorry, I can't search right now), closures and classes/objects are just about equivalent.

Now, many "functional" programming languages support objects (since it's really easy to do with closures) and many "object-oriented" programming languages support closures. Arguing which one is better is a source of much heat but usually little light. As for me, I'm prone to pick a programming language that has closures rather than one that only has objects, because I tend to use closures more.

--
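The "wrap the methods in a closure" idea can be sketched in JavaScript (a hypothetical counter, not any particular library):

```javascript
// An "object" made from a closure alone: the local variable
// `count` is the private state, and the returned functions are
// the methods that close over it.
function makeCounter() {
  let count = 0;
  return {
    increment() { count += 1; return count; },
    value()     { return count; },
  };
}

const c = makeCounter();
c.increment();
c.increment();
console.log(c.value());  // 2
// `count` is unreachable except through the two methods --
// encapsulation at least as strict as class-based private fields.
```

Each call to `makeCounter` yields an independent instance, which is exactly the "multiple data namespaces per class" property usually credited to objects.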

print "Just Another Perl Adept\n";

I think you have a point. Logical programming describes what the solution looks like; functional, a set of transformations from initial conditions to the solution; imperative, a sequence of steps from here to there. OO describes... what? How a bunch of things interact in a system that, if you're lucky, gets you where you want? It's a useful way of breaking down problems in the imperative paradigm, but not really a new mode of thought.

> However, I'm slightly worried about the approach taken in Perl 6: everything is an object starting from fundamental types of data such as numeric constants. Sometimes it is useful to think in terms of objects, but often I want a number to be just a number, nothing more.

I personally believe you should not be worried, not even slightly. When you want a number to be just a number, you're free to think of it as such, except for some situation in which you may need to call a method on it. But even then you can consider that as a little bit of funky syntax.

Don't bother to read my rambling reply unless you are an OO skeptic...

You can eschew OO jargon and consider objects to be user-defined data types, methods to be language extensions or function libraries, and inheritance to be a curiosity. OO hype can be annoying and counterproductive. From a user/programmer point of view, it is interesting to notice bugs that occur in applications due to over-fondness for objects. But cataloging this menagerie is unlikely to dampen the current enthusiasm for objects.

Even before the first software object, there was a business in selling integrated circuits (ICs) made from semiconductors. These worked wonderfully, and you could design circuits with them without understanding the IC guts. Software developers became jealous and proposed selling 'software ICs' with no exposed guts. This terminology only appealed to electrical engineers, and to broaden the appeal the name was changed to 'object oriented programming'. Nauseating terminology was introduced, and non-believers were declared incompetent to comment on the subject. New books were written, courses were attended, conferences were established, and money changed hands. Software productivity ground to a halt as developers retooled, and quality suffered for years.

I think the industry has mostly recovered from the OO setback, and now there is possibly some benefit from OO. As Abigail observed, OO enables the creation of supportable spaghetti code. If programmers are going to write spaghetti, supportable spaghetti is better.

Last I checked, the overall semiconductor industry revenue is still larger than the overall software industry revenue. The irony is that semiconductor designers use languages like Verilog, VHDL, and SPICE, which are not particularly OO, to design products that are actually tangible objects. Electrical engineering is fun; perhaps you should check it out! In that domain, objects tend to be physical objects, the abstractions are less annoying, and there is still a decent living to be made.

It should work perfectly the first time!
- toma

... hype can be annoying and counterproductive. ++ The ellipsis above can be replaced by almost any magic bullet, holy grail, one true way, or dogma and still be true, but it is rarely more so than in your original formulation. That said, XML and relational databases come close.

Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error. "Science is about questioning the status quo. Questioning authority." In the absence of evidence, opinion is indistinguishable from prejudice. "Too many [] have been sedated by an oppressive environment of political correctness and risk aversion."

Being a relative newcomer to both XML and relational databases I too find them annoying.

In my case this is because I do not as yet fully understand their mechanics and as a result I am irritated at not being able to do what I would like to do.

However, I doubt that is the reason for your feelings.

I would be interested to know why you feel XML and relational databases to be "annoying and counterproductive".


I suspect (and please don't take offense if I'm wrong -- none is intended) that you are rebelling against formal Ivory Tower-ist training in Java. If that's the case, then you'll find many people that aren't mad about either Java or ivory-towerist OO ;-)

Sensible OO-all-the-way-down ("SOOATWD") languages that are built on objects allow users to perform (at least) procedural programming without worrying about classes and objects. If you want, say, to operate on numbers in a loop in Python or Ruby without needing to think about "3" being an object under the covers, you can (Ruby uses methods on its numbers, but code can still be fundamentally procedural in nature). Check out the Python and Ruby versions of Chapters 2 and 3 of the Perl Cookbook v1, for instance, to see how non-threatening numbers as objects can be.

In SOOATWD languages, your modules can be simply collections of procedures, or collections of higher-order functions, or, yes, classes. You can pass both functions and objects to and from other functions and objects. You can use whichever paradigm you need at any moment. Seriously. You don't have to think of "message passing". You can just call methods or member functions with parameters instead. Or whatever other terminology or world view makes you and others happiest.

Part of the problem of OO equating to Inviolable Public Interface is Java's need for an object's variables to have accessor methods -- obj.getFoo(); obj.setFoo(x) -- since you may want to change the underlying implementation, and if you don't use accessor methods then code which references those variables directly is screwed. This is far less of a problem with idiomatic code in SOOATWD languages which have properties, where the underlying implementation can be changed and the changes remain invisible to any code which references those variables. Perl 6 won't be Java or Smalltalk ;-)
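The "non-threatening numbers as objects" point shows up in JavaScript too: plain procedural loops never force you to notice that numbers respond to methods, but the methods are there when wanted. A small sketch:

```javascript
// Plain procedural code; the object-ness of numbers never intrudes.
let total = 0;
for (let i = 1; i <= 5; i++) {
  total += i;
}
console.log(total);                // 15

// ...until you actually want a method, and then it's just there:
console.log((255).toString(16));   // "ff"  -- hex formatting
console.log((3.14159).toFixed(2)); // "3.14"
```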

He could also be rebelling against the (IMHO) pervasive silliness that infects parts of CPAN: glance through these for some prime examples. When you find yourself doing something like return DoStuffer->new->doStuff() , you're almost certainly doing it wrong, or at least much more painfully than necessary.

A few corrections: I don't think the right phrase is "anthropomorphic terminology"; I think what you're really complaining about is the over-use of "metaphors" as the only way of understanding an abstraction. There are people who still think that "identify the nouns" is a good principle in OOP design, but that's kind of silly. If you look at the typical "objects" we really use, they tend to be totally made-up entities like "database handles" and "statement handles" and so on.

Some of the things you're complaining about are already understood to be problems within the OOP crowd: it's understood that you should avoid over-reliance on inheritance. Slogans recommending "aggregation over inheritance" are pretty common (as is the point that "inheritance breaks encapsulation").

A few comments: Myself, I tentatively suggest that inheritance should usually be reserved for fixing problems in the original design. You should code to allow sub-classing, but find other ways to share common code. An "object class" should just be thought of as a bunch of routines that need to share some data. There are a lot of ways of doing that (e.g. closures), and I largely just use OOP because I think it's familiar to more people, i.e. I use OOP for social reasons more so than technical reasons.

However, there is a technical advantage of OOP: the ability to generate multiple "objects" (i.e. data namespaces), all of the same "class" (i.e. using the same method namespace [1]), within a single perl process. (But I wouldn't be surprised to learn that the "functional" programming world has its own way of doing this.) I don't think that OOP is tremendously useful for polymorphism, by the way; I think plug-in architectures work better (ala DBI/DBD).

[1] Looking at this again, I see I'm oversimplifying by ignoring class data... I almost never use class data, myself.

> I tentatively suggest that inheritance should usually be reserved for fixing problems in the original design.

Well, the first project in my current job was adding features to a semi-complex web application -- written in PHP. The previous developer, no doubt a good programmer otherwise, apparently felt too energetic, i.e. not lazy enough, when he wrote the original source files, because there is considerable overlap in functionality. Quite often this is because he used copy-paste to implement features on pages that lacked them. Needless to say, when I was asked to change the way some summary fields in reports are computed, I first had to manually read through all the files and discover the five or six places where the same copy-pasted computation took place.

Now, the job is nice, and I've actually had fun refactoring this. My very first inclination was to abstract the common code into functions, then pass these functions around, akin to higher-order programming. However, although PHP supports lambda expressions through eval, this quickly turned out to be infeasible. Instead, I implemented a couple of shallow class hierarchies and abstracted most common functionality into (abstract) base classes. It's not beautiful or elegant, but it is much cleaner than the original -- plus adding new features is considerably easier now.

Arguably this is not refactoring the design that much, just implementing the design in a bit better way. However, it's a good example where knowing object-oriented programming (inheritance too!) saved the day.

--

print "Just Another Perl Adept\n";

This does sound like an example of using inheritance to fix someone else's design, but I was actually thinking about using it in the other direction: if there's some code that doesn't quite do what you need, it's sometimes very convenient to create a mutant variant by subclassing it... but if the original author was inheritance-happy, you find yourself dealing with unwieldy chains of subclasses of subclasses, where in order to understand what the class at the bottom does, you need to learn about all the parents all the way up the chain. In your example, it sounds like I probably would've used "aggregation", i.e. moved the common operations to methods in a new class, where the original code creates a "calculator handler object" (or somesuch) to access them. The advantage is that the new code is much more independent of the existing code (it has real "encapsulation").
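A sketch of that aggregation idea, with invented names (`SummaryCalculator` standing in for the "calculator handler object", and JavaScript standing in for the PHP of the original story):

```javascript
// The duplicated summary computation lives in one place; each
// report *has* a calculator (aggregation) rather than inheriting
// from one or copy-pasting the code.
class SummaryCalculator {
  total(rows)   { return rows.reduce((sum, r) => sum + r.amount, 0); }
  average(rows) { return rows.length ? this.total(rows) / rows.length : 0; }
}

class SalesReport {
  constructor(rows) {
    this.rows = rows;
    this.calc = new SummaryCalculator();  // a delegate, not a parent
  }
  summary() {
    return {
      total:   this.calc.total(this.rows),
      average: this.calc.average(this.rows),
    };
  }
}

const report = new SalesReport([{ amount: 10 }, { amount: 30 }]);
console.log(report.summary());  // { total: 40, average: 20 }
```

Changing how summaries are computed now means editing one class, and `SalesReport` stays independent of any inheritance chain.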

"However, this is a sad price to pay, because anthropomorphic terminology leads to operational thinking. By operational thinking, I mean trying to understand a program in terms of how it is executed on a computer. Usually this involves mentally keeping track of variables and their values, following one statement after another checking what it does to data, and doing case analysis with if-then-else blocks. You are knee-deep in problems once you start trying to understand loops operationally (does it terminate? will the loop counter always be inside the bounds?)."

It is sometimes argued that the inability of people to keep track of variables is one reason that methods should be kept short. Some have argued that the number of local variables should be reduced, others that they should be eliminated altogether. However, people have difficulty thinking mathematically, which is possibly why it took so long (in terms of the history of computing) for purely functional languages to arise.

Both object-oriented and functional programming may be regarded as approaches to making code understandable: OO by intuitive understanding about manipulating objects, functional by mathematical proof. So when you suggest we abandon operational thinking, what do we put in its place? Traditionally, operational thinking is what it has meant to understand how a program works. This goes to the heart of what good software design is about, because if we are to design code to be read, more than to be executed, then we must design it to be understood. So if that understanding is reached other than by operational thinking, it will colour how we write.

I don't have an account here, and I seem to be showing up in preview as vroom, whoever that is.

I think I can say that I'm an "OO proponent", so I'll try to address some of the issues raised here. W.r.t. "sloppy thinking", the author wrote: "Perhaps there is a way to reason about the correctness of object-oriented programs in a non-operational way, but I have yet to see it." Have you read about Design by Contract? Bertrand Meyer's OOSC book is a classic in that area. It is a non-operational way of thinking about OO.



Regarding "Conceptual overhead", I think it comes down to a question of appropriate design. I grant that OO is mainstream, and in the mainstream there is a tendency towards overengineering (usually born of insecurity on the developers' part), but a good team of experienced and knowledgeable programmers can help avoid this problem.



Lastly, as for OO being advertised as "The only way", I do agree with you. I hope that the future will be strongly multi-paradigm and my current favorite language reflects this: www.scala-lang.org. Check it out.





PS: I don't have an account here, but I'm this guy.