Object-Oriented Debate Part 1: Many Kinds of Coding

By Shamus Posted Tuesday Dec 6, 2016

Filed under: Programming

A while ago I came across this YouTube video, which broadly denounces the programming paradigm known as “Object-Oriented Programming” (OOP).


Link (YouTube)

If you’re not a programmer, you might not get a lot out of it. Author Brian Will is deliberately talking to other coders and so the whole thing is fairly dense with jargon and theory. That’s fine. I’m going to translate bits of it for the purposes of our discussion here. In fact, this series is aimed at non-coders and casual coders who are curious what all the fuss is about and what people are talking about when they say “Object Oriented Programming”.

Depending on who you ask, this video is either obvious, slightly controversial, or deeply heretical. The author certainly seems to believe they are about to say something likely to induce backlash. And indeed, with just over one-third of the people giving the video a thumbs down, it does seem to be an unpopular opinion. After watching the introduction I was prepared for the screed of an iconoclastic madman. But by the end I didn’t find anything particularly objectionable. In fact, his final guidelines basically describe the coding style I’ve developed over years of working in both new and old coding paradigms.

I might quibble over a few points, but I think Will is pushing back against a bit of orthodoxy that doesn’t get challenged nearly enough. This debate has popped up now and again over the years and it usually ends with a bunch of people talking past each other and arguing in circles. This is partly because it’s tough to challenge entrenched ideas, but mostly because programming is not one job, but dozens of different jobs.

One Paradigm to Rule Them All

Seriously? You're using the Whitesmiths bracket style? This is exactly why I'm trying to cover the world in a second darkness.

People often make blanket statements like, “Code should be X” or “NEVER do Y because it’s too slow” or “Stop doing Z, because it’s insecure”. These statements will strike some people as preposterous and others as completely obvious, depending on what domain they’re working in. One of the major problems with discussing programming is that far too often we talk about it as if it were a single discipline. Imagine if people who repaired stuff were all simply referred to as “mechanics” without ever bothering to specify if they work in automotive, aerospace, HVAC, electrical systems, security systems, robotics, or civil engineering. It would be chaos.

People working on financial software have security and precision needs that would seem absurd to someone making a videogame. A videogame programmer has performance concerns that would seem unreasonable to someone working on data collection and analysis. Someone working in social media is probably really worried about distributed systems and scalability in a way that would seem alien to someone working on embedded systems.[1]

Which is fine. The world of programming is getting bigger all the time. As computing power gets cheaper we keep finding new and inventive uses for it. And yes, there are different languages for different jobs in an attempt to address these differing needs. But often people discuss programming in this abstract theoretical sense that disregards all of these differences and assumes the problems they experience are universal to all programmers.

Oh sure. FOUR dogs but not ONE scorpion? People are always leaving the scorpion owners out.

Look at any debate over coding theory and you’ll see a couple of engineers slugging it out without either of them ever specifying what they do for a living. If pet owners discussed their animals the way programmers discuss their craft, the debate would look like this:

“I suggest grooming your animal with a typical hairbrush at least once a week.” (Woman who owns a dog.)

“I am so sick of seeing the same useless suggestions given again and again. A hairbrush is designed FOR PEOPLE. Repeat after me: DO NOT USE HAIRBRUSHES FOR GROOMING PETS. You’ll want to use something long that will enable you to reach the difficult high areas on the animal’s back. They sell long-handled brushes at the hardware store. Use one of those, and scrub the animal with warm soapy water before rinsing it off with a hose. Alternatively, sell your pet to someone who knows how to care for it properly!” (Guy who owns an elephant.)

“Nobody here is thinking about safety. Sure, go ahead and blast your pet with a hose and see how long it takes for them to bite you. SAFETY is the most important part of pet care, not cleanliness! All that scrubbing won’t do them any good if you end up dead in the process.” (Guy who owns a cobra.)

“It’s always amusing to see just how many people overthink pet care. I suppose it’s easier to read articles online than to learn what your pet needs by getting to know them. Do you know that your pet is perfectly capable of grooming THEMSELVES? Amazing I know, but they managed just fine for millions of years before humans came along. Leave your pet alone and just enjoy the cuddles.” (Lady who owns a cat.)

“Did you seriously suggest “cuddles”?!!?! Your pet is not a toy! Leave it in the tank where it belongs and enjoy looking at it. It will live longer that way. Idiot.” (Dude who owns fish.)

Kids These Days

Is this going to be on the test?

Contributing to the problem is the fact that the people who teach programming are for the most part not the people who actually do programming for a living. The people teaching kids to solve problems aren’t out in the real world solving real problems. Programming is typically taught by mathematicians and academics, not by seasoned veterans of the profession. We end up teaching kids to create new code for a world of abstract problems.

What you’re taught in school:
“Write a program from scratch to find the first 1,000 primes larger than three digits and print them out using English words instead of numerals.”

What you’re asked to do once you get a job:
“Our accounting reports are generated by 100,000 lines of spaghetti code. It was written in a mix of C and C++ by a long line of transient programmers over the course of three decades. Get in there and figure out why the even-year reports are always off by one day. Fix the problem as fast as possible, while changing as little as possible.”

While I’ve never been to university myself, I’ve certainly talked to quite a few university-educated programmers. And I’ve never once heard of students being asked to work on an existing codebase. I’m not saying it never happens, but anecdotally it’s not common, even though that’s what you’ll be doing in the vast majority of jobs.

This is not to say that academics are numbskulls or that we don’t have anything to learn from them. You could boot Professor Smugbones out of the university to go write “real” code for a few years. But during that time, he wouldn’t be teaching anybody. And it would take some time to distill all of that practical experience into some sort of curriculum. And by the time it was being taught in class, parts of it would be irrelevant because while half of this vocation is mired in the tar pits of 1975, the other half is experiencing radical and transformative change every couple of years.

The book Teach Yourself C++ in 21 Days is a pretty good example of this problem in action. It’s more interested in teaching you Object-Oriented Design than programming in C++. For example, it doesn’t really dig deep into dealing with header files, using external libraries, handling C-style strings, or memory management, even though those are complex topics that greatly impact your work and are required to make anything genuinely useful. Wrangling libraries, header files, and development environments will probably be a huge part of using C++ for most people. Instead the book spends about a third of the page space explaining how object-oriented programming is supposed to work.

I'm sick of car analogies, so this week we're doing animals.

The book uses funny example programs like: Hey look, you can make a base class called “mammal”. And from that we can derive the class “pet”. And then we can derive a class for “cat”. And check it out, you can then do Cat.Meow () and Cat.Sleep ()! Cool, right?
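
In code, the book’s example boils down to something like this (a minimal sketch; the names follow the book’s toy example):

class Mammal {
public:
    void Sleep () { /* presumably all mammals sleep identically */ }
};

class Pet : public Mammal {
public:
    void Play () { /* generic pet frolicking */ }
};

class Cat : public Pet {
public:
    void Meow () { /* cat-specific behavior */ }
};

int main () {
    Cat cat;
    cat.Meow ();
    cat.Sleep ();  // inherited all the way down from Mammal
}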

Maybe this is helpful to programming newbies, but when I read the book 15 years ago I spent the whole time scratching my head and wondering how any of this could be used to solve Real Problems. I’ve never seen (or heard of) a problem that allowed for a clean, intuitive structure as clear as Mammal»Pet»Cat. Objects in a program are very often only sort of similar. Two things will share a lot of superficial similarities but in practical terms they end up being totally different in behavior. They’re similar enough that Dogs and Cats do similar things – like Sleep () – but they do so in different ways, at different times, and in response to different situations. A lot of the same concepts appear again and again, but they don’t share any code and it leaves you wondering why we spent all of that page space on Pets and Mammals, trying to connect two dissimilar systems because of cosmetic similarities.

(Note that I’m just as prone to domain bias as anyone else. Maybe there really is some discipline out there where things like Mammal»Pet»Cat is amazingly useful. But I haven’t seen it yet, and so a lot of the features of object-oriented programming come off like someone trying so hard to be clever that they forget they’re supposed to be solving problems, not building abstract frameworks for aesthetic reasons.)

I’m not saying that Teach Yourself C++ in 21 Days is a bad book or that academics don’t have anything useful to add to computer science. The only reason the profession exists at all is because some professors invented it on the blackboard decades before guys like me showed up to pick through their discoveries and build our careers off of them.

Academics tend to come up with a lot of ideas, both good and bad. Academia is where ideas originate. It’s up to the software engineers to sort the good from the bad and figure out which ideas are worth propagating and which aren’t. That sort of exploration takes time.

To put it another way: With each new graduating class there’s a steady flow of knowledge coming from the university to the private sector. But we need to make sure some information flows the other way. Videos like the one Brian Will made are part of that process of sorting the good from the bad and sending the information upstream for the next generation.

Next time we’ll talk about what OOP is and why we need it. (Or don’t, depending on who you ask.)

 

Footnotes:

[1] Roughly, an embedded system is a fixed set of programs used to drive a device. The operating system of a camera, television remote, CD player, or weather satellite are all examples of embedded systems.




174 thoughts on “Object-Oriented Debate Part 1: Many Kinds of Coding”

  1. Mephane says:

    a clean, intuitive structure as clear as Mammal»Pet»Cat

    Ha, this is actually such a case as you say, where things only seem superficially similar. “Mammal” and “pet” are orthogonal concepts; a pet can be a mammal. Or a fish. Or a reptile. Or a bird. Or a spider.

    I’ll risk it and say it out loud: if you built this structure in C++, it might actually warrant multiple inheritance: class Cat: public Mammal, public Pet { /*…*/ };
    :)

    1. Matt Downie says:

      The problem is, if ‘mammal’ and ‘pet’ are both derived from ‘animal’, inheriting from both doesn’t work in C++. (Unless they’ve fixed that in a more recent version?)

      There’s also the component model, where you stick a bunch of ‘MammalFeatures’ in one class, and ‘PetFeatures’ in another, and then the ‘cat’ class contains a MammalFeatures variable and a PetFeatures variable rather than inheriting things.
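
      For illustration, a minimal C++ sketch of that component approach (hypothetical names):

      class MammalFeatures {
      public:
          void Sleep () { /* shared mammal behavior */ }
      };

      class PetFeatures {
      public:
          void Play () { /* shared pet behavior */ }
      };

      class Cat {
      public:
          void Meow () { /* cat-specific behavior */ }
          MammalFeatures mammal;  // Cat HAS mammal features...
          PetFeatures pet;        // ...rather than IS-A Mammal and IS-A Pet
      };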

      1. Mephane says:

        Since you can already buy pet robot dogs, I disagree that a pet is necessarily an animal. :D

        1. Tizzy says:

          This is one of the problems with using OOP in the wild: spend time figuring out your inheritances, and 5 years later something comes along that makes your design obsolete.

          Extra credit if it breaks it only in rare, exotic circumstances, so that fixing it is not a priority.

        2. Rodyle says:

          I’d probably make pet an interface, to be honest.

      2. Redingold says:

        You can get pet rocks, so Pet wouldn’t necessarily inherit from Animal.

      3. D-z says:

        It works fine since C++03, but you have to choose between having a Cat that is an Animal twice, or using virtual inheritance (which has its own technical issues, but fits the model tightly).
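
        A minimal sketch of the virtual-inheritance version (reusing the hypothetical Animal/Mammal/Pet names):

        class Animal { public: int age; };

        class Mammal : public virtual Animal { };  // "virtual" makes both paths
        class Pet    : public virtual Animal { };  // share a single Animal subobject

        class Cat : public Mammal, public Pet { };

        int main () {
            Cat cat;
            cat.age = 3;  // unambiguous; without virtual inheritance this line wouldn't compile
        }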

    2. Sannom says:

      In fact, I’ve almost never used inheritance in Java, despite working with it most of the time. Implementing interfaces and inheriting abstract classes on the other hand…

      1. Phill says:

        As an aside, I hate the way java forces everything to be an object. Everything is classes!

        But beyond that, in my current work (embedded software with loads of threads and processes communicating with each other) the great majority of inheritance use is in defining interfaces and abstract class factories.

        Whereas in my previous job in games, the more stereotyped use of animal->mammal->cat inheritance was pretty common (weapon->projectile weapon->bazooka). The two jobs – to back up Shamus’s point – necessitated using very different sets of coding tools because the kinds of problems being solved were very different in each case.

        Actually I think the real benefit of OOP isn’t in inheritance. It is in data encapsulation and code reuse, as a way of isolating bits of code from each other and reducing the total of unwanted side effects (see also: functional programming).

        Modern coding style is increasingly moving away from inheritance too. Obviously it is still used where it is useful, but there seems to be a growing consensus (AFAICT) that composition is better than multiple inheritance or complex inheritance hierarchies, although that is going to depend a great deal on the sort of problems you are being asked to solve, and the kinds of classes you need to use. But once you’ve come across classes with no data members, just an aggregation of unrelated functions, you might also begin to wonder whether OOP concepts are being misapplied in the name of doctrinal purity.

        1. Ingvar says:

          In Java, everything is a class, except the things that are not (there’s still, I believe, a few primitive types lurking about).
          In Smalltalk, I do believe actually everything is a class. Which has its own set of problems.

          Clearly we should all just write Simula, which may actually be the world’s oldest OO language (1965 for the first release, 1967 for the second; to Smalltalk’s relatively modern 1972, or 1980 for the first widespread).

          1. Daimbert says:

            Yeah, my problem with Java is less that everything is an object and more that NOT everything is an object. If everything really was an object, I could do standard object operations on everything without worrying about what it is, but sometimes I won’t be able to (with primitives, for the most part).

            1. paercebal says:

              In C#, everything is an object.

              Time to switch to a better designed language?
              :-)

              Trolling apart, OOP is overrated. Not because it is bad. But because people think it solves all problems, and they ignore (sometimes willingly) other paradigms, like generic programming.

              In C++ workplaces where Java did its “magic”, you have tons of overwhelming hierarchies with interfaces and what not, polluting the C++ codebase. Removing that stuff to put back generic programming instead to solve both design problems and performance problems is a real pain. You have to actually *train* Java programmers working in C++ to NOT put everything inside a class, and use simple functions, instead (and even then, they usually try to put them… in classes as static functions… SRZLY)

            2. Zak McKracken says:

              Come to Python, where truly everything is an object :)

              and that means nobody can possibly need any other language, ever. If you think differently, I don’t think you’re stupid, but you probably haven’t been told often enough: Come to Python, it’s perfect! :)

        2. Echo Tango says:

          IMO, the push for composition seems to be coming from the desire to have strict control over what version of a method / property you’re using. i.e. When I call my_pet_wuffles.sleep(), does it get the sleep from its parent class Cat, its parent class AllergenFreePet, or some other ancestor IndoorPet that is somewhere in the ancestry tree? The Python way* of handling this is to not care about where it comes from explicitly, have all your classes do sensible things, and choose your ancestors based on your preference on what behaviours you want.

          * As far as I can tell, from reading blog posts on programming, Python, etc.

          1. Bryan says:

            Hmm, I can’t let this go…

            When you’re working with a lot of other programmers on Python code, this sort of “have all your classes do sensible things” falls apart really fast. Because it turns out that different people have different ideas of what “sensible” is, so you end up with a huge spaghetti mess of class inheritance, mixin classes used to pull in just a single function, all kinds of uses of super to try to make things work with diamond inheritance without actually being possible (it’s not possible to both clear arguments out of **kwargs and leave them in; all need to be removed by the time the MRO gets to object.__init__, but no specific __init__ may remove a kwarg because it doesn’t know if some other __init__ somewhere in the MRO requires it…), etc. Ugh.

            Although I will give it (as the linked page says) that super is helpful for __getattribute__. Just not any method with a non-fixed argument list.

            Backing out a little from super, though — for stuff written by a single person, python is pretty good. For anything where multiple people have to work together, it’s quite hard to get right. And it’s *impossible* to try to track down which function is actually being invoked at a call site, because too much of it is dynamic. Which is really too bad. :-/

            I guess that’s an argument in favor of strong static typing. Or at least strong typing (inferred typing I don’t know a ton about except as implemented in Go). Normally I don’t like static typing because it involves a lot more keystrokes to define the types and tell the compiler what type each thing is (inferred typing helps a little here, but only here), but when I’m looking at or modifying other people’s code, I find static typing *far* superior to dynamic. Because I can tell what function is definitely being called.

            (I have the same problem with virtual functions and class hierarchies in C++ though. So in some cases it’s just really hard. And Go interfaces don’t make it easy there, either.)

            1. Echo Tango says:

              Shouldn’t the only things using a kwarg be the classes who need it, or their subclasses? Like, if I directly subclass something that deals with `max_permitted_length`, it should be because I’m overriding the behaviour for how to handle that kwarg, and therefore the MRO shouldn’t ever get to that ancestor class who no longer has that kwarg available; If I’m multiple-inheriting that class, then it should be because I want to leave the handling of that kwarg to somebody who is not me, and the kwarg remains available. Right?

          2. Zak McKracken says:

            In Python, anything that is defined in a class overrides anything of the same name from the parent class, so your pet wuffles would inherit from the lowest parent which has sleep() defined.

            The thing with not caring and doing something sensible is indeed sometimes a problem, but what it actually means is that everything is of course strictly defined because it’s a computer after all. Just Python tries to make every weird combination of things have some relevant function (adding strings concatenates them, for example), and in some of the weirder cases, that can lead to confusion (array slicing vs. fancy indexing, and making sure when operations on arrays work element-wise or on whole arrays), although as soon as something is relevant enough that you come across it a few times, it’s usually easy to either remember or work out very quickly which way round it is. The cost is sometimes having to test some lines a bit more. The reward is being able to do quite fancy stuff in very few lines of code. Usually worth it to me.

    3. Tektotherriggen says:

      I would have imagined that game programming has some of the clearest class inheritance. Grenade Mook inherits from Soldier Mook which inherits from Enemy which inherits from NPC which inherits from Physics Object which inherits from Generic Game Object.

      But I’ve never done OOP since Uni, so maybe I’m wrong…

      1. Writiosity says:

        Unrealscript is a particularly fine example of this, so much easier being able to simply extend an object and make a few tweaks (or even massive ones) in a child class.

      2. Shamus says:

        This is fine if you want to hard-code all of your Mook types, but if they’re defined in data files then you can’t do things that way. Instead, what you end up with is a basic “Mook” class, and certain mooks will spawn with different variables set to make them behave like grenade mooks or soldier mooks.

        1. Phill says:

          It varies. In the bazooka example I had above, the weapons were all data driven, but it still made sense to have a hierarchy of code reuse because e.g. all the weapons that fired gravity-affected projectiles had several features in common that it made sense to put into inherited classes.

          Since it was data driven it was all created by assorted factory methods (which meant the weapon data had to have some kind of awareness of the class structure). But the designers could play around with weapon behaviour (and even add new ones to some extent) without having to change any code.

          In theory.

          That was something of a specialised case because of the nature of the weapons in that game though. It is far from universally applicable.

          1. Blake says:

            Even in that case the inheritance doesn’t really gain you anything besides complexity.

            If the mooks are defined in data files, then the bazooka mook and the grenade mook will just call different attack functions that may or may not share projectile code.

            If each type of mook has its own attack function, then when bazooka mook sometimes shoots in a weird arc, it’s easier to look at its own attack function and fix it without having to figure out what parts are filled in by a base class and worrying about breaking all the other derived classes when you fix the issue.

            You CAN certainly set up a OO class hierarchy for this situation, but it doesn’t actually help you in any way.

            1. Rodyle says:

              It does though. Although you may need to write the code for all the individual soldier types, you can then use the inheritance to treat them all the same. You can define an (abstract) base class, which defines a few methods such as Move, Aim, Attack and stuff like that. This allows you to call all enemies through the same interface. This allows you to make your code much cleaner.
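
              For illustration, a minimal sketch of that kind of abstract base class (hypothetical names):

              #include <vector>

              class Mook {
              public:
                  virtual ~Mook () {}
                  virtual void Move () = 0;    // every mook type supplies its own version
                  virtual void Attack () = 0;
              };

              // Calling code treats every enemy through the same interface:
              void UpdateMooks (std::vector<Mook*>& mooks) {
                  for (Mook* m : mooks) {
                      m->Move ();
                      m->Attack ();
                  }
              }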

              1. guy says:

                Also, if you make subclasses in multiple layers, it lets you easily and extensibly interact with categories. For instance, if you want something to buff grenadiers and bazooka guys, the buff function could take an instance of ExplosiveMook, and then later add MortarMook just by making it extend ExplosiveMook.

                1. Blake says:

                  With a flat class hierarchy you can already do this just by making your buff function take a Mook*?
                  Unless I’m missing something and you’re talking about using RTTI or something to check whether something is an ExplosiveMook before passing it to Buff in which case you could just have a bitfield or something on your Mook specifying which attributes work for it and then assert in your Buff function that you’re only calling it on valid Mooks.

              2. Blake says:

                “You can define an (abstract) base class, which defines a few methods such as Move, Aim, Attack and stuff like that. ”

                The Mook class already gives you that though; what those functions call internally can be based off data.
                No need for vtable lookups then either.

                1. Rodyle says:

                  The difference in readability between a huge switch for different kinds of mooks vs having a hierarchy of classes is pretty big though.

                  1. Blake says:

                    Class hierarchies make reading code much harder. You have to jump through lots of definitions to be able to understand your data structure.

                    And I wasn’t really thinking about a great big switch in the middle of somethings update, more a generic move function that works on a small data structure (containing basic things like current position, velocity, acceleration, turn rate) and then running that one function across all your objects at once.

                    If the object has a pointer to this structure rather than containing one in itself, then you can also pool these objects up so that your MovementUpdate function can run across them all, then you maximise cache coherency making it fast.
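
                    Roughly like this, as a sketch of that pooled layout (made-up names):

                    #include <vector>

                    struct Movement {       // small, contiguous data: position and velocity only
                        float x, y;
                        float vx, vy;
                    };

                    // One generic function runs over the whole pool at once;
                    // contiguous memory keeps the cache happy.
                    void MovementUpdate (std::vector<Movement>& pool, float dt) {
                        for (Movement& m : pool) {
                            m.x += m.vx * dt;
                            m.y += m.vy * dt;
                        }
                    }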

                    This is kind of the setup we’ve used on all the games we’ve shipped in the past 7ish years, break everything into generic tasks, make your entities add entries to all the pools relevant to them, then while rendering kick off a heap of asynchronous tasks to keep all the cores busy calculating lots of data which the AI can use next frame.

                    I guess the main difference between our perspectives is that you’re thinking about what makes each object be accurately represented and to be able to step through that one case, whereas I want to know exactly where my data is located and be able to read how every object gets updated all at once.

        2. Matt Downie says:

          I was reading about Spelunky and by the look of it, pretty much everything in the game, from monsters to rocks, is just a universal ‘object’ with differing properties. And because of this, things in the game automatically interact with one another in satisfying ways, creating emergent gameplay.

          https://www.rockpapershotgun.com/2016/03/04/making-of-spelunky/
          https://www.rockpapershotgun.com/2016/03/04/making-of-spelunky/2/

        3. Tektotherriggen says:

          Of course, yes! I should have remembered reading that Good Robot did most of that in data files. I suppose you could still write the enemy description using inheritance (Grenade Mook has all the same parameter values as Soldier Mook, except for x, y and z), but you trade a bit of copy-and-paste for a lot of extra complexity (and without gaining other benefits of OOP).

        4. Daimbert says:

          You can if the instance values are what changes from the data file. And as I’ve discovered working on the new product I’ve been moved to, you can in fact even use custom attributes if you create them as a map inside the class itself. As long as not every property or attribute is custom, you can still define a nice clear hierarchy with some “personalized” attributes hanging around in a map inside that class.

          Plus, in that way you can create specific behaviours for each type of mook, and divide them up that way. A map of attributes lets you store whatever attributes you want, and then override methods like “handleHit” to handle the differing general actions. Or things like combat actions or AI.

        5. Wide And Nerdy ♤ says:

          Would generics come in handy here? Or am I completely misunderstanding a concept I learned 8 years ago, never used, and only half remember?

          1. Shamus says:

            My understanding of generics is fairly new (only a few years old) and so I’m not sure if it matches up with what you’re referencing. In my understanding, generics is made to solve this problem:

            int AbsoluteValue (int val);
            long AbsoluteValue (long val);
            float AbsoluteValue (float val);
            double AbsoluteValue (double val);

            Where you have to write basically the exact same code over and over for all of the different types. Using generics allows you (in some cases) to write it once and have the compiler automagically create whichever version is needed.
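
            In C++ the generic version is a template; a minimal sketch:

            template <typename T>
            T AbsoluteValue (T val) {
                // The compiler generates the int, long, float, and double
                // versions on demand, from this one definition.
                return val < 0 ? -val : val;
            }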

            To answer the question, no. This is something different.

            In Good Robot, all the robots come from a single class. That class has all the particular properties of this robot: current hitpoints, position, etc. Each robot also has a pointer to a data structure that describes this particular KIND of robot. This is a bunch of immutable stuff like what AI to use, its base HP, what weapons it’s equipped with, how fast it can move, what graphics it uses, what sounds it makes, and so on. You can think of this as the “species” of robot. These are pulled from a simple text file when the game is launched.
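
            Sketched out (hypothetical field names, not the actual Good Robot source):

            struct RobotSpecies {            // immutable "kind of robot", parsed from text files
                int   base_hp;
                float move_speed;
                // ... AI type, weapons, graphics, sounds ...
            };

            class Robot {                    // one per live robot in the world
            public:
                int   hitpoints;             // mutable, per-instance state
                float x, y;
                const RobotSpecies* species; // shared pointer to this robot's "kind"
            };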

            Maybe an OOP scholar would have suggestions on how you could make this into a system of derived classes, but to me this seemed like the most straightforward way of doing things.

            1. Wide And Nerdy ♤ says:

              Seems like your solution is more modder friendly at least.

            2. Cybron says:

              Your understanding of generics looks correct. It’s very useful for small utility functions. It can also be implemented as part of classes (like List).

            3. tmtvl says:

              In Java (work) generics are basically a way to allow type enforcement. So you can make a class… Store<T extends Sellable> which means you can choose what type of sellable thing a store should sell (sellable things in general, or stuff like Store<Grocery>, Store<Weapon>, Store<FamilyMember>,…).

              In Perl (hobby), types don’t really exist and everything is inferred from context, which can lead to problems when you try to do maths with stuff that turns out to be text. And in Perl6 they not only fixed that, they made it awesome (which Perl6 does with everything it does).

              1. WJS says:

                I work mostly in php and javascript, and yeah, it’s fun when you realise that you’ve accidentally passed a string to a function that was expecting (but you forgot to check for) a number.

            4. Kyte says:

              The technical name for what you do is composition. “Composition over inheritance” is actually an OOP best practice.
              Unity, for example, relies heavily on composition, where your GameObject is composed of a number of components such as Transform, Mesh, MeshRenderer, Sprite, AudioSource, etc, that each provide a specific set of functionality and can be added and removed in real time.

              Related to the above are mixins, which split the difference by having “half-classes” that only concern themselves with a specific functionality that other classes can opt into through inheritance.

            5. Draklaw says:

              Actually, you can do something similar to inheritance with templates in C++ (but probably not with generics in other languages). I won’t enter into the details here, but you can take a look at Eigen for a good example of this. Eigen is a header-only library that relies heavily on template meta-programming to perform efficient linear algebra with a quite natural syntax.

              So, for instance, the Matrix class inherits PlainObjectBase which inherits MatrixBase and so on up to EigenBase. There are no virtual functions here. Because all classes have the leaf class “Matrix” as a template parameter, they can cast themselves to Matrix and call its methods (or the methods of any other class in the hierarchy). The advantage is that function calls are resolved statically, so the compiler can optimize them, inline code and so on.
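
              This technique is usually called the Curiously Recurring Template Pattern (CRTP); a minimal sketch:

              template <typename Derived>
              class MatrixBase {
              public:
                  void Scale (double factor) {
                      // Cast down to the leaf class statically: no virtual call,
                      // so the compiler is free to inline ScaleImpl.
                      static_cast<Derived&> (*this).ScaleImpl (factor);
                  }
              };

              class Matrix : public MatrixBase<Matrix> {
              public:
                  void ScaleImpl (double factor) { /* multiply the coefficients */ }
              };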

              This is useful for performance-focused software. Otherwise, it is probably not a good idea. Overall, like OOP, it mostly forces you to follow a very strict inheritance structure, which can be disastrous if it turns out to not exactly match your needs.

        6. Arkady says:

          There’s a book called “Game Programming Patterns” which describes a pattern of inheritance, done at data level.

          So instead of having a data file like this:

          Goblin_Mook
          Health: 10
          Damage: 2
          Range: 1

          Goblin_Brute:
          Health: 10
          Damage: 4
          Range: 1

          Goblin_Archer:
          Health: 10
          Damage: 2
          Range: 4

          … etc…

          You can inherit at the data level:

          Goblin_Mook
          Health: 10
          Damage: 2
          Range: 1

          Goblin_Brute:
          Base: Goblin_Mook
          Damage: 4 // Alternatively: Damage: x2

          Goblin_Archer:
          Base: Goblin_Mook
          Range: 4 // Alternatively: Range: x4

          It has some advantages of inheritance, like a single point of truth. It’s easy to alter *all* goblins to be tougher or weaker when balancing the game, so you don’t accidentally miss a goblin type, and end up with goblin wizards who are waaay too hard to kill, for example.

          It also has advantages of being data-driven: designers can alter parameters without fiddling about in code, or having to re-compile the game.

          The primary disadvantages are that your data loading code gets more complicated (but not necessarily much more complicated); your data loading code is less efficient (but it was probably limited by disk read speed anyway – if this saves a lot of replicated data you may find it loads faster anyway); and it pushes more complexity towards the designers (solvable with good tools, and/or bright designers).
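
          The loading side can be as simple as copying the base record and letting the derived entry override fields (a sketch, assuming records are string key/value maps):

          #include <map>
          #include <string>

          typedef std::map<std::string, std::string> Record;

          Record Resolve (const Record& base, const Record& derived) {
              Record result = base;              // inherit every field from the base
              for (const auto& kv : derived)
                  result[kv.first] = kv.second;  // fields in the derived entry win
              return result;
          }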

          1. Shamus says:

            Good Robot actually does this! You can specify a base robot and any attributes left un-specified will be taken from base.

            Overall I think this made the loading FASTER. Sure, fussing around with the base robot adds a little complexity, but the parser I’d written was humiliatingly slow. It takes the game between five and eight seconds to launch, and a majority of that is parsing text files. Using templates cut WAY down on the size of the robots file, which sped things up quite a bit.

            That slow parser really bugs me. I keep hoping I can find some justification+time to fix or re-write it.

            1. kdansky says:

              I just slapped my “to be parsed” code into an SQLite database. Very easy to edit, very easy to read, and most of the parsing happens in a database driver at blinding speed.

              If you need a more unstructured approach, I’d recommend going with JSON files. Writing parsers for your own file structure is much harder than just going with what exists, and using a properly optimised library that just outputs everything in structured tables / hashmaps, which you can then use to create your objects if you insist on OOP.

              Or even easier: Write some lua code files that you eval, and put the variables into them. You can then load the lua from C trivially easily (it’s quite literally more difficult to set up a C++ makefile correctly with gcc than it is to use the C-Lua API). That way you can still have very simple files for editing, they are as fast as lua goes (“very”) and you get the full language power in your config files.

              1. Rodyle says:

                Or use Linq, which is one of the best things that has ever happened to me. I’m seriously in love with it. I think it’s basically the best thing Microsoft has ever made.

                1. Abnaxis says:

                  Erm…bear in mind that it’s artists and hobbyists modifying these files, not programmers. SQLite and JSON might be easier to make work on the back end, but I’ve yet to meet an SQLite or JSON editor that’s easier for a layman to work with than a well-designed, human-readable text file.

            2. Kian says:

              A possible fix to that is to “compile” your data files once you’re happy with them. So they’re in a binary format that can be directly copied from disk to memory instead of having to be parsed every time you load the game. And it’s much easier than optimizing the parser, since the slow parser already outputs the stuff in memory-ready format. You just need to point the output back to disk. The only things to be careful with are variable-sized structures like strings (couple of ways to solve that).
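
              For fixed-size, plain-old-data records this is nearly free (a sketch; real code would also want a version number, plus care with padding and endianness):

              #include <cstdio>
              #include <vector>

              struct RobotDef { int base_hp; float move_speed; };  // POD only: no pointers, no strings

              void SaveCompiled (const char* path, const std::vector<RobotDef>& defs) {
                  if (FILE* f = fopen (path, "wb")) {
                      // One block write; loading is a single block read into a vector.
                      fwrite (defs.data (), sizeof (RobotDef), defs.size (), f);
                      fclose (f);
                  }
              }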

              You could ship the “compiler” together with the source data, so that modders can just edit the data, run the compiler and replace the old data files if that was desirable, or just ship the binary data.

              1. WJS says:

                I’ve seen that before. IIRC in the one I’m thinking of, the “compiler” was integrated into the game rather than a stand-alone, but the principle is the same. If you edited any game files, it would take longer the next time you started it as it processed the modified files, but only once. Then it would load a compiled version in future.

    4. Merlin says:

      Mammal and Pet being orthogonal is a reason to make Pet an interface that the various animals implement, not a reason to use multiple inheritance. That ensures that you’re implementing a standard vocabulary of pet-related verbs (feed, wash, play) in the manner appropriate to the specific animal being acted upon regardless of whether it’s a mammal or not.

      You’ll still have some difficulty because some animals are just bad at being pets – there’s not much you can do to play with a fish – but that makes the system work conceptually without unnecessarily complicating the implementation.

      1. Mephane says:

        Technically, in C++ (which is what I do daily) multiple inheritance and interfaces are the same. Interfaces do not even exist as a language feature, you make them by simply declaring every member function of a class pure virtual.
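
        That is, a C++ “interface” is just (a minimal sketch):

        class Pet {
        public:
            virtual ~Pet () {}
            virtual void Feed () = 0;  // every member function pure virtual,
            virtual void Wash () = 0;  // and no data members
        };

        class Cat : public Pet {       // "implementing the interface"
        public:
            void Feed () override { /* cat food */ }
            void Wash () override { /* good luck */ }
        };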

        That said, you are right, often an interface does the job just as well or even better.

        1. Merlin says:

          …cripes it has been a long time since I programmed on a regular basis. The Java & C++ knowledge all kinda bleeds together after all the years. Sorry about that!

        2. paercebal says:

          It’s a bit more complicated.

          Abstract Interface (this is the term used by Herb Sutter for a Java-like interface) is mostly a class without state.

          So, your member functions do not need to be pure virtual (only those you WANT the inheriting class to override). You can have simple virtual methods with default implementations, or even a fully-fledged NVI if you want to separate the public interface of your class (i.e. the non-virtual methods your class user uses) and the inheriting interface of your class (i.e. the private or protected virtual methods inheriting classes can/need to override).
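
          A minimal sketch of that NVI (non-virtual interface) idiom, with hypothetical names:

          class Connection {
          public:
              virtual ~Connection () {}
              // Public interface: non-virtual, the same for all implementations.
              void Send (const char* msg) {
                  // invariant checks / logging live here, in exactly one place
                  DoSend (msg);
              }
          private:
              // Inheriting interface: the only thing derived classes override.
              virtual void DoSend (const char* msg) = 0;
          };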

          The real catch with abstract interfaces is that you might want inheriting classes to inherit from your interfaces *virtually*, to make sure, if you have a massive hierarchy, your class inherits at most once from your interfaces (or you will have problems casting them into those interfaces).

      2. Wide And Nerdy ♤ says:

        As someone who hasn’t written any object oriented code in a strongly typed language* in years but keeps meaning to, keeps reading about it, and wants to have something to say: I just think interfaces are the neatest concept ever.

        Though I’m sure it’s my lack of experience talking, it seems like there should hardly ever be a reason to write actual parent objects what with composition and interfaces. It may be a little tidier but it seems like you run into much more potential for inflexibility. Though I suppose the ability to override the methods of the parent solves some of that.

        *I write small functional scripts in Javascript.

    5. Ryan says:

      Many languages specifically block multiple inheritance in order to avoid the Diamond of Death. That’s actually one of the major reasons for interfaces.

      1. Echo Tango says:

        Some languages simply use a deterministic, well-documented algorithm to handle the diamond problem! :)

        1. Wide And Nerdy ♤ says:

          I know I could put this comment anywhere but may I just say I love reading the comment section when Shamus posts about programming.* I don’t do what you guys do but I keep meaning to and I’ve done just enough of it to find all these conversations really stimulating.

          One thing that always gets me about this discipline is seeing just how much thought has already been put into it. If I have a question, there’s an answer pretty much always. And I think that’s intrinsic to this type of work since we** are building the systems that enable this collaboration, we’re the first to reap the benefits. Though our personalities might get in the way of that a bit.

          *Also love the articles themselves but that should go without saying. Shamus’s second major accomplishment is conditioning his fanbase to discuss programming productively.

          **I sheepishly include myself as a writer of small scripts and an occasional debugger of other people’s JS libraries.

          1. Echo Tango says:

            You gotta start programming somewhere. As long as you’re not bragging, or claiming knowledge you don’t have, I don’t think there’s any reason to qualify yourself in any kind of disparaging way. :)

    6. Kyte says:

      Neither Mammal nor Pet should be classes, though. Yes, all (well, most, but close enough) mammals are viviparous, but the specifics of how they give birth are different to each kind of mammal. Mammal should specify that LiveBirth() exists, but it shouldn’t be trying to implement it. Similarly, you want to play with Pets, but how that happens depends on the kind of Pet. So Pet should specify that Play() exists, but not trying to implement it.

      So what I’m saying is that a Cat is an object that implements the Mammal and Pet interfaces.

  2. James says:

    but shamus, there are four dogs

    1. Shamus says:

      I seriously thought you were pulling my leg. I had to look several times. That little guy on the right was just invisible to me for some reason.

      1. DGM says:

        Understandable. He blends into the larger dog behind him.

      2. Sannom says:

        He was to me too; I think I expected the dark patch on the side of the German shepherd and didn’t look closely when I noticed that it was there. The puppy just blends.

        1. Echo Tango says:

          Stealth Dog!

      3. Warclam says:

        Ohhhhh. I thought you were saying the cream-coloured cat behind the rooster was a dog. And I was thinking, no, I’m pretty sure that’s a cat.

      4. MADRED: How many dogs do you see there?
        SHAMUS: (guarded) I see three dogs.
        MADRED: Strange… I see four. Are you quite sure?
        SHAMUS: There are three dogs.
        MADRED: Perhaps you’re aware of the incision on your chest. While you were under the influence of our drugs, you were implanted with a small device. It is a remarkable invention. By entering commands in this PADD, I can produce pain in any part of your body… at various levels of severity. Forgive me… I don’t enjoy this, but I must demonstrate. It will make everything much clearer.

    2. Bubble181 says:

      There’s three dogs and a yappy rat. :-P

    3. DGM says:

      For a second I had a flashback to the “four lights” episode of Star Trek: TNG and wondered if you were about to give Shamus an electric shock.

      1. Daimbert says:

        THERE … ARE … FOUR … DOGS!

      2. tmtvl says:

        And now I’M thinking of Nineteen Eighty-four.

    4. krellen says:

      I see four dogs, four birds, two cats, a tortoise, a hare and two pigs (one guinea).

      1. Daimbert says:

        And a partridge in a pear tree [grin].

      2. Mephane says:

        It’s a rabbit, not a hare. Too small and the eyes would be different, too.

  3. Animal instead of car-based analogies? Wow whatta time to be alive!

    The essence of what I got from the video (and your article) is that there are many ways to solve a problem in programming, and it’s good practice to really think about why you are doing something a certain way: is it because it fits what you are trying to achieve, or are you just doing it because a lot of people have agreed that this is the “right way to do things”?

    Conventions do exist for a reason and they can be very useful, but you need to understand that reason and its implications in order to make good use of the convention.

    1. blue painted says:

      It’s the same in many worlds: You have to understand the conventions before you will know when it’s a good time to disregard the conventions ….

      1. Warbright says:

        I like this comment, because it leads me to assume you’ve traveled from an alternate reality and are just in the process of learning our conventions.

        1. No, I think it’s “traveled from another convention.” You know, that programmercon that’s probably going on somewhere right now.

          1. Blue Painted says:

            Isn’t this where I say “All your bases are belong to us!” :)

    2. MichaelGC says:

      I for one welcome the new era of Terrible Cat Analogies™! Although I will miss the car ones.

      1. Syal says:

        I’m now left with the idea that the Terrible Cat Analogy was initially caused by a typo.

  4. Rymdsmurfen says:

    When it starts with “THE MOST IMPORTANT PROGRAMMING VIDEO YOU WILL EVER WATCH”, I’m thinking “Aha, you’re one of those guys…” and stop watching.

    1. Matt Downie says:

      And by not watching it, you automatically prove him wrong!

    2. Phill says:

      It does make it sound rather like clickbait (or alternatively, “I’m going to wow you with my revolutionary insights which it turns out are already common knowledge in many areas”).

  5. CrushU says:

    Only eight minutes into the video…
    I’m not impressed with his arguments thus far.
    I’ve seen state-less systems, and they’re routinely more difficult to understand and parse than state-ful systems. Objects are the best way to collate a bunch of state into discrete parts that make sense. Objects are not the best way to collate a bunch of functions. The OO paradigm is about realizing that any time you want to perform an action, you ultimately want it to affect some THING somewhere, so that THING should be performing the action.

    When it comes to inheritance, the best uses I’ve seen for it are of the BaseObject -> SpecializedVersion -> CustomVersion sequence. (So if there’s a generic BeanCounter and then a more special GraphicalBeanCounter, and then Initrode pays you to make an InitrodeBeanCounter that’s also graphical, this makes sense. Because now you can take the same GraphicalBeanCounter and make a custom InitechBeanCounter, and then make an IndustrialBeanCounter that isn’t graphical.) In most cases, Inheritance as an extension of Types doesn’t exactly work out. I think the most simple way to say it is that you want Inheritance when there’s a set of operations you want to perform on all instances of the base class, and some specialized versions of the class should perform those operations differently.

    1. Mephane says:

      This is the cornerstone of frameworks like Qt and Wt. You just make a new class that extends a framework class and then use it as if it had been part of the framework.

    2. Groboclown says:

      His big conclusion at the end recommends putting your code into small modules, which sounds suspiciously like OOP without the enforced rigors of OOP. He then throws in some personal nitpicks and suggestions (don’t split up a function if you don’t have to) that apply to whatever paradigm you’re using.

      The whole source of his argument revolves around message passing, what that means, and how OOP doesn’t actually use that in practice. If you’re not sure how the data is used, then it doesn’t matter how you program; you’re still liable to break the business rules.

      The main arguments against OOP seem like arguments against bad design. He didn’t detail it out, but his module approach sounds like he’s advocating more for SOLID design principles.

      1. Mephane says:

        (don't split up a function if you don't have to)

        I haven’t watched the video myself, but generally speaking, this is terrible advice. Only splitting up functions when one “has to” can quickly lead to overbloated monster functions full of boilerplate.

        Let’s say I have a function that reads a file, identifies specific lines in it, alters these lines, then writes the file again. Even if that is the only place in your program where you ever read a file, search for text, modify text or write a file, you’d end up with a massive function that has the raw file reading code, the text search code, the text alteration code, and the file writing code all thrown in it.
        Even if none of this code is actually reused anywhere else in the program, splitting them apart makes the code easier to read, easier to debug, easier to unit test, easier to modify and easier to reuse in the future*.

        *Even if none of the code is reused right now, two years later someone else comes upon a similar problem, and then they have to either refactor your monster function into several functions, reimplement the same boilerplate again, or copy-paste parts of the monster function. Usually it is not the refactoring that is chosen, and then you end up with two boilerplate-ridden monster functions.

        1. DerJüngerLudendorff says:

          I was always told that your functions should always be as small as possible. Or more specifically, that your function should do exactly one thing, in order to keep your code properly organized and make it easier to read, change and debug.

          So in your example, you could have one function for accessing the file, one function for reading the file, one function which identifies the specific parts, and one function which writes/modifies the text. Then call them as needed in your “primary” function.
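
          A sketch of that decomposition in C++ (helper names are made up):

          #include <fstream>
          #include <sstream>
          #include <string>

          // Each helper does exactly one thing...
          std::string ReadFile (const std::string& path) {
              std::ifstream in (path);
              std::ostringstream ss;
              ss << in.rdbuf ();
              return ss.str ();
          }

          std::string AlterLines (std::string text) {
              /* identify and modify the lines of interest */
              return text;
          }

          void WriteFile (const std::string& path, const std::string& text) {
              std::ofstream out (path);
              out << text;
          }

          // ...and the "primary" function just composes them.
          void UpdateFile (const std::string& path) {
              WriteFile (path, AlterLines (ReadFile (path)));
          }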

          1. guy says:

            According to my software engineering course, it has been empirically determined that if a single function has more than four conditionals in it, ease of maintenance and readability crash. Plus the number of tests required to fully test the thing goes nuts.

            If you split it into subfunctions, the main function should be easily readable (if a line says
            String text = this.loadFile(filename);
            it’s pretty obvious what it does overall) and each subfunction can be tested in isolation, with 2^n test cases to check every conditional.

        2. Xeorm says:

          It depends though on what you’re doing. You’ll see functions get very small with some programming paradigms, but not with others. Gaming in particular tends towards having larger functions than other industries. It’s easier to modify a single function than it is to search through five to determine which one does what, and gaming functions in general will often look bigger due to the way the math works.

    3. Mick4747 says:

      I’m just a lowly CS student, but I was waiting to see if anybody would mention polymorphism. It’s something I’ve definitely made use of in my projects outside of school, but I have no idea how commonly it’s used in “the real world.”

      1. Groboclown says:

        Polymorphism has its place, but you need to be really careful with it – a book I read a long while back (“Testing Object Oriented Programs”, I believe it was called), made an insightful comment that with polymorphism, the logic of a program moves into its structure, which can make debugging and maintenance difficult.

        As others have pointed out, you’ll find interfaces, which are a special case of polymorphism, to have broad applications without the troubles introduced with polymorphism. I’m not saying that it’s bad, it just needs to be used carefully.

    4. Xeorm says:

      It’s been awhile since I’ve watched the video, but the general gist I had of it was that he was right but only technically. Building code like he would want would optimally produce better stuff, but it’s missing the point of OOP in that it’s mostly designed to facilitate collaboration and reduce complexity rather than to design a better program. It’s similar to the old argument that using machine code will produce more streamlined games, but only if you do it exactly right, and maintaining the code base would be a nightmare.

  6. MichaelG says:

    If I were doing a new programming language, I’d just have interfaces. Interfaces could nest, but objects just implement interfaces. If they want to defer to other objects internally, that’s fine, but there’s no formal inheritance.

    1. Ingvar says:

      That’s pretty much the tack that Go took (apart from not having objects, really; methods are defined on types, as it were, but you can only define methods on types created in the current package and types are name-based, not structure-based).

    2. Daimbert says:

      If you mean Java-style interfaces, I have to disagree. I’d go the other way instead, and have multiple inheritance. The problem, for me, with Java-style interfaces is that if you define a method in the interface then EVERY class that implements that interface has to implement it … even if they don’t care about it. Whereas with inheritance if some subclasses don’t care about a method they don’t even have to know it was created. You can create a method in a common parent that does nothing if you need to override it, but in general all you need to do is implement it at the highest intelligent level and everything below just does that that level does, unless they explicitly need to handle things differently.

      The only thing that I like about Java-style interfaces is that you can do an “instanceof” using the interface and know that if it says it implements the interface you can call those methods on it and it will do something sensible. But inheritance gives you that, too.

      1. Ingvar says:

        In Go, “implementing an interface” means that all the methods that define the interface have been implemented, it’s not something you claim your type does (well, if you pass something of type Type to something expecting something of type TheInterface, there’s compile-time checking that Type has all the requisite methods implemented and otherwise a compile-time error is thrown).

        It is also frequently recommended that your (Go, again) interfaces only ever have the absolute minimum of methods on them.

        All in all, it’s not an unpleasant language, but the lack of overloading seems to be a blocker for many.

      2. MichaelG says:

        C++ multiple inheritance always feels fragile to me — am I referencing this object with the right type? Seems like I’ve gotten weird storage mangling bugs doing it wrong.

      3. Zukhramm says:

        With Java 8, Java actually allows you to put implementation in interfaces.

        1. Richard says:

          … and so the circle is complete.

          In C++, an Interface is just a class with no data members and no implemented methods.
          (Pure-virtual)

          In the real world, most of the time you actually want the base interface class to implement some of the methods – often as do-nothing, sometimes as the most common or always-necessary version.

          For example, a 3D game might want a generic “physical” object that has “Initialize”, “Render at location”, “Check if collided with other object”, etc.

          There’s a small amount of code and state data that every single such object needs to set up in “Initialize”, so you make the Interface class do that – otherwise you’d be copypasting it everywhere and that’s a nightmare.

          Now it’s not a pure virtual interface any more – but it’s far more useful.

          I would say that, fundamentally, inheritance is about avoiding copying the same bit of code all over the place.

          You could of course call a global Initialize method and give it a pointer to the datastructure to Initialize – but that’s just an object by another name. Or “C”-style, to give it another name.
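
          The same idea translated into a rough Java sketch (names invented; the abstract class plays the role of the partially-implemented interface class):

          abstract class PhysicalObject {
              protected boolean initialized;

              // The shared setup every object needs, implemented once
              // here instead of copypasted into every subclass.
              void initialize() {
                  initialized = true;
              }

              // Still "pure": each object supplies its own versions.
              abstract void renderAt(double x, double y, double z);
              abstract boolean collidesWith(PhysicalObject other);
          }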

        2. Wolle says:

          To be more precise: Java 8 interfaces allow default methods, but no (non-static) fields, which means that it’s basically reduced to empty methods, returning a constant, or calling other methods. It’s presumably to get away from “patterns” like WindowAdapter and other listener adapters.

          It’s completely sound. The problem with multiple inheritance was mostly in the fields and “diamond inheritance”.

          1. Zukhramm says:

            The main reason it was added was to be able to add methods to existing interfaces without breaking existing implementations. They’re also useful for adding things like composition (which is easily expressible in terms of the other interface methods) to functional interfaces.
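
            A simplified Java 8 sketch of both points (hypothetical interface; java.util.function.Function’s andThen works the same way):

            @FunctionalInterface
            interface Transformer {
                int apply(int x);

                // Added later without breaking existing implementations,
                // expressed entirely in terms of apply().
                default Transformer andThen(Transformer next) {
                    return x -> next.apply(this.apply(x));
                }
            }

            // Usage: Transformer doubler = x -> x * 2;
            //        doubler.andThen(x -> x + 1).apply(5)  // 11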

      4. Echo Tango says:

        “EVERY class that implements that interface has to implement it … even if they don't care about it”

        That sounds like the interface in question is too large. I prefer the Go way which is, as Ingvar pointed out above, to have interfaces be as small as possible. Your object just implements multiple interfaces for the functionality you care about.

    3. mhoff12358 says:

      You might want to look into Rust, it might solve some of your issues.

  7. John says:

    It’s true, mathematicians love inheritance. I attended a small college where the CS courses (such as there were) were all taught by math professors. For example, I took CS 50 (Introduction to Programming with Pascal) from the same man who later taught me Real Analysis. My CS 60 (Introduction to Object-Oriented Programming with C++) class was taught by my Linear Algebra professor. He was so very excited to teach us about inheritance. I think the elegance of the concept appealed to him. I may have forgotten all of the C++ I presumably learned the minute the class was over, but I can still remember his explanation of inheritance. For one thing, he had the biggest smile on his face. For another, it involved repeated references to the Sandra Bullock film While You Were Sleeping.

    1. Tizzy says:

      Inheritance works really well with category theory and abstract algebra, so some of the more recent symbolic computation packages define classes such as groups, rings, ordered rings, etc.

      The resulting diagram is quite complex.

  8. The way most people teach Object Oriented Programming is to focus on the inheritance of behavior. You make some network-protocol parent class that has functions and methods used by nearly all types of network protocols, and then start making child classes for specific protocols which reuse those behaviors.

    That use case is not common at all. What’s far more common is implementing an interface. You write a class and implement an interface. Once you’ve done that, you can pass any object created from the class to a routine that expects an object with that interface, and it will generally work as long as you implemented everything. And most compilers will force you to implement every routine or property of the interface, with at least a stub.

    I program primarily in Visual Basic 6, maintaining a CAD/CAM application that has some code dating back to the mid 80s. For over the past decade I’ve been slowly shifting it over to the .NET framework. VB6’s object orientation is a thin wrapper over Microsoft COM. COM doesn’t support inheritance except by delegation. It does, however, support interfaces.

    I used to think that was a limitation, but then I got the book Design Patterns. For those of you not familiar with patterns: they are to object-oriented programming what algorithms are to structured programming. Individual patterns are descriptions of how to create a combination of objects to do a useful task.

    For example, one pattern I adopted was the Command Pattern, which shows how to construct a standard way of handling UI commands, including undo and redo. Design Patterns should be on every programmer’s mandatory reading list.
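
    A bare-bones Java sketch of the idea (the book’s version is richer; all the names here are invented). Redo is just a second stack that undoLast pushes onto.

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Each UI action knows how to perform and reverse itself.
    interface Command {
        void execute();
        void undo();
    }

    class CommandStack {
        private final Deque<Command> history = new ArrayDeque<>();

        void run(Command c) {
            c.execute();
            history.push(c); // remember it so it can be undone later
        }

        void undoLast() {
            if (!history.isEmpty()) history.pop().undo();
        }
    }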

    Now, what I noticed in Design Patterns is that only a handful of the dozens of patterns relied on inheritance. The vast majority relied on implementing interfaces. I read more on the topic, and the additional patterns I ran into were mostly about implementing interfaces as well.

    I looked at my own code, and yes, the parts where VB6’s OO stuff came in handy all had to do with interfaces. For example, my CAD/CAM application has several dozen parametric shapes, each with its own set of dimensions and its own way of calculating the final parts to be cut out of a flat sheet of metal.

    However the UI doesn’t worry about the unique details because they all implement the IShapeProgram interface. With that interface I can tell it to calculate the shape, give me the list of dimensions to display, pass it a drawing context to display the shape, pass it a printer context to print the shape and so on.
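
    In Java terms, such an interface might look something like this (the method names are my guesses, not the actual VB6 code):

    // The UI only ever talks to this interface, never to the several
    // dozen shape classes behind it.
    interface IShapeProgram {
        void calculate();                     // work out the parts to cut
        java.util.List<String> dimensions();  // dimensions for the UI to show
        void draw(Object drawingContext);     // render to the screen
        void print(Object printerContext);    // render to the printer
    }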

    The software I am working on was started by another programmer in the mid 80s. Three times we had to port it: first from an HP workstation running a unique OS to a DOS PC, then from DOS to Windows using VB3, and finally from VB3 to VB6. The VB6 version is proving the most lasting because of my extensive use of interfaces. The company has a dozen different versions of the software controlling different types of metal cutting machines, all of them running on the same set of core libraries.

    What we are doing now with the .NET framework is converting parts of it over a piece at a time. The new .NET code implements the same interfaces, so to the older VB6 code it appears the same.

    And while it’s not cut-and-paste compatible with Android or iOS, those conversions are a lot faster because we use their equivalents to interfaces to set up the same structure. This gives a template for the conversion that is enforced by the compiler.

    Is it all milk and honey? No. The most difficult parts are the graphics at the highest UI levels. Differences in keyboards, mice, and screens make replicating behavior from one setup to the next difficult. But with the way we have it, the problem is isolated to specific functions; we don’t have to rewrite the whole system or the core library. Once we figure it out, we can craft an object around the algorithm and hand it off to the drawing routines, which work with anything that implements that interface.

    For example, in VB6, drawing on a form or picture box is straightforward and flicker free. Using the equivalent steps in the .NET framework did not work. Finally, after trial and error, we found that using the .NET classes for double buffering in a specific way did the trick. So we wrapped that up in an object that implemented the drawing interface, handed it off to the original VB6 drawing code, and it just worked.

    In short OOP works.

    1. Groboclown says:

      Like everything else in programming, interfaces are a great tool, but they’re only a tool. Indeed, there are even multiple classes of interfaces – off the top of my head I’m thinking of the “here’s what I can do” list of publicly published actions, and the “I’ll call you” callback that allows for function composition.
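
      In Java, the two flavors might look like this (hypothetical names):

      // "Here's what I can do": capabilities the object offers to callers.
      interface Playable {
          void play();
      }

      // "I'll call you": a callback the caller hands in, invoked later.
      interface OnFinished {
          void onFinished();
      }

      class Song implements Playable {
          private final OnFinished callback;
          Song(OnFinished callback) { this.callback = callback; }

          public void play() {
              // ...play the audio...
              callback.onFinished(); // the object calls you back
          }
      }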

      If you spend time in some highly parallel algorithms, you start running into issues. This is where functional programming shines, as the enforced immutable state makes data access much easier and rules out whole classes of concurrency bugs.

      However, to me the biggest reason why we have this debate of functional vs. object oriented vs. procedural is users. A user comes to a programmer and says the car must have a radio which plays your radio station unless the cell phone rings in which case the radio must tell me who it is unless the window is down in which case I don’t want to be bothered, but I still want to be able to buy an aftermarket radio so I need to be able to replace it and I might also want an equalizer so you need to allow me to hook that up. All these different pieces of information and situations need to be negotiated with the different systems.

      No matter how you structure your code, a user can always come in and ask that something else be considered that you hadn’t designed for. I think that’s where the posted video was going – functional programming allows for the program to develop naturally to capture the evolving user needs.

  9. miroz says:

    I learned to code first, had a few years of business experience, and only then went through a university program. I knew enough programming by then that I could immediately spot errors in logic or usability in anything professors tried to explain using simple principles like mammal>pet>dog. Other students were excited: such easy, understandable programming concepts instead of the usual C++ pointer management.

    And that’s why we got OOP in such a poor state: universities teach students bad OOP, and the ones who never actually use it become teachers and repeat the cycle.

  10. Da Mage says:

    Going to have any mention of Functional Programming? cause that’s a whole other can of worms. An interesting look into where programming could have gone if we hadn’t done Procedural.

    1. Tektotherriggen says:

      My (poorly informed) impression is that functional programming is another thing that causes a lot of excitement among academics, due to its elegance; but I’d hate to be the guy who has to write an experimental control system with it (my current project).

      1. guy says:

        My experience with it is that it’s great to have immutable state and not have to worry about unexpected errors from unrelated parts of the program, but when you actually need to maintain state it turns into an enormous mess as you keep passing a state object around. It made writing an interpreter pretty hellish.

        On the other hand, I loved being able to pass a function as an argument; I’ve sometimes found myself doing that in Java by creating an object and passing it just to call a method, and it’s horribly inelegant. I think Java 8 added support for passing functions, but in some horribly overcomplicated way.
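
        For the record, Java 8’s version is lambda syntax over one-method interfaces, so the old object-wrapping dance shrinks to a single expression. A sketch:

        import java.util.function.IntUnaryOperator;

        class PassingFunctions {
            static int applyTwice(IntUnaryOperator f, int x) {
                return f.applyAsInt(f.applyAsInt(x));
            }

            public static void main(String[] args) {
                // Pre-Java-8: build an object just to carry one method.
                IntUnaryOperator inc = new IntUnaryOperator() {
                    public int applyAsInt(int x) { return x + 1; }
                };
                // Java 8: pass the function directly.
                System.out.println(applyTwice(inc, 5));        // 7
                System.out.println(applyTwice(x -> x + 1, 5)); // 7
            }
        }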

  11. Tever says:

    So is there a resource you would recommend for learning? Because I think this stuff you’re talking about might be why I always get so frustrated trying to take classes and read books about programming. At the very least, I’d like to try a different method before I decide maybe programming isn’t for me.

    1. Echo Tango says:

      Learn Python The Hard Way is a pretty good place to start if you’re fairly new to programming. Some fun games/tools to see if you like the concepts, problems, etc of programming are Human Resource Machine and Factorio. TLDR: Do you like slamming into obtuse problems all day, until you finally hit on the solution? How about working with a dumb machine, which is relentless in its ability to follow your instructions to the letter?

      1. Tektotherriggen says:

        “…are Human Resource Machine and Factorio”. Or anything by Zachtronics, like SpaceChem.

        But note that all these games fall into the exact trap Shamus mentions in the article – they will let you have a lot of fun with algorithms, and optimising them, even to the extent of real learning. But they don’t deal with collaborative code, user interfaces, network or file protocols, or satisfying unclear user requirements (see also Shamus’ latest series on the dot-com crash), which make up large parts of many real programming tasks.

        In professional terms, you may well have a good head for user interfaces or be able to draft useful design documents, even if you aren’t a genius coder. Those skills should be valuable too, and are probably rarer.

  12. Dev Null says:

    Read the opening block, and was all ready to come on down here and rant about people with hammers deciding every problem has to be solved with a nail… But then I read the next block:

    One of the major problems with discussing programming is that far too often we talk about it as if it was a single discipline.

    Precisely.

    Object oriented code is quite useful in a number of specific instances where you can actually leverage its strengths. Anyone who teaches you to use it all the time is an idiot, as is the guy telling you never to use it.

    1. silver Harloe says:

      Agreed! You CAN use a flat-head screwdriver to turn a Phillips-head screw, but not very well; and if you come across a bolt, you’re hosed unless you have a good socket set. Get all the tools, learn when you use each, and you’re better off than if you just have one kind of tool or if you just throw away some tools on principle.

  13. Alexander says:

    Hey Shamus, I’m currently in university, and one of the classes required to graduate involves working in a huge old mess of code that has been modified and ‘improved’ by every previous class. The assignments are generally to add or adjust some functionality in the software without breaking any other part of it, or to write test cases for existing code that was never tested by previous classes. So that sort of class is out there, but I do not know if any other schools do similar things.

    1. Adam says:

      +1 for your teacher/professor

    2. Groboclown says:

      I’ve seen some universities require their students to participate in the “Google Summer of Code” (or something similar), because it accomplishes the same thing: the student (or students; it can be a group project) works on an open source project to improve the code base.

      1. Tizzy says:

        Whether or not it is required, it’s a good idea for budding programmers. It’s good practice, and it helps build a résumé. Hell, you can even get unsolicited job offers from reputable companies on the basis of your contributions alone. Even for non-CS majors.

    3. Blake says:

      In one of my units at uni some 9 years ago, we had an assignment where we were given the source to a very simple DirectX game and had to find and fix some bugs in it.
      We weren’t given any sort of run down on the structure of the engine, we just had to get in there and figure it out.

      Pity it was only one assignment across my entire degree; I feel like all programming subjects should have a similar task.

  14. Alan says:

    The key is to use the techniques that make sense for the problem you’re trying to solve. I’m deeply suspicious of any person or programming language that promises to make my life better by taking away options. When you take away options, you make some solutions far more awkward to write and read. I want objects, multiple inheritance, interfaces, generics, exceptions, globals, garbage collection, manual memory management, RAII, scoped variables with guaranteed immediate destruction, functions not attached to objects, purely functional code, code that side effects like crazy, operator overloading, function overloading, anonymous functions, closures, and more. Perhaps it’s a bit much to ask any given language to do all of them at once, but give me a big subset! Any language that survives in the mainstream either was designed from the start to grow and cover multiple paradigms, or has bolted on clumsy additions.

    1. Mephane says:

      You just described C++ (more precisely, every version since C++11). :)

      1. Phill says:

        You beat me to it…

      2. Alan says:

        Perhaps unsurprisingly, I’m very fond of C++. :-) (And Perl. And Python is growing on me.)

        Vaguely relatedly, if anyone is curious why C++ is the glorious confused mess that it is, I recommend Stroustrup’s The Design and Evolution of C++.

  15. Abnaxis says:

    I really feel like you’re being more than a little unfair to academics. Maybe it’s just the school I went to, but every single “academic” I worked with not only had experience working with Real Problems, they continued to work on said problems while they were teaching. Academics aren’t disconnected people who live in ivory towers with no idea how the real world works; they actually do work for a living, and despite the disclaimers, the tone of your article was more than a little condescending.

    Still, it is definitely true that 90% of actual coursework is writing bubble-sort from scratch and talking about OOP paradigms, but I feel there’s a good reason for that: those concepts are not language specific (unlike hunting for header files), you really need to understand coding before you can come close to performing real-world duties, and the practical skills themselves don’t translate well into a lesson that can be taught in a classroom (in an apprenticeship setting it might work, but not in a classroom). Again, however, it’s unfair to say that even those skills are ignored, because most schools I’ve seen have “project courses”, usually in the senior year, where students are required to work with “real” code to accomplish some task.

    1. Mick4747 says:

      I think another issue is that, at least in my school/job market, CS students are absolutely expected to also work in internship programs, preferably for their entire junior and senior years. The main reason I’m going to school at all is so I can get into an internship (not that I don’t think the education itself has significant benefits). And yes, we also have a required “Senior Capstone” project that involves creating production-quality software for a local business.

      Of course, not every college offering CS degrees is located in a city that actually has a significant industry presence.

      1. Zukhramm says:

        Another thing is that, while related, computer science and programming are not exactly the same thing.

        1. Mick4747 says:

          True.

        2. Groboclown says:

          Mandatory link

          “Despite having invented much of the technology of software, Dijkstra eschewed the use of computers in his own work for many decades.”

    2. Sabrdance (MatthewH) says:

      I teach research design and program evaluation to public administrators (mostly local officials, people bucking to be department heads). As part of that class I teach basic database management and data cleaning, which involves some very limited programming. What I’ve learned is that a lot of students only ever work with data that has already been cleaned, so when they get raw data -data that has errors in it, or that uses text when it should use numbers, or any of the thousands of other ways real raw data isn’t like the pretty stuff we use in stats classes- they’re lost.

      As a result, I added a week-long unit on how to find and clean data in the real world.

      My own experience was that this was never taught to me because the assumption was I’d figure out how to do it on my own -which I did when I wrote my dissertation -but it took several months of trial and error to learn. You do not know true pain until you spend months cleaning 1990 census data, and go to match it with the 2000 census data only to learn that there are new counties (the unit of analysis) carved from the old counties, several merged counties, oh, and they changed a bunch of definitions of “income.”

      They don’t put this in the textbooks, and so the teachers don’t think to cover it unless, like me, they have a bad experience. I imagine mathematicians and physical scientists rarely encounter this kind of problem because they produce their own data rather than pulling from other databases, and so would be unlikely to think of this type of problem when teaching the much more complicated programming to CS students.

      1. Abnaxis says:

        Yeah, my wife has masters degrees in statistics and in sociology, and her sociology thesis involved doing longitudinal data analysis with census data… and it was indeed very, very ugly getting that data cleaned. Even worse, now she’s working in medical research, and commonly has to deal with datasets of 30,000 patient records that were hand entered by some belabored resident somewhere under the direction of an investigator who rarely understands what the proper formatting of the data should be (I mean, at least the census takers are nominally trained in proper data entry).

        Not only that, but she has someone working under her that she constantly has to watch over to make sure he’s actually paying attention to what customers want, not just blindly doing the analysis they ask for (seeing as they’re physicians and not statisticians, what they think they want from an analysis and what they actually want rarely lines up).

        All of which is to say that yeah, you rarely get anyone straight out of college who is practically ready to work without supervision by a senior engineer/statistician/technician/whatever. But that isn’t because academics are out of touch. It’s because school generally isn’t the place where you learn those skills, barring capstone projects. Practical skills come from practical experience, and if I’m going to spend thirty hours cleaning data so I can learn the practical aspects of data management I prefer to get paid–as opposed to paying tuition–for the experience.

        School is just there to teach theory so you can get the most out of your experience when you start learning from experience.

    3. Echo Tango says:

      My anecdotal data-point: The university I went to had a lot of the pure-academic style professors, whose main focus was on difficult algorithms, and other high-level stuff far away from the “real world”. It had a couple profs who did more “real world”-type work, and from what I hear, they’re improving the ratio recently, so there’s a better mix of “academic” and “real-world” stuff in the department.

  16. TMC_Sherpa says:

    To a man with a hammer, everything looks like a nail. -Twain

    1. Geebs says:

      To a very angry man with a hammer, every nail looks like his boss.

  17. Abnaxis says:

    I used to work in HVAC, programming user interface and control software so maintenance people can diagnose and optimize the gigantic temperature control machinery for high-rise buildings.

    Note that for the most part, the people who write this code are nothing like your normal programmer. Usually, they are mechanics who picked up a little control theory here and there, and who are mostly computer illiterate. OOP is a necessity, because the most successful systems use a graphical API so the mechanics can build a program without having to learn code. I’m a little different: when I encounter an application where these systems don’t have a pre-engineered set of objects to do what I want, I’ll dig into the guts of the underlying framework and write my own.

    With that context, let me describe two different programming frameworks my last company used. Both are written in Java, with a graphical programming API to allow non-programmers to link objects together and build functioning code out of it.

    For one of these frameworks, the developers are infamously strict about maintaining their object model–and it shows. If I’m working with a communication protocol I’ve never seen before, it’s OK–I can pretty much guess how it’s going to work because they all inherit off the same classes. When I need something custom, it takes five minutes to roll out something workable because everything has the same virtual methods, every class has documentation telling me what methods and properties I can access; ultimately everything is *standardized*. Even if you aren’t an advanced user, the product is robust and is easy to work with once you’ve learned it.

    The other framework is very much not strict about maintaining a consistent object model. There are at least four different ways to specify an enumerated variable, depending on what object you’re working with. Half my time working with it is spent trying to coerce one data type to another to get two objects to talk to one another. It’s not super stable or reliable. Performing the same task in the graphical interface two different ways that should be equivalent will sometimes result in wildly different behavior with no discernible reason why. It’s a mess of legacy soup, lugging a boatload of ass-backwards-compatible baggage from previous versions, and they still haven’t settled on how to represent objects in a clean way. And it shows: even among less advanced users, the tool is a pain in the ass that people only put up with because it’s less proprietary and supports better hardware than the first framework.

    From that context, my experience is that highly structured OO systems are much easier to work with if you’re going to distribute and reuse that same code to a massive degree, and that’s partly why there’s so much focus on it.

  18. Mick4747 says:

    At my current school, most of the CS teachers seem to be part-time teachers and full-time programmers. The problem with this is that most of them are terrible teachers.

  19. Nick-B says:

    There can be a practical application for this in NPC behavior: NPC > Guard > Castle Guard vs NPC > Civilian > Jester.

    All of them have pathfinding, which can go into the general NPC class, but civilians and guards behave differently when it comes to combat and player detection (in, say, Thief). Since you don’t want to copy-paste your pathfinding logic 20 times for the various NPC types, a hierarchy makes sense in this case. Heck, if you want to be totally fair, you can have NPC and Player both derive from a “Human Entity” class, which handles solid collision and texturing.
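
    A minimal Java sketch of that hierarchy (all names hypothetical):

    // The shared pathfinding lives once, at the top.
    abstract class NPC {
        void pathfindTo(int x, int y) { /* shared A* logic */ }
        abstract void onSeePlayer();
    }

    class Guard extends NPC {
        void onSeePlayer() { /* draw sword, give chase */ }
    }

    class CastleGuard extends Guard {
        @Override
        void onSeePlayer() { /* also raise the alarm */ }
    }

    class Civilian extends NPC {
        void onSeePlayer() { /* cower and flee */ }
    }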

    But this distracts from the fact that outside of video game development, I can’t really picture a reason for this level of abstraction. I QA’d for a bit on some regular software by a big company and I didn’t notice much that could be split up like this. The problem with OOP books is they teach how “cool” this is, but rarely get into practical uses.

    1. Groboclown says:

      Here’s a “real world” example that I fall back on – a virtual file system.

      Most languages give the programmer a toolbox of ways to interact with different systems – local files require a way to reference files and get their data, remote FTP sites use network protocols that require a different way to get the data, the contents of a zip file have a specific format, and some backup systems have their own way.

      Putting all these under a “virtual file system” allows the programmer to talk to them using a common set of tools without having to worry about the details. “Give me the data for this file” becomes trivial now.
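
      A sketch of what that might look like in Java (a hypothetical interface; real virtual file systems are far more involved):

      import java.io.IOException;
      import java.util.ArrayList;
      import java.util.List;
      import java.nio.file.DirectoryStream;
      import java.nio.file.Files;
      import java.nio.file.Path;
      import java.nio.file.Paths;

      // Local disk, FTP, zip archives, backups: all behind the same methods.
      interface VirtualFileSystem {
          byte[] readFile(String path) throws IOException;
          List<String> listFiles(String directory) throws IOException;
      }

      class LocalFs implements VirtualFileSystem {
          public byte[] readFile(String path) throws IOException {
              return Files.readAllBytes(Paths.get(path));
          }

          public List<String> listFiles(String dir) throws IOException {
              List<String> names = new ArrayList<>();
              try (DirectoryStream<Path> ds = Files.newDirectoryStream(Paths.get(dir))) {
                  for (Path p : ds) names.add(p.getFileName().toString());
              }
              return names;
          }
      }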

      Unfortunately, this is where some of the warts of polymorphism come into play. If you write something like this, you have to be really careful to define the way you expect it to be used. If you want to support web pages, now you can’t support listing files anymore unless you come up with some special cases the programmer has to be aware of. Adding support for reading files that span CDs requires asking the user to eject the old disc and insert the next one, which now adds UI feedback and all kinds of terrible problems into the mix.

      This is one big area of bugs for software maintenance. Someone sees existing polymorphic code and thinks, “Gee, I could take advantage of that and make something new that solves this problem!” However, there may be some bit of code in the dusty corners of the project that uses it in an unexpected way.

  20. Alex Broadhead says:

    As a software engineer who mostly worked on signal processing applications early on, and then moved to embedded controls later, I used to _hate_ C++ and OO. “This stuff is useless! Not everything is an object! Trying to turn signal processing into objects is nutzoid!”

    Then I started working on the FPI, and later GUI side. Turns out that there are some things that OO is good for. (Though I still don’t like C++ much; Objective C is much more self-contained.) Creating a library of interface components is a great application for OO. I need lots of instances of buttons and text boxes and sliders, etc., and they all share a lot of functionality under the hood, and I’d like to customize them.

    That said, it’s still a bad idea to try to turn signal processing blocks into objects when they naturally express themselves as functions, and it’s an even worse idea to try to turn signals themselves into objects when they’re just streams of raw data. OO is good for some things, and not good for others. Recognizing things like this is why we’re called software _engineers_.

    1. Blake says:

      I pretty much agree.
      In all the projects I’ve worked on, UI code is basically the only place I’ve wanted to have real OO stuff with base classes that have data and do logic.

      Interfaces get used all over the place from things like a serialiser which might be for sending data across a network or saving data to disk or whatever, to generic handling of asynchronous system tasks that all just want Kick, Update, GetStatus and GetError functions, but I think that’s a very different thing to base classes with logic in them.

      I’m sure the UI code could be rewritten in some interface-only way to just call lots of the same functions, but I think for that one case, using OO paradigms actually makes for more robust code.

  21. фывапролджэ says:

    Guess what video I was watching just before checking one of my favorite blogs for new posts.

    Google analytics-snoopalitics rarely work as intended, but when they do, they only make me paranoid.

    Here’s an idea for how programming should be taught: students choose an open source project and contribute to it. The uni can host its own software for students to develop. Old Unix was an OS targeting universities, so why not write your own? This was practically the case with Minix, IIRC. Don’t want an OS? No matter, there are always things to do. Maybe the biology department needs some DNA-analyzing utility or whatever.

  22. What you learn in school vs. what you do in the real world doesn’t always seem to mix well. It’s a frequent problem because academia seems to be in the stone age, while people working in the field are dealing with much different, new issues.

    I’ve been playing guitar for about ten years now, and got really good not because I decided to learn all the fluffy pop music or techniques I had to learn in school, but because I ended up following my own inclinations and learned polyrhythmic time signatures before I was even able to do it myself. While my so called “formal” training in guitar and music is minimal (I forget chords, notes, and scales besides the bare basics), I can completely out-play most people who I know have been taking everything by the book for a long time.

    Coming back to coding, I imagine this is a similar problem: they teach something so naive that it isn’t actually important. I do want to code an app someday… but I’ll probably just build it as I learn. I’ve got time.

    I wrote something addressing this a couple weeks ago. I’m not sure that I was entirely correct on what I wrote though… the real world seems to be different from what I bludgeon through in a classroom… although admittedly, this addresses my own field, anthropology, which realistically has no deadline and you have to be an “expert” academically before you can actually work in the field.

    1. Shamus says:

      Heads up: I wanted to check out your link but it seems to be blank. (Not sure what went wrong there.) Can you post the URL?

      1. http://www.unknownvoyage.com/2016/11/academic-vs-popular-learning-whats.html?m=1

        Mine only touches on learning at the surface vs. learning via information (overload). But there are things that could be relevant to what I said above.

      2. One last note: academia focuses on putting together the old and trying to understand something new. In other words, it studies both old and new things which may not be important for several decades, centuries even.

    2. Benjamin Hilton says:

      I was actually lucky enough that my main CS professor had worked in the industry since the 70’s and left it to teach, so he was all about what actually goes on in a job.

      1. Most college degrees/fields need some practicality. Programming and engineering do, that’s for certain. Social sciences and fields such as English often have to be flipped on their heads to be practical.

        EG, my Anthropology courses never explain practical usage except in vague detail. But if I’m using my skills practically, they might work well for bringing a business overseas to another country with a different culture. The first example that comes to mind for me is stores/restaurants.

        Using something for the real world, and teaching it that way, is more useful than technical jargon. But typically academia has to deliver information to be utilized or processed later, unless it becomes outdated. In that case, tough luck learning something useless.

        1. Benjamin Hilton says:

          It can go the other way, however. I also studied Psychology (weird combination, I know) and in that field the major problem right now is getting information from academia to the practical world. Major breakthroughs are being made in terms of understanding the brain, as well as in collating data about many psychological issues, but so many psychologists who actually treat people are either too entrenched in their ways or unconvinced of the new information because they didn’t do the experiment firsthand. There are literally treatments that the world of academia knows are harmful (psychologically or physically) to patients that are still being used, because researchers can’t convince the people in the field.

    3. Blake says:

      “They teach something so naive that isn't actually important.”

      I’d actually disagree regarding programming, largely because so much of your work is working with other people’s code, and you really need to be able to communicate in the same way.

      It’s like if you were asked to fill in for someone on guitar one night, and were given a couple of hours to prepare and only had their notes which were referencing all those chords and things you aren’t too familiar with. Or, alternatively, being asked to write some music for someone, then them wanting you to be able to write it out in a way someone else could perform a few years from now.

      Being a good programmer means spending a lot of time working with and learning from other people and writing code they can understand.
      That’s not to say you can’t search for better solutions to problems too, but when you do you need to be able to communicate them so that the next person who comes along and doesn’t understand your code doesn’t just delete it and replace it with something they do.

      Having said all that, the point that academia lags behind what people are using in the real world still stands, which is why it’s important that we find ways to push educators to keep up to date, because their job is very important too.

      1. My comment about what’s being taught being naive was far-fetched. I will agree with you on programming, since programming with anywhere from one other person to several people requires communication.

        However, if you’re undergoing a solo project where you’re the sole programmer, do you need to communicate to others what you’re trying to do? Or is it all up to you?

        My music analogy doesn’t exactly do my point justice either. With music, if you don’t understand a song, you need only note the key and join in the music.

        With programming, I’m sure there’s more to it than improvising. So yes, as I said here, if you’re working with others (in programming), it’s better to be communicative and learn a lot about what it is you’re coding or fixing. I’m not much of a programmer though.

        1. However, if you're undergoing a solo project where you're the sole programmer, do you need to communicate to others what you're trying to do? Or is it all up to you?

          The you who is currently fully inside the code and the you who is two weeks/months/years from now are two different people. Future you really appreciates it when you make things easy to understand.

          1. True, I might try to make things understandable for my future self. But if future me forgets anything, that might just be my future self not keeping up with past me’s project.

            1. Richard says:

              Future You has different knowledge and skills to Now You.

              Future You already thinks Now You is horrifically naive about something, so always try to make sure they don’t think Now You is a fool or unnecessarily obtuse.

              Be nice to Future You.

    4. Echo Tango says:

      The thing with academia is that yes, some things are in the stone-age; Tools, methods, best practices. However, the core of Computer Science is things that don’t change quickly, and some things that will probably never change; Algorithm design, timing analysis, breaking down large problems into sub-problems, etc. All of those things are useful whether you are doing research projects, user-facing industry, programming for personal projects, or programming for anything else.

      1. I don’t think I took this into account.

    5. Rodyle says:

      As an academic, I disagree pretty strongly, for a simple reason: going into academia is not for people learning to program, in the same way that university-level biologists do not learn the true names of all plants and animals.

      University IT, in my experience, is about the pretty deep stuff. Why does an algorithm work? What can we do to improve performance at a really basic level? How does a computer work on the inside, and how and why do protocols for connecting to other devices work?
      On the other side, there’s also the heady stuff: how do you prove the computational complexity of an algorithm? What types of optimisation algorithms exist, and how do you prove that they will reach an optimal solution as time goes to infinity? Why and how do pseudorandom number generators work?

      Those are the kinds of questions that typify an academic IT study. Sure, you will learn some programming along the way, but it’s not inherently focussed on making you a viable candidate on the job market. It’s a scientific study, and it focusses on that part of IT.

      1. The information would still be useful regardless. It really depends on the situation, though. Admittedly, I haven’t touched academia as closely as I would like, so I don’t know how programming there works. But what I do know, I’ve gained through observing the way it works, attending academic conferences, etc., etc.

        It seems like there’s a lot of information in academics. For the most part, there should be a mix of what is being taught. I interned as an archaeologist for cultural resource management, and on top of learning more about the local archaeology in my area, I also had to do practical work: office work such as binding, running through computer files, and some website editing. Practical work plus academic work is best suited to some fields; no academic work would work better for others.

        It really depends. This is just personal experience, but this is how I interpreted what I was doing. Still doing it too.

        1. Rodyle says:

          The information would still be useful regardless. It really depends on the situation though. Admittedly, I haven't touched all of academics as closely as I would like, so I don't know how programming there works.

          Sure, it could be useful, but an academic IT degree prepares you for a career in research and super-specific areas of programming where a deep theoretical background is required, rather than working on the codebase of a ‘generic’ company.

  23. Groboclown says:

    What you're asked to do once you get a job

    My first assignment at my first programming job was literally this. I was given a nasty bit of C code that performed some deep analysis inside a big loop, which included lots and lots of memory allocation and freeing based on weird conditions. People were now asking it to run even bigger problems, which meant it was taking up all the computer’s memory, and crashing after 45 minutes.

    Unfortunately, the original developer had moved on, and no one knew what it was actually supposed to do. This was before the time of unit testing, and all we knew was that the users assured us it was giving the right answers.

    After two weeks of trying to untangle the thing (which ended up just giving the wrong answers), my mentor approved my frustrated solution: terminate the program with an error message after running a large number of loops. The interesting part was that no one complained.

    1. Did you eventually have to code something new?

      If so, they might’ve simply wanted to scrap it and move on.

      1. Groboclown says:

        The code was essentially left to rot. The little bit of it that I remember, it had to do with generating simulations of the space shuttle’s telemetry data with errors. If I had any kind of experience at the time, I would have asked to see the base requirements to really understand what it was supposed to do. As it was, management saw it as an old bug for software that only a few people used for a system that was going to be thrown away in a year.

  24. tmtvl says:

    Funny to hear him compliment the Java naming paradigm, when I think of stuff like “LocalContainerEntityManagerFactoryBeanFactory”…

    On a functional note, let’s talk functional programming.

  25. A failing of college/university education in preparing you for a job is the one-language, start-from-zero mentality. You’re almost never going to start from nothing, and even if you do, you’re going to pull in a bunch of existing libraries which handle the heavy lifting on things like databases and UI, and you’ll have to make them all work together. More than likely, though, those decisions have already been made for you, and you’re going to have to work with them. Business also tends to be a balance between fixing things now and fixing them right. When the storefront suddenly doesn’t work because of a platform update, a dirty hack that gets the job done in an hour, even though it will break the next time someone sneezes, is preferable to something ‘properly’ coded three days from now.

    Reading/understanding existing codebases/libraries/documentation is the skill I use the most, and the second-most used skill is figuring out how things actually work when one or more of those things are nonsense. I learned these from working with C++ and SDL before my education and I would be a much worse developer without that experience.

    Now, on the flipside, what formal education was really good for was filling in gaps that I didn’t know existed and wouldn’t have sought out myself: networking, DataCom, SQL-style databases, and certain data structures (which SHOULD have been taught as: this is how it works, this is when you should use it, and you should probably go find the appropriate language function/library, but that’s another can of worms). All things I’ve ended up using to one degree or another over the years.

    As to OOP, the more language-agnostic I get (I’ve worked seriously with Javascript, Ruby, and C# over the last three years, with additional minor forays into PHP and Python so I could fix and then subsequently eliminate them from the codebase), the more I like objects and the less I like inheritance. Each language has its own variation on inheritance, but objects work like objects pretty much everywhere. Of course, this is me ensconced in my current Javascript/NoSQL workspace where I’m just chucking objects around like a madman; I might change my tune if I had to go back to C++.

  26. Abnaxis says:

    Okay, I’m up to the halfway point (“no one ever writes code this way, it’s an absurd way to write code”), and what he’s describing is literally the API I’ve used for the last five years. And I have no idea how else it would work other than with OOP.

    Like, I understand where the video is coming from, but in applications where you’re not writing code to do a thing, but rather you’re writing tools for other people to do things, OOP is really, really useful. It lets you create an abstraction that laymen can work with to make computers do stuff, even if they aren’t computer people. And while that’s probably a small subset of programs written, I feel it’s an important one.

  27. Rodyle says:

    (Note that I'm just as prone to domain bias as anyone else. Maybe there really is some discipline out there where things like Mammal»Pet»Cat is amazingly useful. But I haven't seen it yet, and so a lot of the features of object-oriented programming come off like someone trying so hard to be clever that they forget they're supposed to be solving problems, not building abstract frameworks for aesthetic reasons.)

    I wrote a little about this earlier, but I figured this earned its own comment. I’m going to take an example from the ASP.NET standard libraries (because I’m quite familiar with them through my work): user controls.

    User controls are a huge collection of various controls, such as simple buttons, dropdown lists, radio buttons, sliders, text boxes etc. etc. All of these controls need a few simple things to function properly: they need to be displayed, they need to respond to user input, they have a selected value of some kind, and so on. Internally, the beauty is that pages do not need to know what a control is: they can just say: “well, you’re a user control. Display yourself. Also, notify me when the user interacts with you”.

    Externally, however, it’s much more amazing. We write supply chain management software, so we need to be able to display various kinds of organisations, such as suppliers and distribution centres (although I still disagree with a distribution centre being an organisation, but from a programming perspective it works). Now, we could use the basic dropdown list every time, but getting it to display a list of suppliers is pretty annoying (especially if you want to do fancy stuff like having an empty option /s). We can therefore write a specialist dropdown list which inherits from the standard ASP dropdown list.
    However, when we write the same thing for distribution centres, the code will be basically identical; only the query which selects the organisations to present is different, and maybe a few labels. We could therefore write an organisation dropdown list, and make supplier and distribution centre dropdown lists which inherit from it.
    We can keep going like this: say we need a specialized dropdown list which only displays suppliers that actually supply the organisation the active user works at. We don’t have to start from scratch; we can just inherit from the supplier dropdown list.

    Although we could also have one very advanced dropdown list able to do all of the above, this option is much cleaner for two reasons. Firstly, the code is much more legible: you don’t have to analyse huge lists of parameters to see the exact function and workings of the thing; a look at its name tells you all you need. More importantly, if you decide you need to add new functionality, good luck if it’s all one dropdown box class. Although adding it to that class may be doable, you’re going to have to go through hell and back to go over the hundreds of thousands of lines of code (if not more) to make sure that all calls to your dropdown list still work properly. And then you also need to show that, for all possible permutations, it works correctly as it did before.
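
    Translated out of ASP.NET into a rough Java sketch (all names invented), the chain of specialisations looks like:

    class DropDownList {
        protected java.util.List<String> options;
        void display() { /* render the options */ }
    }

    class OrganisationDropDown extends DropDownList {
        // Shared query/label logic for every organisation list lives here.
        protected String query() { return "SELECT name FROM organisations"; }
    }

    class SupplierDropDown extends OrganisationDropDown {
        @Override
        protected String query() { return "SELECT name FROM suppliers"; }
    }

    // One level further: only suppliers of the active user's organisation.
    class MySuppliersDropDown extends SupplierDropDown {
        @Override
        protected String query() {
            return "SELECT name FROM suppliers WHERE supplies = :currentOrg";
        }
    }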

  28. Decus says:

    I’d say that university education would be an area outside of your domain as well! In order to teach for a degree at a university you need to hold the highest degree they offer. That is, to teach even the BS-level classes at a university that offers a PhD you need to hold a PhD. Anybody who holds a PhD in computer science either is still currently solving problems out in the real world or has retired from solving problems out in the real world with teaching as something relaxing they do for fun.

    As well, CS degrees are now accredited and have been since 2011/2012. In order to achieve accreditation they are required to tick off a bunch of boxes that relate to the sorts of problems you’d actually be handling out in the real world. This has improved courses across the board at all institutions hungry for accreditation, not because the professors were lazy or bad or uninformed before such a program existed but because it gave them more leverage in saying why they needed certain courses or why certain courses needed to be taught in such a way. After all, it isn’t CS degrees all the way up when dealing with getting courses approved and setting requirements on them–especially in the early courses it wasn’t uncommon for a program to be beholden to a business school.

    Courses that are literally just “here’s some messy, bad code; have fun making it run fast enough to be worth anything, or even run at all” have had a much easier time existing after 2011, usually offered at the junior and senior level. That sort of thing is seen as a major plus by the hiring market, and thus by the accreditation program, and thus by the universities hungry for such things. Professors love to teach them too, since it allows them a good laugh at their students’ expense.

    1. WJS says:

      In order to teach for a degree at a university you need to hold the highest degree they offer. That is, to teach even the BS-level classes at a university that offers a PhD you need to hold a PhD.

      That’s a pretty obtuse rule. At my university, most of our teachers were doctors, with one professor and one who just had an MSc + 30 years or something of industry experience. Protip: don’t call him “Doctor”.

  29. Zak McKracken says:

    Waaait! That’s four dogs, no scorpion, but also more chicken than cats? Who put a picture on the web that has more dogs and chicken than cats? I’m speechless.

  30. Zak McKracken says:

    A place where inheritance makes sense reasonably often: Engineering!

    The main code I work on/with uses it to handle multiple types of similar components. Makes it much easier to test variations on them, too. Just create a new subclass which overrides one or two of the properties with something else but is otherwise equal. Ding!

    That said, yes, there’s always someone who comes up with something to vary which is orthogonal to the stuff I was grouping things by. Although this is often solved by separating a class into two or more component classes, so a widget is then made up of two or three subwidgets, which we can swap around as we like.

    That said: Yeah, doing this in idealistic clean object-oriented code would have probably broken my brains.

    Also, I did in fact have to work with preexisting (and reasonably confusing) code as a student. I did not study Programming, though, but engineering, and our programming lectures were barely helpful. Those CS people delivering them had a weird idea of programming in engineering…

  31. David says:

    This reminds me so much of the article written by ‘Uncle Bob’, The Churn. It talks about how there are very few ‘new’ ideas in software, and how people look at things as if there were only one true style, when in fact each of these software styles is a tool that should be used when appropriate.

    I highly recommend his book ‘Clean Code’ for anybody professionally working with software. It is full of insights that are ‘common sense’ once someone makes you think about them, but that are just not typically taught at school. For example, one thing that resonated with me is making sure your functions are written at the same level of abstraction (explained way better than I can here).

    One thing I found disappointing about college is that early on, when they teach you about C++/Java, they cover the ‘how’ (classes, inheritance, interfaces, etc.) but they don’t teach you the ‘why’: why bother with a class when I could just bang out the logic? It wasn’t until my last year in school that an elective class covered that topic… it seemed like it should have been an immediate follow-up to the original programming classes. So many of the programming classes teach the whole animal -> pet -> cat|dog thing wrong. They focus on the shape of the object but not the responsibility. In different domains you will model the same ‘objects’ differently. How you model cars, tires, and engines will be vastly different in a mechanic’s inventory system than in a game like GTA.

  32. Anachronist says:

    This all sort of reminds me of the classic essay “Real Programmers Don’t use Pascal”, in particular the line “the determined Real Programmer can write FORTRAN programs in any language.”

    Indeed. As someone who started out writing FORTRAN programs on punch cards, then graduated to other languages (various flavors of BASIC, then C, C++, Java, PHP, Javascript, Ruby, etc.) I learned how true that was. I use OOP when it suits me for the problem at hand. Or I write procedurally if that works. Or a combination.

    I remember my first Java project. The first thing I did was create a class called “globals” and put all my global variables in it.

    Shenanigans like that more or less offend Java programmers, in particular one professional software engineer who eventually became my wife, and who later realized that I can code better than some of her coworkers. None of my jobs in my lengthy career has ever required me to code for a living. Coding for me is just a tool, not a profession — a way to solve a problem (often for my job) and communicate with the programmers who work for me.

  33. Rick says:

    I use OOP and inheritance a fair bit in PHP, often for things like storable data objects, but I could probably also do that with dependency injection.

    PHP also has interfaces, which are great for specifying the functions that objects should have without caring about their implementation. This is used all the time for things like “drivers”, where you could load in any cache system and the interface will ensure it has the appropriate methods (load, save, clear, etc). I don’t know if C++ has anything like this.
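
    The same driver idea expressed as a Java sketch (hypothetical names; in C++, a pure-virtual class fills the same role, as Richard describes above):

    // Any cache system can be loaded, as long as it implements this.
    interface CacheDriver {
        String load(String key);
        void save(String key, String value);
        void clear();
    }

    class MemoryCache implements CacheDriver {
        private final java.util.Map<String, String> store = new java.util.HashMap<>();
        public String load(String key)             { return store.get(key); }
        public void save(String key, String value) { store.put(key, value); }
        public void clear()                        { store.clear(); }
    }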

  34. Some nerd says:

    I’m late, but had to say this: I’ve been working in “embedded software” for a little over 10 years and I’ve worked on systems with extreme reliability requirements (satellite), serious performance requirements (40Gbps networking devices), severe resource constraints (SIM cards, utility meters) and important safety requirements (things with Lithium batteries).

    It’s like Shamus said, but worse: even within a field, there’s a wide range of what programming practices are acceptable, desirable, and required.

  35. Ahiya says:

    “With each new graduating class there's a steady flow of knowledge coming from the university to the private sector. But we need to make sure some information flows the other way.”

    This is the root of a whole lot of problems in the IT industry.

    When companies complain about how Comp Sci graduates in the US aren’t skilled enough, I always ask what they’re doing to help improve the local university programs. Have they contacted college career centers?

    The answer usually is silence, because of course they aren’t. They expect university professors to read their minds, or something. Which is why we’re seeing increased use of H1-B visas, why US programmers are having a hard time getting jobs, and why there’s such bitterness in the field.

  36. Neil Roy says:

    I could never stand using C++. I have been, and always will be a C programmer. I see OOP as bloat which causes more confusion and problems than anything else. It seems to me as if C++ programmers are scared to death of pointers. I have had more problems with misplaced semicolons than with pointers. I have yet to see any C++ that I can’t do just as well in C, only with less confusion and bloat… and more speed.
