Bugs Aren’t Features

I upgraded to the latest version of Opera a little while ago, and since the upgrade, it has developed a really charming bug: every so often, it just decides not to have anything further to do with certain web sites. It happens most frequently with ScienceBlogs, because I usually have several SB tabs open, but I’ve seen it with some other frequently-visited sites. It works fine for a while, but after a day or two, hitting “Reload” to, say, update comment counts, does nothing. It says that it’s loading, and maybe even that it’s transferred some trivial number of bytes, but then it just sits there. No other pages from the same domain will load, either.

This only happens in Opera. If I cut and paste the URL into Firefox, the pages in question load immediately. If I quit Opera, and re-start it, the pages load just fine… for a while.

In a similar vein, iTunes keeps asking me to upgrade to the new version, which I haven’t, because I’ve heard that it’s buggy. I’m not sure exactly what the problem is supposed to be, but I’m not downloading an upgrade until the numbers change, just because I know it’ll do something stupid.

These things are fairly typical of the modern relationship with computers. I’ve gotten so used to it that I didn’t really think about how odd this is until this morning, when I read somebody on LiveJournal talking about a video game crash, saying “I should know better than to purchase a game before the first patch is out…” Then it hit me: Has there ever been a bigger con job pulled on consumers than the modern software industry?

I mean, look at what they’ve managed to do: computer users now expect new software to be buggy. That is, we fully expect a newly purchased product, straight out of the box, to be broken. Something that for any other product category would be an outrage is just shrugged off as the normal course of the business. If you bought a toaster that didn’t toast, you’d take it back to the store and demand a refund, or at least a working toaster. But somehow, because computers are involved, if a new video game crashes, we just say “Oh, well, that’s what I get for buying it before the patches come out.”

I have no idea how they pulled this off. I think it probably has to do with their initial market being people who are perfectly happy to have broken products, because it gives them a chance to tinker around and find some hack to avoid the bug. But I bet the same thing happened with cars in the 1920’s, and it didn’t stick there– if you buy a new car, and the radio doesn’t work, you take it back to the dealer and demand that they fix it. Somehow, the “bugs are normal” thing has held on even as software moved from a niche product to a mass consumer market, and even people who are sensible and frugal in other areas just accept that new software will frequently not work right.

I bet Ford would kill to know how that was done.

44 thoughts on “Bugs Aren’t Features”

  1. I’ve ranted about this many times previously. You’ve seen them. The people who actually write software always tell me I’m crazy for expecting software to actually, you know, work as advertised.

    In part though, badly written software keeps me employed since part of my normal administrative tasks is patching old bugs (and of course, introducing new ones in the process).

  2. I think you can attribute most of your frustration to the open architecture model that 98% of computers have adopted. The plug-and-play ability of most peripherals comes at a cost of complexity, and only in the software industry are you required to account for Person A’s “performance modifications” and Person B’s “love for spyware.”

    Although it looks easy in theory and the problems you see are obvious to you, developers face an ever increasing scope of integration combined with decreasing project timelines.

    The reason they are able to “pull it off” in this way is simply because they can host a patch on their server and have you download it. Free and easy. Unlike if you have a bad drive train and have to have it reinstalled…

    It has its pros and cons, but that’s the heart of it.

  3. The reason the car dealer will fix the radio is that if he doesn’t, you’ll go to another dealer — because you both know that the radio is fixable, and there are plenty of dealers.

    You’re assuming that the software problem is fixable — that complex applications, with all the bells and whistles consumers want, can be made to run without bugs on modern operating systems. That’s the only basis on which a “conspiracy theory” of an industry-wide con job will stand: you can’t switch to another program like you would go to another dealer, because they’re all crooked and in collusion; there is no competition in the software industry.

    I don’t buy it. I think that if it were possible to write such complex applications bug-free, someone would have done it and taken over the market.

    It’s true that Microsoft (to pick on the obvious example) has used its dominant position to drive crapware into the marketplace simply because it’s cheaper for them and they can get away with it. (MS Word, I’m looking at you, you turd.) It’s also true that poor design (for instance, bells and whistles that consumers don’t actually want) has much to answer for.

    Nonetheless, I think it’s largely just the case that Software Is Hard.

  4. From what I understand, there’s this Golden Code in Windows, something over a million lines that no one knows the purpose of (the original programmers are gone and apparently documentation was patchy) and that everyone’s afraid to touch, simply because it’s been there all this while.

    But that’s more of an aberration than anything else (I hope). Even open source is not free of bugs. I suppose the nuclear option would be to create an exception for software with regards to the implied term of merchantability and not allow them to use the industry-wide “system” of buggy software as a defence for their own buggy software. But I think that would kill off the industry overnight.

  5. Well, it’s inseparable from the fact that they’ve convinced us that the computer that does everything we want it to do this year will be hopelessly outdated next year. The whole computer industry has tapped into some primal “more, more, more” part of our monkey brain, and we are clearly willing to suffer any number of inconveniences just to get our hands on “OSX 10.9 Ocelot”, or whatever carrot they want to dangle in front of us next.

  6. I saw the same wailing and gnashing of teeth over iTunes 7.0, so like you I avoided it. I held out a couple of weeks on suggestions to download 7.0.1, too, but finally gave in last night and have had no problems, although admittedly all I’ve been doing is listening to preexisting playlists. But apparently the 7.0 problems included simply awful sound quality, which this version doesn’t have, so I’m guessing it’s finally safe to go into the water.

  7. I believe Ford does know how it’s done. The software industry is in the equivalent of the tail fin era. Tail fins and other style tics made Ford, GM and Chrysler a lot of money until the Japanese figured out how to build reliable, efficient cars.

  8. I think it has something to do with the innate “magic” quality of computers. With a car the whole contraption is pretty simple and most people can describe how the car works to a decent level of detail. Everyone can describe exactly how a toaster works. Then consider what portion of the population can program (never mind the hardware part). Thus, because most people don’t understand what is going on, they don’t get the “I could do this better” anger, and bugs slide.

    The other thing to consider is operating programs outside their design parameter space. Chad didn’t specify how long it took to get to this state of non-working. It is possible he is simply using the program in a way that the designers never thought to test (kind of like how my mother has told me several times, “I would have never thought to tell you not to do that”).

    Also consider the model of Fedora. They have their commercial core, which is Red Hat, and then fund the Fedora project as a huge (and mostly free) beta testing and coding community.

  9. This is a perennial topic on comp.risks. Many folks take reliability far more seriously than games developers. Look at the process that goes into software used by NASA missions or used in medical devices. Some places can point to any line of code and show you pages of documentation and testing that went into every change ever applied in the history of that line. That level of care is orders of magnitude more expensive and time-consuming than what goes into typical end user commercial software, where the level of quality is determined by ‘whatever the market will bear.’

    Way back in 1993, thanks to a three-month schedule delay in shipping the original Apple PowerPC hardware, Graphing Calculator 1.0 had the luxury of four months of QA, during which a colleague and I added no features and did an exhaustive code review. Combine that with being the only substantial PowerPC-native application, so that everyone with prototype hardware played with it a lot, and the result was that product having a more thorough QA than anything I had ever worked on before or since. It also helped that we started with a mature ten-year-old code base which had been heavily tested while shipping for years. Add to that a complete lack of any management or marketing pressure on features, and we were able to focus solely on stability for months.

    As a result, for ten years Apple technical support would tell customers experiencing unexplained system problems to run the Graphing Calculator Demo mode overnight, and if it crashed, they classified that as a *hardware* failure. I like to think of that as the theoretical limit of software robustness.

    Sadly, it was a unique and irreproducible combination of circumstance which allowed so much effort to be focused on quality. Releases after 1.0 were not nearly so robust.

  10. There’s no particular mystery to creating rock-solid, bug-free software; it’s just too expensive to be generally practical in today’s world. If Opera were built and tested to the same standard as a Boeing 757, the version you are running would be 10 years old and cost thousands of dollars, but you could be sure it wouldn’t do weird stuff. In fact, software used to work just that way back when mainframes ruled the world.

    The reason people are willing to accept buggy browsers is the cost/benefit ratio – if it’s free, we’re pretty likely to put up with a few flaws. For a couple of hundred bucks we’ll gripe about the flaws in Word but we’re not going to rummage around in the basement for a typewriter. For the price of a new car, we’ll expect the vendor to fix it.

  11. Well, the other thing is that when you buy a toaster you can reasonably assume that the user will put it in a kitchen, plug it into a wall, and basically just use it to make toast. You don’t have to worry that the toaster is incompatible with the fridge or has a conflict with the pantry. You can design a toaster and test basically every scenario you can envision before releasing it. With software, you have no idea what 3/4 of your users are going to do with your application, what system they’ll try to run it on, what else they have, what version of the other crap they have, etc., etc.

    The reason so many bugs show up in release software is that there is no possible way to recreate the majority of the environments that your product will see once it hits market — or even to guess what those environments will look like, beyond a few assumptions about the OS. (But, even then, you can’t count on every person’s OS configuration being exactly the same…)

    I say this as a person who works at a software company and has regular contact with the development teams. We test absolutely every scenario we can think of before we sign off on a build for production, but what kills you is always the scenario you didn’t think of. Or sometimes it’s the scenario you can’t recreate but thought you supported. And, yes, sometimes you release something with a known problem, but that’s always a judgement call. Most of the time, you get it right and the only bugs that get out there are pretty obscure and you quietly fix them a little bit later. But, sometimes, there’s a killer hiding that you just couldn’t find. Sometimes, your calculation is off and that obscure bug turns out to be not so obscure.

    But just compare your Opera bug to, say, Firestone’s exploding tires. The same process was behind both of those, and I assure you that Opera’s and Firestone’s engineers took their jobs just as seriously. But, all told, which would you prefer?

  12. You can design a toaster and test basically every scenario you can envision before releasing it. With software, you have no idea what 3/4 of your users are going to do with your application, what system they’ll try to run it on, what else they have, what version of the other crap they have, etc., etc.

    See, I don’t really buy that.
    It’s a Web browser, for God’s sake– people aren’t going to use it to make toast, they’re going to use it to look at sites on the Internet.

    I agree that you don’t know all the widgets that people will have on their computers, but it really shouldn’t matter for 90% of it. The fact that, for example, you can have a conflict between a printer driver and a sophisticated piece of mathematical modelling software (as I had with a previous computer) strikes me as indicative of bone-deep stupidity in the way the whole industry is set up– there’s just no sensible reason why having a HP DeskJet set as the default printer should prevent me from even being able to start MatLab.

    I could understand it if, say, MatLab and Mathematica couldn’t run at the same time, because they’re similar programs doing similar things. But MatLab and a printer driver?

  13. Michael in #10 hits the nail on the head. The added cost of finding and removing all the bugs (or writing the program carefully enough to not introduce them) is far higher than the added value. Sure, you want Opera to work, but not enough to pay the extra $100 it might take. Or not enough to give up CSS rendering in exchange for having everything else work.

    This is something that pains me very much as a software engineer, because I want to write stuff that works, not cobble together a web site that might last a month without crashing. But people want a website that works now rather than one that will be guaranteed to work in six months, so that’s what they get.

  14. I have to disagree with Michael’s post above.

    I have deployed countless workstations with AutoCAD in a civil engineering environment and although the software costs upwards of 3,000 dollars for a single license, it has the same annoying bugs and inexplicable crashes as just about any other piece of software on the market. My company is small and we spend nearly $100,000/year on just one piece of software. I can only imagine what the national/global engineering firms spend….

    The cost of the software seems to have no effect on quality. To my mind the single biggest factor is the vendor’s marketing strategy. In the case of AutoCAD we call it the “microcrap mentality”. AutoCAD pushes out new product every year with new and exciting “features” and pushes heavily on their customers to upgrade, upgrade, upgrade. Meanwhile, I have to load patches at a near constant rate, and then load patches to fix the patches.

    In opposition, the other civil design software we use, MicroStation, costs nearly as much. The difference is that Bentley typically takes two or more years to develop the next release, and this means that a fairly decent (nothing is perfect) application goes to market and stays there essentially unchanged for a good long time. They also seem to do a good job of beta testing and actually being responsive to their customers’ needs. Heck, we still use MicroStation 95 (as in 1995) for much of our design work. That is 11-year-old software that is still powerful, functional, and reliable.

    Sure, I still have a few gripes about the MicroStation product, but in the end it is a far sight superior product.

    Being superior isn’t enough though. Thanks to AutoCAD’s aggressive marketing techniques, they are the dominant force in the market. What do you do when all of your clients demand that product be turned out in AutoCAD because it is what people know about? To stay in business, you do what the market and the paying customers tell you to do. And that means buggy software, constant upgrades, and patches galore.

    Just like Microsoft….

  15. I agree that you don’t know all the widgets that people will have on their computers, but it really shouldn’t matter for 90% of it. The fact that, for example, you can have a conflict between a printer driver and a sophisticated piece of mathematical modelling software (as I had with a previous computer) strikes me as indicative of bone-deep stupidity in the way the whole industry is set up– there’s just no sensible reason why having a HP DeskJet set as the default printer should prevent me from even being able to start MatLab.

    Just like it made no sense for replacing the copper tube to get you two and a half orders of magnitude better vacuum?

    You spend a lot of time putting together complex custom experimental physics hardware that you can keep running long enough to get your results – think of how much more time and money it would take if you had to declare your apparatus “done” at some point, and then be locked out of the lab while the experiments were conducted by incoming freshmen with whom you could only communicate via email. Now you need safety interlocks to make sure that they start up the pumps in the right order, and a big list of all the funny noises they might hear (or stop hearing) and what they have to do in each case, and you probably need to buy expensive quick-disconnects because they might not know how to torque down the Swagelok fittings properly…

    Just like it made no sense for replacing the copper tube to get you two and a half orders of magnitude better vacuum?

    See, from where I sit, the analogy is more like replacing the copper tube on my vacuum system led to better performance of the laser system on a different table twenty feet away. It doesn’t surprise me when changing a component of the vacuum system improves the performance of the vacuum system– that’s why I told the student to try it, after all.

    If changing a component on one system improves the performance of a completely different system, though, somebody has done something deeply stupid somewhere along the line.

  17. … although the software costs upwards of 3,000 dollars for a single license, it has the same annoying bugs and inexplicable crashes as just about any other piece of software on the market.

    A quick scan of eBay indicates I could buy a very nice second-hand drafting machine for $100, but I’m pretty confident Blaine isn’t seriously considering abandoning AutoCAD for that tried-and-true technology. Even with bugs and at $3000 per seat, design software is the better solution when all the pros and cons are considered. At $100,000 per seat the story might be different, so the decision is a compromise for both the software developer and the customer. It’s all a matter of what the developer thinks the market will accept in balancing the introduction of new features against the purchase price, development costs, and the risk of bugs. In today’s competitive environment most vendors are going to get the new version out quickly and worry about the bugs later, because if they don’t the customers will jump to the competition.

    The constant introduction of new features is an important tool for generating revenue, and software developers need revenue to stay in business. I know little about engineering software, but if customers are sticking with a 10-year-old product, to my mind it doesn’t bode well for that developer’s future.

  18. Consider that you might not have a very accurate appreciation of the complexity of computer software.

  19. If changing a component on one system improves the performance of a completely different system, though, somebody has done something deeply stupid somewhere along the line.

    What’s “completely different”? After all, Matlab wouldn’t be very useful if you couldn’t print with it. And while most printing from a computer is just sending streams of text across and letting the printer take care of it, Matlab needs to print complicated graphics quite precisely. There are ways to print complicated graphics quite precisely, but the standard is (was?) owned by Adobe, who charged money for it, so a bunch of people (including HP) wrote knockoffs, which probably didn’t behave exactly as Matlab expected.

  20. that complex applications, with all the bells and whistles consumers want

    Okay, there’s some of the problem right there. Given the complexity of programs and the interactions they have to accommodate, adding anything must be difficult. But if they don’t keep upgrading and adding bells and whistles they say we want, what would they sell next year? And of course, the computer industry is so driven by the theory “first out wins” that they’re in a big damn hurry to get it out, so….

    An interesting and instructive anecdote. I can’t tell you who I got it from, but it was someone who would know. It appears that Microsoft did a research survey some years back to find out what functions users would most like to have added to Word. Five of the top 10 were, in fact, already present in Word, but it was such a mess people couldn’t find them.

    MKK

  21. The reason software is bad is that we keep buying it when it is bad. At least to first order. I don’t think it is a conspiracy as much as it is very difficult to write software well, and it’s difficult to understand just how complicated and occult software development is.

    I’m no software engineer, but I wonder if there is some metric that tells how complex software is, how coupled different parts are. If you replaced the copper tube 20 feet away from another laser, but the instruments were coupled through the floor, so that vibrations carried between them, then replacing the tube might well have an effect. The key must be isolation of the parts. The horror stories I see about the innards of Windows mean that this principle must be being violated gleefully.

    I program a little for data collection, and I have slowly learned to appreciate the way ‘object oriented’ programming decouples bits of software. It took a long time to grok what it was all about, but then, I’m not a pro. The decoupling matters a lot, in my limited experience (there’s a small sketch of what I mean at the end of this comment).

    The problem is huge with legacy code. Like evolving organisms, you don’t get to go back and rationalize the earlier layers, using all the experience you accrue over the years.

    Not to say that software never gets released when it is just not ready. Anybody remember the readme file in Mac System 7.0? “Rock Solid”, indeed. It’s irritating.

    It’s tempting to use all sorts of metaphors about other sorts of engineering, but one difference is that in software, you cannot see what you are doing much of the time. You can’t watch it do everything, because the set of possible states of the system with many things changing at once is combinatorially huge. The industry still has a great deal to learn.
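
    Just to make the decoupling point concrete, here is a minimal Python sketch (the names are made up purely for illustration): the analysis routine only knows about an abstract instrument interface, so swapping the real hardware for a simulator, or fixing a bug in one driver, can’t ripple into the rest of the program.

        import random
        from abc import ABC, abstractmethod

        class Instrument(ABC):
            """The only thing the analysis code is allowed to know about."""
            @abstractmethod
            def read_voltage(self) -> float:
                ...

        class SimulatedInstrument(Instrument):
            """Stand-in for real hardware; returns noisy fake readings."""
            def read_voltage(self) -> float:
                return 1.0 + random.gauss(0.0, 0.01)

        def average_voltage(source: Instrument, samples: int = 100) -> float:
            """Depends only on the interface, never on a particular device."""
            return sum(source.read_voltage() for _ in range(samples)) / samples

        print(average_voltage(SimulatedInstrument()))

    Replace SimulatedInstrument with a driver for the real device and nothing else has to change; that isolation is exactly what keeps a change in one corner of a big program from breaking something twenty feet away.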

  22. If changing a component on one system improves the performance of a completely different system, though, somebody has done something deeply stupid somewhere along the line.

    Or else your mental model of what “completely different” is is wrong.

    The main thing, though, is just the cost. I can think of at least a dozen bugs that I’m going to be creating with the next release of software I’m putting out at work. But if I tell the users, “Look, there’s a dozen bugs we know of, and probably another few dozen we don’t. The effects of the known bugs are all pretty minor — they don’t come up in the normal case, and you can always work around them to do what you need to do; it’s just that you’re sometimes going to get error pages if you try to do things in an unexpected order. We can either release the software next week as planned, or we can delay it for a month while we spend $40K to fix the bugs. Even then, though, while we’ll have taken care of the bugs we know about, we’ll still be at the mercy of the bugs we haven’t detected yet, because we don’t have the time or resources for the kind of intensive testing we’d need to smoke those out. If you want us to guarantee you that there are no bugs in it, we can spend six months and $250K getting that absolutely 100% nailed down,” I can guarantee the answer that I get every single time.

    Now obviously, if I were working in a bank or a nuclear reactor, I’d get different answers. But I’m not, and game developers aren’t, so.

  23. It appears that Microsoft did a research survey some years back to find out what functions users would most like to have added to Word. Five of the top 10 were, in fact, already present in Word, but it was such a mess people couldn’t find them.

    Check out Jensen Harris’ blog for much, much more along those lines. (He’s one of the people redesigning the UI for Office for just this reason.)

  24. I work on a large engineering code, perhaps 3M lines of source code. We have shared-memory versions (like the two processors on a dual core), MPP versions (running in parallel across many computers networked together), and of course different precisions. Add to that the fact that we support dozens of different combinations of hardware and software, and, well, you guessed it: there will always be some bugs, many of which only show up on one specialized system. Added to the complexity of our software is the fact that the vendors’ software, compilers, and libraries also contain bugs, and perfection is clearly unobtainable.
    Then you add in certain big-time (800-pound) industrial customers pushing for difficult new features, and, well, you guessed it: intense pressure to quickly add features.
    So yes, as long as the technology is in a state of rapid change, there will always be bugs. At least our customers have many similar versions (builds) of our code available – if one fails, perhaps the next one will work….

  25. I’ve been working on computer software for nearly 40 years now, and it never fails to amaze me. A modern application of modest size has hundreds of thousands of moving parts. Something the size of iTunes easily has thousands of modules and each of these might have ten to a hundred statements. This has to include several asynchronous processes to manage simultaneous music playing, frequency spectrum display, user interface, background encoding and the internet functions.

    iTunes itself is just the tip of the iceberg. It comprises from twenty to thirty percent of all the code that has to work in order for iTunes to work. There is all the user interface library code, the graphics display, the network formatting for the music store, the graphics and video decompressors, the music codecs, the interface with the sound system and so on. There are probably millions of pieces, and they each have to be hand crafted to work properly.

    Imagine a 757 with ten times as many parts as a real one, but with no two parts similar enough for multiple, let alone mass, production. If two pieces of code were as similar as two rivets or two airplane seats, they’d have been implemented as one subroutine. I’m often amazed that any code ever works at all.

    I haven’t even mentioned the operating system kernel, the firmware, the code in the disk drive controllers, in the power supply, the batteries, the keyboard, the routers, the cable modem, and all the device drivers for interfacing with these things.

    How does anything ever work? Modularity helps. Microsoft abandoned modularity so they could claim that their web browser and media player were intrinsically merged with the operating system. This didn’t play well in court, and they’ve been paying for it big time.

    Is this a crummy state of affairs? It sure is. At least there is usually a fix in the works, and the internet has sure made it easier to get updated.

  26. I guess you’ve never written a single line of code in your life, which is fine. But you really shouldn’t be posting about things you have no experience of.
    Let’s look at your toaster example; it’s perfectly valid, and yes, I think everyone would take it back to the shop as you said.

    Now if you took it back to the shop and complained that the toaster didn’t work while you attempted to use it upside down to warm your slippers, the guy in the shop would probably laugh and tell you to get out.
    What I am trying to say is that most bugs are discovered when software users attempt to use the software in a way which was never anticipated by the developer and thus never tested.

    Now if a bug happens in normal usage while performing an intended action in the correct way then that is a major flaw in the development cycle and the software shouldn’t have been released.

    As developers struggle to make more fool-proof software the world keeps making better fools.

  27. I am a little amazed that people think that the car industry does not deliver banana products on the market. They do!
    Take Citroen, for example (but other brands work similarly).
    Citroen introduced the XM in 1989; it was plagued with problems until the second type was introduced in 1995. The same happened with the C5 introduced in 2000, until the type II was introduced in 2004.
    The golden rule with buying cars is: never buy a newly introduced model; wait for the updated/patched model that arrives several years later. Most of the time the updated version is also restyled and looks better.

  28. Chad, you obviously know nothing about software engineering and development. Comparing a software system to a toaster is completely out in left field. The other comment was correct in saying that a toaster can be expected to run in one environment, doing one thing. A toaster is also a very simple piece of equipment. It makes toast. It’s also a mechanical device that was engineered to perfection decades ago. Software systems are made up of thousands of lines of code, all of which have to work together perfectly for the system as a whole to perform. With hundreds of people working on a complex system, it is impossible to get all the bugs out, unless you’re talking about NASA and their life-critical systems and unlimited government budget. If Opera was totally and completely bug-free, it would take ten years to release and would cost ten thousand dollars a copy. You are “a physicist on tenure track at a small liberal arts college.” Please do not pretend to know what you’re talking about when it comes to software engineering. You, sir, are wrong.

  29. Hehe. I just spent the night redoing my thesis talk for a 50 minute seminar. Well, actually, trying to figure out why Internet Explorer, Mathplayer, and S5 have a certain couple of surprises when combined.

    Chad, I wouldn’t claim to understand what everything in a large piece of software does. I’ll gripe plenty, but I’ll also be willing to understand that there’s scads of stuff I don’t understand, just as there is with any complex technological creation. Hell, my thesis code is a few thousand lines of code, vetted, stress-tested, tweaked, constantly debugged (you can’t even imagine how voluminous the diagnostic output is when I make everything maximally verbose). And I can’t decipher one stinking plotting bug.

  30. I just downloaded iTunes this morning. I would NOT recommend doing so. It corrupted my iPod and wiped out all my songs. It is extremely laggy and almost unusable.

  31. For me, software being dysfunctional comes from two related issues: a) it is way too expensive to make it work properly, and b) programmers see coding as an art form and want creative freedom. Microsoft dominates the largest software market because they know how to churn out cheap software — hire smart but poorly trained coders, add lots of testing and then whip them until the code is good enough. As long as consumers accept that model, companies will continue to brute-force their implementations and ship ’em before they are even half done.

  32. Tom, there was something posted to the Apple forums about reverting without losing anything; I’ll look for the print-out this evening, and post the URL here (Google failed to find it).

  33. another thing you neglect to mention is that many bugs in programs are a direct result of problems within the OS or poorly written DLLs.

    when you put your toaster on the countertop, one would assume the countertop is reasonably level. how well would your toaster work if the counter was upside down? that’s right, it wouldn’t. and you can’t expect it to, the toaster was made with the assumption your counter is level.

    the same applies with your OS/operating environment. If you have some bad patch applied, corrupt (or poorly written) DLLs, or other system issues, it may cause the program to fail even though the program is written properly and bug-free.

  34. Paul, the trick was that Microsoft got the clout to make people use their stuff (for all practical purposes).

    Anybody remember switching from WordPerfect to Word? It stank. From what I heard, law firms stuck with WP for years, because they do big-time word processing, but even they had to switch.

  35. This old argument is extremely flawed by nature of trying to compare software to manufactured items. Software isn’t manufactured.

    It would be more appropriate to compare software to a book. Even the best books can have typographical errors. More to the point, they can have inconsistencies. You might not even like the story. A lot of different people want different things from the book.

    There are probably plenty of books that would have benefited from a bit more time, a bit more polish, but there is a point of diminishing returns, and at some point you risk destroying the core essence with too much tinkering.

    Rewriting a great book isn’t guaranteed to make it better. Yet that’s what developers are expected to do all the time.

    The other part of the argument about adding more and more features or “bloatware” has some truth, but it is an overused generality. Almost all features are in software because someone asked for them, and actually uses them.

    That someone may not be you, but for one or two features, you may be that someone. If you didn’t really want new features, then why do you even bother upgrading at all?

    And calling software a scam because of the bugs, is like calling medicine a scam because people still die. Regardless of imperfections the value is there in enormous quantity.

  36. “And calling software a scam because of the bugs, is like calling medicine a scam because people still die.”

    this is by far the best quote on the topic i have read in the past years.
    kudos to ryan!

  37. Actual computer code is a bunch of bytes or “words”. The word “word” here doesn’t mean the same as in everyday speech.
    1101100011101000 is a 16-bit word.
    To be able to communicate with computers, the first step was the creation of Assembler languages:
    Mov AX FFFF
    moves the value FFFF (sixteen one-bits) into the AX register of the processor.
    Some programmers still use some Assembler. It requires complete understanding of every component of the computer, but it results in very powerful, very fast code. And almost bug-free. Buggy Assembler almost certainly leads to instant crashes.
    To make life easier for programmers, the next step was higher-level programming languages. Interpreters and compilers try to make it look like computers can understand human language. We pay for this convenience in the form of performance, ever-increasing hardware hunger and… bugs.
    Object-oriented programming, dynamic link libraries, scripting and so on all come with their own bugs (there’s a toy illustration of this layering at the end of this comment).

    I once heard a story about the early days of automobiles: a client regularly complained about his car, until the garage owner told him to construct his own perfect car. I believe the client’s name was either Rolls or Royce.

    The computer equivalent is called open source. I hope one day it’ll result in an operating system as free as Linux, as user-friendly as some future version of Windows, as bug-free as a Rolls Royce, so powerful that it might run ray tracing software on my good old 1983 IBM PC, and so reliable that we could banish the blue screen of death to the museum.
    I have a dream…
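
    Here is the toy illustration promised above: a deliberately trivial Python sketch, nothing authoritative. One high-level line hides the loop-and-register bookkeeping an Assembler programmer would spell out by hand, which is exactly where both the convenience and the extra room for bugs come from.

        # One line of a high-level language...
        values = [3, 1, 4, 1, 5, 9]
        total = sum(values)        # the programmer writes a single word: "sum"

        # ...stands in for the bookkeeping an Assembler programmer would
        # spell out by hand: load a counter, fetch each element from memory,
        # add it into an accumulator register, test, and branch back.
        # Written out explicitly, the same work looks like this:
        total_by_hand = 0
        for v in values:           # fetch the next element
            total_by_hand += v     # add it into the "accumulator"

        assert total == total_by_hand
        print(total)               # 23 either way

    Every layer that lets us skip that bookkeeping (interpreter, compiler, runtime library) is also another place for bugs to hide.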

  38. Comparing software to a book is not particularly useful. A book is a generalized container for holding information. It functions perfectly, unless the pages get ripped out or stuck together. Software is a tool, and a big computer system is far closer to a skyscraper in its origins and behavior. An architect designs the tower, engineers verify the design, and then a construction crew builds it. 1.0 is perfect, shiny and new. Over time, as people use the building, they start to renovate the various parts to better suit their specific needs. It gets messier. Software blurs the line between initial version and maintenance, but the comparison is fairly accurate. The biggest difference is that office towers have far fewer problems than software. Because they are material, and more obvious when they are flawed, people tolerate fewer defects from them.

    Modern software has a lot of problems, and the real issue here is whether or not this has to be the case. In medicine, the doctor may not have the knowledge to prevent the patient from dying. Unless it is incompetence, the knowledge base restricts the outcome. Do we not understand how to write better software programs? Is it an issue of knowledge, training or just economics that limits our industry?

  39. Anybody who hasn’t read Frederick Brooks’ No Silver Bullet should read it now.

    Medicine is more similar to software development than some might imagine, which is why organizations like IHI exist. Since we all have to deal with the healthcare system occasionally I won’t dwell on the gory details, but see this post on the Google Blog by Adam Bosworth, and his recent keynote address delivered at the Markle Foundation’s PHR conference in December.
