64

I am working in a .Net, C# shop and I have a coworker that keeps insisting that we should use giant Switch statements in our code with lots of "Cases" rather than more object oriented approaches. His argument consistently goes back to the fact that a Switch statement compiles to a "cpu jump table" and is therefore the fastest option (even though in other things our team is told that we don't care about speed).

I honestly don't have an argument against this...because I don't know what the heck he's talking about.
Is he right?
Is he just talking out his ass?
Just trying to learn here.

Adam Lear
  • 32,039
  • 7
    You can verify if he's right by using something like .NET Reflector to look at the assembly code and look for the "cpu jump table". – FrustratedWithFormsDesigner May 25 '11 at 15:21
  • 6
    "Switch statement compiles to a "cpu jump table" So does worst-case method dispatching with all pure-virtual functions. None virtual functions are simply linked in directly. Have you dumped any code to compare? – S.Lott May 25 '11 at 15:26
  • 74
    Code should be written for PEOPLE not for machines, otherwise we would just do everything in assembly. – maple_shaft May 25 '11 at 16:14
  • 9
    If he's that much of a noodge, quote Knuth to him : "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil." – DaveE May 25 '11 at 16:50
  • See here: http://stackoverflow.com/questions/234458/do-polymorphism-or-conditionals-promote-better-design/234491#234491 Also using polymorphic objects is just as quick as a switch. As it is optimized into a single function call based on the type of object. – Martin York May 25 '11 at 17:01
  • Can someone make this Community Wiki considering I would not know what is the "correct" answer anyway? – James P. Wright May 25 '11 at 17:08
  • @Pselus CW is for posts that'd benefit from collaborative editing. Even CW posts can have a "correct" answer. You can accept the answer that provides the most compelling arguments that change your coworker's mind? :) – Adam Lear May 25 '11 at 18:24
  • I would ask your coworker for the following pieces of evidence: 1) switch statements create a "cpu jump table" and 2) Object-oriented techniques do not. Without anything to back up those claims, he cannot claim to be right. Conversely, find some documentation to prove your point as well. – Jesse C. Slicer May 25 '11 at 18:36
  • 13
    Maintainability. Any other questions with one word answers I can help you with? – Matt Ellen May 25 '11 at 19:34
  • 1
    Ask him to estimate the gain in performance for your application. – Kevin Hsu May 25 '11 at 23:51
  • How many lines is the entire switch block? (i.e. average number of lines per case, times number of cases) If more than 1.5 screens (45 lines?) first consider breaking into functions (if there's one or two cases which is big), then evaluate the benefits of OOP. Also, if the exact same switch (theTypeCode) happens twice or more, interface + polymorphism will be easier to maintain. (even if it happens only once, delegates and anonymous functions will sometimes be more maintainable than switch). I think it's better to just try different ways and compare the two versions of code. – rwong May 26 '11 at 06:12
  • 2
    a Switch statement compiles to a "cpu jump table" You mean like a VTable that is created to select which function is called in a virtual override? – Michael Brown Jul 10 '12 at 22:09
  • 1
    Now I understand what was meant by "Premature optimization is the root of all evil" and "We write code for people not the compiler" – Songo Aug 29 '12 at 08:04
  • @FrustratedWithFormsDesigner: .NET Reflector doesn't show assembly, it shows MSIL. There's a whole optimizing compile stage between the two. – Ben Voigt Aug 12 '14 at 18:38
his advice only applies in very special cases where this type of optimization is required. you can replace gigantic/often-used loops in long running processes with case statements, and gain SERIOUS performance. I've only come across this situation 3 times in 22 years. otherwise maintainability>performance – DaFi4 Sep 29 '16 at 12:55
  • Turn it upside down - require the tests to be written before the code. Then see what design comes out. – Thorbjørn Ravn Andersen Oct 23 '21 at 15:56

17 Answers

52

He is probably an old C hacker and yes, he is talking out of his ass. .Net is not C++; the .Net compiler keeps on getting better, and most clever hacks are counter-productive, if not today then in the next .Net version.

Small functions are preferable because .Net JIT-compiles each function once, just before it is first used. So if some cases never get hit during the lifetime of a program, no cost is incurred in JIT-compiling them. Anyhow, if speed is not an issue, there should be no optimizations. Write for the programmer first, for the compiler second.

Your co-worker will not be easily convinced, so I would prove empirically that better-organized code is actually faster. I would pick one of his worst examples, rewrite it in a better way, and then make sure that the new code is faster. Cherry-pick if you must. Then run it a few million times, profile, and show him. That ought to teach him well.
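For concreteness, this is the kind of rewrite I mean. The Shape types here are invented for illustration, not taken from anyone's actual code:

```csharp
using System;

// The switch version: every kind of shape is a case in one method.
enum ShapeKind { Circle, Square }

static class SwitchVersion
{
    public static double Area(ShapeKind kind, double size)
    {
        switch (kind)
        {
            case ShapeKind.Circle: return Math.PI * size * size;
            case ShapeKind.Square: return size * size;
            default: throw new ArgumentOutOfRangeException(nameof(kind));
        }
    }
}

// The polymorphic version: each case is its own small class.
// Each Area method is tiny, so the JIT compiles it cheaply and
// only when that shape is actually used.
interface IShape
{
    double Area();
}

sealed class Circle : IShape
{
    private readonly double _radius;
    public Circle(double radius) { _radius = radius; }
    public double Area() => Math.PI * _radius * _radius;
}

sealed class Square : IShape
{
    private readonly double _side;
    public Square(double side) { _side = side; }
    public double Area() => _side * _side;
}
```

Adding a new shape means adding one small class, instead of editing every switch that dispatches on the type code.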

EDIT

Bill Wagner wrote:

Item 11: Understand the Attraction of Small Functions (Effective C#, Second Edition)

Remember that translating your C# code into machine-executable code is a two-step process. The C# compiler generates IL that gets delivered in assemblies. The JIT compiler generates machine code for each method (or group of methods, when inlining is involved), as needed. Small functions make it much easier for the JIT compiler to amortize that cost. Small functions are also more likely to be candidates for inlining. It’s not just smallness: Simpler control flow matters just as much. Fewer control branches inside functions make it easier for the JIT compiler to enregister variables. It’s not just good practice to write clearer code; it’s how you create more efficient code at runtime.

EDIT2:

So ... apparently a switch statement is faster and better than a bunch of if/else statements, because its lookup is logarithmic while an if/else chain is checked linearly. http://sequence-points.blogspot.com/2007/10/why-is-switch-statement-faster-than-if.html

Well, my favorite approach to replacing a huge switch statement is a dictionary (or sometimes even an array, if I am switching on enums or small ints) that maps values to the functions called in response to them. Doing so forces one to remove a lot of nasty shared spaghetti state, but that is a good thing. A large switch statement is usually a maintenance nightmare. So ... with arrays and dictionaries, the lookup takes constant time, and little extra memory is wasted.
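A minimal sketch of that dictionary-of-delegates approach (the command names and handlers are made up purely for illustration):

```csharp
using System;
using System.Collections.Generic;

static class CommandDispatcher
{
    // Map command names to handlers instead of switching on them.
    static readonly Dictionary<string, Func<int, int>> Handlers =
        new Dictionary<string, Func<int, int>>
        {
            ["double"] = x => x * 2,
            ["square"] = x => x * x,
            ["negate"] = x => -x,
        };

    public static int Dispatch(string command, int argument)
    {
        if (!Handlers.TryGetValue(command, out var handler))
            throw new ArgumentException($"Unknown command: {command}");
        return handler(argument); // constant-time lookup, no switch
    }
}
```

`CommandDispatcher.Dispatch("square", 5)` returns 25; adding a new command is one new dictionary entry, with no shared state threaded through a switch body.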

I am still not convinced that the switch statement is better.

Job
  • 6,459
  • 51
    Don't worry about proving it faster. This is premature optimization. The millisecond you might save is nothing compared to that index you forgot to add to the database that costs you 200ms. You're fighting the wrong battle. – Rein Henrichs May 25 '11 at 15:47
  • 1
    I know that, but humans are no robot. Many cannot be told unless they already know. So, they need a dramatic experience to change their habits. – Job May 25 '11 at 15:56
  • 30
    @Job what if he's actually right? The point isn't that he's wrong, the point is that he's right and it doesn't matter. – Rein Henrichs May 25 '11 at 16:00
  • Most software is complex and he cannot be right 100% of the time. There must be cases when he is also wrong - that should make a thinking individual stop preaching, even if they are obsessed with speed. But yes convincing someone that it does not matter would be nice too. Being a relatively junior guy surrounded by seasoned C++ hackers switching to .Net I often see how my arguments are lost on others, so for me to have a chance to prove my point, I have to stick to the guns that they chose and just out-do them. – Job May 25 '11 at 16:06
  • 3
    Even if he were right about 100% of the cases he is still wasting our time. – Jeremy May 25 '11 at 18:32
  • 7
    I want to gouge my eyes out trying to read that page you linked. – AttackingHobo May 25 '11 at 19:25
  • First of all, I agree with Rein that it doesn't matter. Second of all, I think the switch statement is slower than OO, because OO uses a constant time virtual method lookup, compared to the logarithmic switch. But most of all it doesn't matter. – Jaap May 25 '11 at 20:34
  • 1
    The dictionary lookup will be nowhere near as quick as indexing into an array. The latter just involves looking at an offset from a memory address; the former involves producing a hash of the requested key, then inspecting the implied offset from the starting memory address of the dictionary, and then (depending on the implementation of the dictionary) potentially looking up the item elsewhere due to hash collisions. http://stackoverflow.com/questions/908050/optimizing-lookups-dictionary-key-lookups-vs-array-index-lookups However, the difference is too small to worry about. – Ant May 26 '11 at 08:45
  • 2
    Re: "EDIT2" First, the question was about OO vs. switch - bunch of it/else doesn't feature. Second, switch using a jump table would be O(1) i.e. constant & if/else O(n) i.e. linear - neither is logarithmic. Third, switch (if appropriate in the first place) is both faster and more readable than if/else - so there's not much to debate about. (However: yes switch is faster. But n is usually too small to make a real difference. Even at 50 options, if overall performance is improved that much, the method is probably called too often.) – Disillusioned Sep 24 '13 at 22:50
  • 7
    What's with the C++ hate? The C++ compilers are getting better too, and big switches are just as bad in C++ as in C#, and for exactly the same reason. If you're surrounded by former C++ programmers who give you grief it's not because they're C++ programmers, it's because they are bad programmers. – Sebastian Redl Feb 06 '14 at 11:15
  • Back in the day I use to write C/C++ for a variety of system. I have been hard core Java for years. I have never written any C# code. What I can add to this conversation is that this isn't just a language specific thing. It's good OO and programming practice to functionally decompose your code. And use polymorphism to handle complexity effectively – Christian Bongiorno Apr 10 '14 at 17:33
  • @Craig: There's no reason for if/else trees to have linear complexity. Oh, the ladder does, but arrange the tests to divide-and-conquer in a balanced tree and you have logarithmic time. Sort by a-priori probability, and you can do even better than that. – Ben Voigt Aug 12 '14 at 18:41
  • Please see the reference in my answer: large switch statements are converted to Dictionaries by the compiler. Your favourite approach makes sense therefore both as that's what the switch statement ends up as, and it's easier to read. – David Arno Jan 05 '16 at 12:35
  • @Job For your EDIT2, I've actually had a situation where I had to replace an old C hacker's giant switch with a dictionary-like array in order to make the code readable/sustainable. – Snoop Apr 19 '16 at 11:23
  • Since the compiler provides the best optimization for you, I think the switch statement is best - it is basic coding understandable by any C# programmer, rather than advanced Dictionary of delegates or other tricky method pointer logic, providing shared state means you don't have to work hard to get around not having shared state when you need it. – NetMage Feb 08 '19 at 21:26
43

Unless your colleague can provide proof that this alteration provides an actual, measurable benefit on the scale of the whole application, it is inferior to your approach (i.e. polymorphism), which actually does provide such a benefit: maintainability.

Microoptimisation should only be done after bottlenecks are pinned down. Premature optimization is the root of all evil.

Speed is quantifiable. There's little useful information in "approach A is faster than approach B". The question is "How much faster?".
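Even a crude Stopwatch harness is enough to put a number on the claim (this is only a sketch; for serious measurements use a proper benchmarking library such as BenchmarkDotNet):

```csharp
using System;
using System.Diagnostics;

static class MicroBench
{
    // Times an action over many iterations and reports elapsed milliseconds.
    public static double Measure(Action action, int iterations)
    {
        action(); // warm up once so JIT compilation isn't counted
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            action();
        sw.Stop();
        return sw.Elapsed.TotalMilliseconds;
    }
}
```

Run both versions of the code through the same harness, then ask whether the difference matters next to, say, a single database round trip.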

Jim G.
  • 8,005
  • 3
  • 36
  • 66
back2dos
  • 30,060
  • 2
    Absolutely true. Never claim that something is faster, always measure. And measure only when that part of the application is the performance bottleneck. – Kilian Foth May 25 '11 at 15:53
  • 1
    And know how to measure objectively. – Job May 25 '11 at 15:56
  • 7
    -1 for "Premature optimization is the root of all evil." Please display the entire quote, not just one part that biases Knuth's opinion. – alternative May 25 '11 at 19:15
  • 2
    @mathepic: I intentionally did not present this as a quote. This sentence, as is, is my personal opinion, although of course not my creation. Although it may be noted that the guys from c2 seem to consider just that part the core wisdom. – back2dos May 25 '11 at 20:53
  • 10
    @alternative The full Knuth quote "There is no doubt that the grail of efficiency leads to abuse. Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil." Describes the OP's coworker perfectly. IMHO back2dos summarised the quote well with "premature optimisation is the root of all evil" – MarkJ Jul 25 '13 at 10:47
  • 2
    @MarkJ 97% of the time – alternative Jul 25 '13 at 18:20
28

Who cares if it's faster?

Unless you're writing real-time software, it's unlikely that the minuscule speedup you might possibly get from doing something in a completely insane manner will make much difference to your client. I wouldn't even battle this one on the speed front; this guy is clearly not going to listen to any argument on the subject.

Maintainability, however, is the aim of the game, and a giant switch statement is not even slightly maintainable. How do you explain the different paths through the code to a new guy? The documentation would have to be as long as the code itself!

Plus, you've then got the complete inability to unit test effectively (too many possible paths, not to mention the probable lack of interfaces etc.), which makes your code even less maintainable.
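By contrast, once each case lives behind an interface, every piece can be tested in isolation. The IDiscountRule example below is hypothetical, not from anyone's real code:

```csharp
using System;

// One case of the former switch, now a small class with a single job.
interface IDiscountRule
{
    decimal Apply(decimal price);
}

sealed class SeniorDiscount : IDiscountRule
{
    public decimal Apply(decimal price) => price * 0.9m;
}

static class SeniorDiscountTests
{
    // A unit test constructs only this one rule; it never has to
    // drive execution down every path of a giant switch.
    public static void Apply_TakesTenPercentOff()
    {
        var rule = new SeniorDiscount();
        if (rule.Apply(100m) != 90m)
            throw new Exception("expected 90");
    }
}
```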

[On the being-interested side: the JITter performs better on smaller methods, so giant switch statements (and their inherently large methods) will harm your speed in large assemblies, IIRC.]

Ed James
  • 3,489
  • 3
  • 23
  • 33
  • Definitely this. – DeadMG May 25 '11 at 18:58
  • +1 for 'a giant switch statement is not even slightly maintainable' – Korey Hinton Jul 25 '13 at 14:14
  • 3
    A giant switch statement is a lot easier for the new guy to comprehend: all the possible behaviors are collected right there in a nice neat list. Indirect calls are extremely difficult to follow, in the worst case (function pointer) you need to search the entire code base for functions of the right signature, and virtual calls are only a little better (search for functions of the right name and signature and related by inheritance). But maintainability is not about being read-only. – Ben Voigt Aug 12 '14 at 18:44
  • Sentence #1: “Beginners save nanoseconds. People who are good at it save milliseconds.” – gnasher729 Oct 23 '21 at 14:45