No, programming languages do not have speeds

I see time and time again this myth that programming languages have inherent speeds being peddled in discussions between programmers on IRC channels, forums, and so on. Some people think that, for example, C is much, much faster than Python. And a few of them go even further – they argue that C is a language that is “close to the metal”. Well, let me tell you: the code that you write in C actually targets an abstract machine, and a compiler eats that code and spits out assembly which roughly does what it is supposed to according to the C language standard. Why roughly? Simply because that abstract machine is loosely defined and leaves a lot of behaviour undefined. This contradicts the original argument that C is “close to the metal”. How could real hardware be poorly defined? Could it be that executing a certain instruction would one time do something completely different, something not mentioned at all in the CPU manual? The answer is obviously no. Nobody would use such computers.
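
To make that concrete, here is a minimal C sketch of my own (not taken from the standard) showing two everyday operations whose behaviour the C standard simply leaves undefined:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        int x = INT_MAX;

        /* Signed integer overflow is undefined behaviour in C: the standard
           does not say whether this wraps around, traps, or lets the compiler
           assume it can never happen and optimize accordingly. */
        x = x + 1;

        int a[4] = {1, 2, 3, 4};

        /* Reading past the end of an array is also undefined behaviour:
           the language does not say what value, if any, comes back. */
        printf("%d %d\n", x, a[4]);

        return 0;
    }

What such a program actually does can change between compilers, optimization levels, and even versions of the same compiler – which is exactly the “roughly” above.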

In general, a language (whether spoken or a programming language) is just a combination of a lexicon and a syntax; in other words, it is just a way for us to express our thoughts so that they are understood by other people or by computers, whether by a brain interpreting the sound waves that reach the eardrums or by a machine interpreting the logical structure of text. A computer at the end of the parsing pipeline really just executes a *lot* of “primitive” instructions at the CPU level (for brevity, let’s say that only the CPU executes instructions and makes significant decisions). And those instructions don’t all take the same amount of time to execute. So how could one language or another be inherently faster? It really mostly *depends* on how the source text is parsed and translated into those instructions. Obviously, CPUs have these things called caches and so on which can influence the results, but the former part still remains the most important one.
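
For instance, the very same arithmetic can run at noticeably different speeds purely because of the order in which memory is touched. The following is a hedged sketch (the matrix size is arbitrary, and how big a difference you see depends on your compiler, its flags, and your hardware):

    #include <stdio.h>
    #include <time.h>

    #define N 4096

    /* Sum the same matrix twice: row by row (cache-friendly, sequential
       accesses) and column by column (cache-unfriendly, strided accesses).
       Both loops execute essentially the same instructions. */
    static int m[N][N];

    int main(void)
    {
        long sum = 0;
        clock_t t;

        t = clock();
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                sum += m[i][j];                       /* row-major order */
        printf("row-major:    sum=%ld  %.3f s\n", sum,
               (double)(clock() - t) / CLOCKS_PER_SEC);

        t = clock();
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                sum += m[i][j];                       /* column-major order */
        printf("column-major: sum=%ld  %.3f s\n", sum,
               (double)(clock() - t) / CLOCKS_PER_SEC);

        return 0;
    }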

Certainly, all of those abstractions don’t come for free, but the bottom line is that the languages themselves don’t define how fast they are. Instead, what we should be talking about, at the very least, are their implementations. I am pretty sure that when people are having those discussions, they do not have some particular fastest or slowest implementation in mind. So, if we are talking about speeds, we ought to compare the speeds of functionally equivalent, compiled programs on specific implementations. Even then it is problematic, because we need to agree on the definition of “speed”.

The number of CPU instructions that are executed? Well, some instructions are faster and some are slower, and modern CPUs can execute several of them at once. Just because there are more of them does not mean that the program is slower.
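
A small, hedged illustration in C: both functions below compute x / 9, and on most CPUs the longer multiply-and-shift sequence in the second one is faster than the single division instruction the first one naively maps to – which is precisely why optimizing compilers emit the longer form for you.

    #include <stdint.h>
    #include <stdio.h>

    uint32_t div9_divide(uint32_t x)
    {
        return x / 9;    /* naively, one (slow) division instruction */
    }

    uint32_t div9_multiply(uint32_t x)
    {
        /* 0x38E38E39 is the fixed-point reciprocal of 9:
           (x * 0x38E38E39) >> 33 equals x / 9 for every 32-bit x.
           More instructions, yet typically faster than dividing. */
        return (uint32_t)(((uint64_t)x * 0x38E38E39u) >> 33);
    }

    int main(void)
    {
        /* Sanity check that the two versions agree on a range of inputs. */
        for (uint32_t x = 0; x < 1000000; x++)
            if (div9_divide(x) != div9_multiply(x))
                printf("mismatch at %u\n", x);
        printf("done\n");
        return 0;
    }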

The number of source code lines? The sheer size of the source does not translate directly into how big the resulting executable is, let alone into how many instructions it executes. Also, see the previous paragraph.

Memory usage? Even if some program allocates, let’s say, 2 GB of RAM, that still doesn’t mean it is slower. It might calculate the answer more quickly regardless of how much RAM it needs.
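
A classic way to see this is trading memory for speed. The sketch below (illustrative only) spends a 256-byte table so that counting the set bits of a byte becomes a single array lookup instead of a loop; the program uses more memory and typically answers faster:

    #include <stdint.h>
    #include <stdio.h>

    /* Count set bits one at a time: no extra memory, more work per call. */
    static unsigned popcount_loop(uint8_t b)
    {
        unsigned n = 0;
        while (b) {
            n += b & 1u;
            b >>= 1;
        }
        return n;
    }

    /* Precomputed answers for every possible byte value. */
    static uint8_t table[256];

    static void build_table(void)
    {
        for (int i = 0; i < 256; i++)
            table[i] = (uint8_t)popcount_loop((uint8_t)i);
    }

    int main(void)
    {
        build_table();

        const uint8_t data[] = {0x00, 0x0F, 0xFF, 0xA5};
        for (size_t i = 0; i < sizeof data; i++)
            printf("popcount(0x%02X) = %u (loop) = %u (table)\n",
                   (unsigned)data[i], popcount_loop(data[i]),
                   (unsigned)table[data[i]]);
        return 0;
    }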

All in all, ideally we would be talking only about objective things. However, I think that is a utopia, both in general and in this case, because the benchmarking software still runs on some kind of operating system, on specific hardware, and so on. Perhaps there is no need to go into that much detail about objectivity when comparing programming languages, but we should at least look at the tip of the iceberg.

That tip is the specific compiler and its version. The benchmarking method and/or software should also be included. You could even include the definitions of words such as “speed”, so that everyone is talking about the same thing when using (sometimes) convoluted terms. So, please, let’s all do our part and make our place a little bit more objective, instead of spreading anecdata and encouraging cargo-cult behaviour (people blindly switching from one language to another because they think it will make their programs magically faster) by saying that programming language X is faster than Y.
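
As a sketch of what that could look like in practice, here is a tiny C benchmark that reports its own context along with the measurement. The workload and the report format are made up for illustration; __VERSION__ is provided by GCC and Clang but not by every compiler, which is why it is guarded:

    #include <stdio.h>
    #include <time.h>

    /* A toy workload, just so there is something to time. */
    static unsigned long fib(unsigned n)
    {
        return n < 2 ? n : fib(n - 1) + fib(n - 2);
    }

    int main(void)
    {
        clock_t start = clock();
        unsigned long result = fib(35);
        double seconds = (double)(clock() - start) / CLOCKS_PER_SEC;

        /* The number alone means little; report how it was obtained. Flags,
           operating system, and hardware details would ideally go here too. */
        printf("workload: fib(35) = %lu\n", result);
        printf("cpu time: %.3f s (clock(), CLOCKS_PER_SEC = %ld)\n",
               seconds, (long)CLOCKS_PER_SEC);
    #ifdef __VERSION__
        printf("compiler: %s\n", __VERSION__);
    #endif
    #ifdef __STDC_VERSION__
        printf("standard: __STDC_VERSION__ = %ld\n", (long)__STDC_VERSION__);
    #endif
        return 0;
    }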

4 thoughts on “No, programming languages do not have speeds”

  1. > Simply because that abstract machine is loosely defined and leaves a lot of behaviour undefined. This contradicts the original argument that C is “close to the metal”. How could real hardware be poorly defined? Could it be that executing a certain instruction would one time do something completely different, something not mentioned at all in the CPU manual? The answer is obviously no. Nobody would use such computers.

    I’m sorry to tell you that the real processors we use do have undefined behaviour. Lots of it. Look in the Intel or ARM manuals and see.

  2. Well, OK, that is true, but what I meant is that with C one is adding another layer of undefinedness on top. Besides, C has a comparatively short standard and leaves most of the behaviour undefined, whereas the Intel CPU manuals, for example, are thousands of pages long. Also, as far as I know, most of what is undefined in CPUs is the contents of certain registers after certain operations. So I see the resemblance, but I would not put the definedness of those two things on the same level.

  3. You seem to be confusing the notions of parsing, compiling code, and executing instructions, and this is probably the reason for some of the mistaken assertions you made.

    The execution time of instructions has nothing to do with parsing, which is done only once, by the compiler, during its first stage of processing the program source code. That stage involves reading the source code and storing it in an internal representation which the compiler can then transform (usually by breaking it down into a simpler representation and optimizing it) into an intermediate executable format such as bytecode, or into final CPU assembly code (or, indeed, by assembling it directly into its binary form). The time taken by this compilation is therefore irrelevant here.

    The time required for a particular end-user’s CPU model to execute the produced machine code mainly depends on its pipeline resources and on the interdependencies between instructions.

    “Close to the metal” means being more likely to produce an efficient list of instructions for the target processor. Languages differ in their propensity for being translated into efficient instructions.
    One of the most obvious differences is the necessity of run-time type introspection. It is no secret that the Python language has poor performance: the types are completely dynamic, so the compiler cannot anticipate the operations. Instead, the choice of operation must be made during execution, depending on the type, which yields a much longer list of instructions to execute than, say, a C++ program, or a C# program that doesn’t use dynamic types.

    For example, adding two integers in C can be translated directly into CPU instructions. But adding two variables in Python requires a test on the operands’ types and a branch to the actual operation – be it an integer addition, a floating-point addition, a memory manipulation to append the data to a list, and so on (a small sketch after these comments illustrates the difference).

    “The number of CPU instructions that are executed?” You argue that some instructions are slower or faster. That is generally not the case: the timing of those operations is well defined and usually very similar (one cycle in each pipeline stage). Execution time may vary with the dependencies between instructions, because they can stall the CPU pipelines. Some languages compile better in that regard because the instruction flow is known at compilation time (e.g. when the types are known, see above). And in any case, let’s be realistic… executing more instructions will be slower than executing fewer!

    “Memory usage?” It is actually very important. Not only does more memory consume more power – it is actually the foremost criterion – but it also requires more memory management. If a language uses more memory, not only does that require moving more data to and from memory (an amount that translates directly into transfer time), it can also be an indication that the language relies on some form of automatic memory management. As the most typical example, garbage collection is known as a synonym for unpredictable slow-downs. C and C++ can be as predictable and efficient as the programmer wants them to be; Java depends on the JVM implementation (both for bytecode interpretation/compilation and for memory management), over which the programmer has no control.

    In conclusion, yes, programming languages do have significant performance ranges relative to one another; they can be measured, and so many results have been published on that topic that I will not bother to reference them. Arguably, those ranges may vary depending on the application, but they will stay in the same ballpark.
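
To illustrate the dynamic-typing point raised in the comment above, here is a hedged C sketch that mimics, in a very simplified form, the extra work a dynamically typed runtime does for a single addition. The tagged-union layout and the names are invented for illustration and do not correspond to CPython’s actual object model:

    #include <stdio.h>

    /* Adding two ints when the types are known at compile time:
       essentially a single add instruction. */
    static int add_static(int a, int b)
    {
        return a + b;
    }

    /* A crude imitation of a dynamically typed value: a type tag plus a
       payload. (Invented for illustration only.) */
    typedef enum { T_INT, T_DOUBLE } tag_t;

    typedef struct {
        tag_t tag;
        union {
            int    i;
            double d;
        } as;
    } value_t;

    /* "Adding" two dynamic values: inspect the tags, branch to the right
       operation, build a new tagged result. Many more instructions than a
       single add, which is where much of the usual slowdown comes from. */
    static value_t add_dynamic(value_t a, value_t b)
    {
        value_t r;
        if (a.tag == T_INT && b.tag == T_INT) {
            r.tag = T_INT;
            r.as.i = a.as.i + b.as.i;
        } else {
            double x = (a.tag == T_INT) ? a.as.i : a.as.d;
            double y = (b.tag == T_INT) ? b.as.i : b.as.d;
            r.tag = T_DOUBLE;
            r.as.d = x + y;
        }
        return r;
    }

    int main(void)
    {
        value_t a = { T_INT,    { .i = 2   } };
        value_t b = { T_DOUBLE, { .d = 0.5 } };
        value_t c = add_dynamic(a, b);

        printf("static:  %d\n", add_static(2, 3));
        printf("dynamic: %g\n", c.tag == T_INT ? (double)c.as.i : c.as.d);
        return 0;
    }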
