Programming languages and programming paradigms

Published on 2016-01-02. Modified on 2021-01-26.

The Internet is filled with articles, blog posts, and forum debates about which programming language or which programming paradigm is better than another. Python vs. Java vs. PHP vs. Ruby. Procedural vs. object oriented, object oriented vs. functional, etc. In this article we'll take a much deeper look at programming languages and programming paradigms and try to understand these concepts from a different angle.

Binary patterns

When I studied low-current electronics and engineering I had the opportunity to design and program microcontrollers, microcircuits, and binary circuit boards. In relation to programming this is a blessing, because even if you're never going to actually work with computers or electronics at such low levels, such insight provides a different outlook than most programmers have.

Some people leave university with an advanced degree in software development, yet they still do not possess a basic understanding of how the technology works.

On the very basic level a computer is "just" a bunch of electronic circuits wired together. These circuits consist of components that each have a very specific task. One such component is the transistor. Transistors are commonly used as electronic switches which can be either in an "on" or "off" state. This state is represented by the presence of an electric current or no current (or a very low current). The state of the transistor can be mapped to a mathematical pattern of ones and zeros, which again forms the base-2 numeral system, also known as the binary number system. One binary digit consisting of either a 1 or a 0 is called "a bit". Eight bits are called a byte.

The interesting thing about binary digits is that if we group bits together in patterns we can use these bits to represent numbers, letters, and symbols. And if we agree on such representations we can create standard maps of patterns. One such map is the ASCII map. The ASCII map was developed from telegraph code and its first commercial use was as a seven-bit teleprinter code promoted by Bell data services. ASCII is based on the English alphabet. The characters encoded are the numbers 0 to 9, lowercase letters a to z, uppercase letters A to Z, basic punctuation symbols, control codes that originated with Teletype machines, and a space.

Using the ASCII map we can see that the binary representation for the uppercase letter 'A' is 01000001. In computer memory the binary pattern 01000001 actually consists of a bunch of transistors, each with or without an electric current present.

Let's see what happens in a C++ computer program when we manipulate such a representation.

1. #include <iostream>
2.
3. int main()
4. {
5.     char ch = 'A';
6.     short s = ch;
7.
8.     std::cout << ch << std::endl;
9.     std::cout << s << std::endl;
10. }

In line 5 we're telling the computer that we want to reserve space in memory to store the character 'A', represented by the variable name "ch", and in line 6 we're telling the computer that we want to reserve memory to store a short integer represented by the variable name "s". On that same line we copy the contents of the variable "ch" into "s". We then print out both values.

The result will be the following:

A
65

Why are we getting the number 65 when we're printing out the contents of the variable "s"?

The reason is that we copied the binary contents of the computer memory to another memory location represented by "s", so the content is the same. However, because we're now telling the C++ compiler that we're dealing with a short integer, the binary representation gets mapped to the number 65 instead of the letter 'A'. This conforms to the ASCII map, in which the binary pattern 01000001 represents the letter 'A' when we're dealing with letters and the number 65 when we're dealing with numbers.

    +--------+
+-+ |01000001|  ch = ASCII letter 'A'
|   +--------+
|
|
|   +--------+
+-> |01000001|  s = ASCII number '65'
    +--------+

If we use a C program instead and use the printf function from the standard library, we can tell printf how to display the binary pattern at the memory location represented by our "ch" variable:

1. #include <stdio.h>
2.
3. int main()
4. {
5.     char ch = 'A';
6.
7.     printf("%c\n", ch);
8.     printf("%d\n", ch);
9. }

In line 7 we're telling printf to display the data as a character, whereas in line 8 we're telling printf to display the data as a decimal number. The output becomes:

A
65

The point of this example is to understand that because we're using the ASCII map we're getting the results above, but in reality we could map the binary number 01000001 to anything we like.

Randomly mapping binary numbers to different things makes no sense of course, so computer scientists and companies eventually agreed on different maps for different uses.

Another map is the UTF-8 map that is capable of encoding all possible characters, or code points, in Unicode. UTF-8 uses 8-bit code units and was designed for backward compatibility with ASCII. UTF-8 is the dominant character encoding for the World Wide Web, accounting for 85.1% of all Web pages in September 2015.
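To make this concrete, here is a minimal sketch in Go (whose source code is defined to be UTF-8). It simply prints the raw bytes that UTF-8 uses for a few characters; note how the ASCII letter 'A' is still the single byte 65, while characters outside the ASCII range need several bytes:

package main

import "fmt"

func main() {
    // 'A' is within the ASCII range, so UTF-8 encodes it as one byte: 65 (01000001).
    fmt.Println([]byte("A")) // [65]

    // 'æ' is outside the ASCII range, so UTF-8 needs two bytes.
    fmt.Println([]byte("æ")) // [195 166]

    // '€' needs three bytes.
    fmt.Println([]byte("€")) // [226 130 172]
}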

Now that we understand how binary digits represent numbers, letters, and so on, we can also figure out ways to manipulate these binary digits in order to perform binary arithmetic: the mathematics of integers, rational numbers, real numbers, or complex numbers under addition, subtraction, multiplication, and division.
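As a small illustration of binary arithmetic (a sketch in Go, using its binary literals), adding 1 to the bit pattern for 'A' produces the bit pattern for 'B', because the arithmetic operates on the underlying binary number rather than on "letters":

package main

import "fmt"

func main() {
    a := 0b01000001     // 65, the ASCII pattern for 'A'
    b := a + 0b00000001 // binary addition: 01000001 + 00000001 = 01000010 (66)

    fmt.Println(a, b)               // 65 66
    fmt.Printf("%c %c\n", a, b)     // A B
    fmt.Printf("%08b %08b\n", a, b) // 01000001 01000010
}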

Programming languages

So what is a programming language?

The Z3 computer, invented by Konrad Zuse in 1941 in Berlin, was a digital and programmable computer. The Z3 contained 2,400 relays that made up its circuits, which provided a binary, floating-point, nine-instruction computer. The Z3 was programmed through a specially designed keyboard and punched tape.

Computers before the 1970s had front-panel switches for programming. The computer program was written on paper for reference. An instruction was represented by a configuration of on/off settings. After setting the configuration, an execute button was pressed. This process was then repeated. Computer programs were also manually input via paper tape or punched cards. After the medium was loaded, the starting address was set via switches and the execute button was pressed.

Developing computer software this way was of course extremely time consuming and difficult, and eventually "abstractions" were invented in order to make it easier to program computers. The word "abstraction" has a very broad meaning, but in computer science an "abstraction" generally refers to a model of something that can be used and re-used without having to start over each time.

We can view abstractions in the following way:

+--------------------+
|Programming language| - Assembly, C, C++, Rust, Go, Java, Python, etc.
+----+---------------+
     |
+----v------+
|Abstraction| - Assembler, compiler, or interpreter.
+----+------+
     |
+----v---+
|01000001| - Computer memory
+--------+

One such abstraction is an assembler that translates the Assembly programming language into machine code.

From the Wikipedia article on the subject we can get the following explanation:

A program written in assembly language consists of a series of (mnemonic) processor instructions and meta-statements, comments and data. Assembly language instructions usually consist of an opcode mnemonic followed by a list of data, arguments or parameters. These are translated by an assembler into machine language instructions that can be loaded into memory and executed.

An example of some assembly code is taken from the Wikipedia article, in which we instruct an x86/IA-32 processor to move the value 97, represented by the binary digits 01100001, into a register in the processor.

MOV AL, 61h    ; Load AL with 97 decimal (61 hex)

The binary code for this instruction is 10110 followed by a 3-bit identifier for which register to use. The identifier for the AL register is 000, so the following machine code loads the AL register with the data 01100001.

10110000 01100001
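As a side note, a small sketch in Go can confirm that these two bytes are just numbers which can be written in binary, hexadecimal, or decimal, and that the operand byte happens to be the ASCII pattern for the lowercase letter 'a':

package main

import "fmt"

func main() {
    opcode := 0b10110000  // 10110 ("load register with immediate value") + 000 (the AL register)
    operand := 0b01100001 // 61 hex, 97 decimal

    fmt.Printf("%08b = %#x = %d\n", opcode, opcode, opcode)                     // 10110000 = 0xb0 = 176
    fmt.Printf("%08b = %#x = %d = '%c'\n", operand, operand, operand, operand)  // 01100001 = 0x61 = 97 = 'a'
}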

Programming in assembly language was still difficult and very time consuming, and eventually other "higher level abstractions" were invented. Wikipedia also has a nice article about the History of Programming Languages.

So, in light of the above we can actually view a computer programming language as a bunch of symbols that make it easier for us to manipulate the electricity in transistors. It allows us to easily insert data into memory and then manipulate that data.

No matter what programming language we're using we're essentially just abstracting away the manipulation of bits in memory using different symbols and "instructions".

The efficiency of the resulting computer program has nothing to do with the programming language itself. Rather it is a question of how the "abstraction" works, the abstraction being the assembler, the interpreter, or the compiler that translates the symbols and instructions we provide (using a specific programming language) into binary digits. The end result is always binary digits.

Interpreters vs. compilers

The higher the level of abstraction we're using, the more detail is lost.

This means that the closer we are to the hardware level, the more control and detail we have. The further away we move from the hardware level, the more control and detail is lost.

There is only one programming language that any computer can actually understand and execute: its own native binary machine code. This is the lowest possible level of language in which it is possible to write a computer program. All other languages are said to be high or low level according to how closely they can be said to resemble machine code.

Low-level languages have the advantage that they can be written to take advantage of any peculiarities in the architecture of the central processing unit (CPU) which is the "brain" of any computer. Thus, a program written in a low-level language can be extremely efficient, making optimum use of both computer memory and processing time. However, to write a low-level program takes a substantial amount of time, as well as a clear understanding of the inner workings of the processor itself. Therefore, low-level programming is typically used only for very small programs, or for segments of code that are highly critical and must run as efficiently as possible.

High-level languages permit faster development of large programs. The final program as executed by the computer is not as efficient, but the savings in programmer time generally far outweigh the inefficiencies of the finished product. This is because the cost of writing a program is nearly constant for each line of code, regardless of the language. Thus, a high-level language where each line of code translates to 10 machine instructions costs only one tenth as much in program development as a low-level language where each line of code represents only a single machine instruction.

If we write a program in the C programming language we lose the ability to manually move data around in registers as we could in assembly language. However, if we write a program in the Prolog programming language we lose the ability to link structures using pointers as we could in C. The higher the level we're using, the higher the abstraction penalty.
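To illustrate what "linking structures using pointers" means, here is a minimal sketch. It is written in Go rather than C, since Go also exposes pointers (although without C's pointer arithmetic); the Node type and the values are made up for the example:

package main

import "fmt"

// Node is a hypothetical list element that links to the next element
// through a pointer, i.e. through the memory address of another Node.
type Node struct {
    value int
    next  *Node
}

func main() {
    third := &Node{value: 3}
    second := &Node{value: 2, next: third}
    first := &Node{value: 1, next: second}

    // Follow the pointers from node to node.
    for n := first; n != nil; n = n.next {
        fmt.Println(n.value)
    }
}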

+-----------+
|Interpreter| - Java, Python, Ruby, PHP, etc. (very high level).
+-----^-----+
      |
 +----+---+
 |Compiler| - C, C++, Go, Rust (high level).
 +----^---+
      |
 +----+----+
 |Assembler| - Assembly language (low level).
 +----^----+
      |
 +----+---+
 |01000001| - Machine code/Binary instructions (hardware level).
 +--------+

A compiler converts source code into binary instructions for a specific processor architecture. This conversion results in a binary program that can be executed without further translation on a computer. A cross compiler can generate binary code for a different processor architecture than the one it runs on.

An interpreter executes the source code "on the fly" without compiling it into machine code.

An interpreted program can be distributed as source code without compilation. It then needs to be translated in each final machine, which takes more time but makes the program distribution independent of the machine's architecture. The portability of interpreted source code is dependent on the target machine having a suitable interpreter.

Running interpreted code is always slower than running compiled code because the interpreter must analyze each statement in the program each time it is executed. Access to variables is also slower in an interpreter because the mapping of identifiers to storage locations must be done repeatedly at run-time rather than at compile time.

There are various compromises between the development speed when using an interpreter and the execution speed when using a compiler.

Programming paradigms

A programming paradigm is in reality nothing more than a "style of programming", i.e. a way to organize the code. Each paradigm is best suited for a specific kind of task, and there is no single paradigm that's better than the others. It all depends on the problem that needs to be solved.

Different programming languages support different paradigms. Some programming languages have been designed to support only one particular paradigm while others support multiple paradigms.

Once again I'll refer to a very good article on Wikipedia, Programming paradigms, in which we find the following information:

Programming paradigms that are often distinguished include imperative, declarative, functional, object-oriented, procedural, logic and symbolic programming. With different paradigms, programs can be seen and built in different ways; for example, in object-oriented programming, a program is a collection of objects interacting in explicitly defined ways, while in declarative programming the computer is told only what the problem is, not how to actually solve it.

An important notion is the following:

Some programming language researchers criticize the notion of paradigms as a classification of programming languages, e.g. Krishnamurthi. They argue that many programming languages cannot be strictly classified into one paradigm, but rather include features from several paradigms.

Which is very correct.

Solving problems requires the right set of tools and the right set of tools requires the right concept of usage.

If for example we're building a house we first need the right set of tools. Which set of tools we require depends on what kind of house we're building, the materials we're using, and the location where we plan to erect the house. Secondly, we need a deep understanding of the different approaches to building a house depending upon the main usage of the house.

If we were to approach this task as most programmers do, using only one specific programming paradigm, and one specific programming language, we would always be building the house using the exact same set of tools in the exact same manner, even if the usage of the house changes, even if the location and soil changes, and even if the building materials are different.

Such a programmer would for example always be solving different problems using only object oriented programming. Another programmer might be using a different programming paradigm, but still always use only one specific programming language.

If we compare this approach to other areas of life we quickly realize that this is not very productive. Using the right tool for the job and using the best approach suitable for solving a particular problem is universal.

From this perspective a programming language should optimally support multiple programming paradigms; however, another approach is to combine several programming languages, each with its own programming paradigm and each solving its own little part of the problem.

An interpreted programming language might support several programming paradigms very well, yet if our problem has aspects that require extreme speed, we might need to combine the interpreted solution with the efficiency of a compiled solution.

Hence, multiparadigm is really what we need to be thinking about when dealing with problem solving, yet not necessarily restricted to only one particular programming language.

Adding paradigms to a programming language also adds complexity, which again makes the programming language more difficult to understand and use.

One of the reasons C is considered such a powerful programming language is not only its "lower level" compared to other programming languages, but also the beauty of its simplicity.

The Go programming language is another good example. While Go supports the procedural, object oriented, and functional paradigms, the language has been designed to be very simple and easy to use. Hence, in comparison to other modern languages it lacks several features usually associated with a truly object oriented paradigm. Some people consider this a weakness of the Go programming language. I disagree. I regard Go as a clever and efficient design that has removed much of what I consider to be unnecessary complexity.
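As a rough illustration of how Go approaches the object oriented paradigm without classes, here is a minimal sketch using a struct, a method, and an interface (the Greeter and Person types are made up for the example):

package main

import "fmt"

// Greeter is an interface: any type with a Greet() method satisfies it,
// without declaring anywhere that it "implements" anything.
type Greeter interface {
    Greet() string
}

// Person is a plain struct; Go has no classes.
type Person struct {
    Name string
}

// Greet is a method attached to Person.
func (p Person) Greet() string {
    return "Hello, my name is " + p.Name
}

func main() {
    var g Greeter = Person{Name: "Ada"}
    fmt.Println(g.Greet())
}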

Some people think that in order to use the object oriented paradigm you need to have classes, but classes are just tools that help in the implementation of the object oriented paradigm. The classes themselves have absolutely nothing to do with object oriented programming.

A really simple example in PHP illustrates this.

// Procedural:
function foo()
{
    echo 'Hello from foo.';
}

function bar()
{
    echo 'Hello from bar.';
}

foo();
bar();

// Still procedural but with a class:
class Hello
{
    public function foo()
    {
        echo 'Hello from foo.';
    }

    public function bar()
    {
        echo 'Hello from bar.';
    }
}

$hello = new Hello();
$hello->foo();
$hello->bar();

In the above example the class serves only as a container for the two functions; other than that, the entire construct is procedural programming.

When we're dealing with the object oriented paradigm some buzzwords have gained wide acceptance. When I first studied the object oriented paradigm I struggled with understanding the concepts because of these buzzwords.

To me a "variable" is a "variable", no matter how it's used. And a "function" is always a "function". But that's not entirely correct from an object oriented point of view.

In a pure procedural approach a function lives in the global scope and can be accessed from anywhere within the code. The same is true for variables. However, if we group functions and variables together inside a "container" (a class in C++, Java, Python, and PHP) we're dealing with another situation. The container allows us to define which of our functions and variables should be available to functions and variables outside that particular container. Also, because containers are tools specifically used to represent "objects" in an object oriented programming language, functions and variables pertaining to one particular container are viewed as "members" of that particular object, and as such they "change" their names to "methods" and "properties": methods being functions and properties being variables.

// This is called a variable.
$num = 3;

// This is called a function.
function name()
{
    // Do something.
}

// The class helps us group specific related code together in a container.
class Person
{
    // This is now called a property.
    public $num = 3;

    // This is now called a method.
    private function name()
    {
        // Do something.
    }
}

In the above example the function name() inside the Person class becomes inaccessible from outside the class itself because it has been declared private.

Let's look at the real meaning of some of the object oriented terminology, but keep in mind that the terminology is implemented very differently from programming language to programming language and sometimes it's quite difficult to see "what means what" in which programming language.

Encapsulation: is any kind of mechanism that allows related data and the functions that operate on that data to be grouped together, while restricting direct access to some of that data from the outside.

In PHP, Python, Java, and C++ for example this feature is implemented using "classes".

Inheritance: or "abstraction" (which is a very broad and general concept) is the ability of classes (or whatever mechanism is used) to inherit properties (variables) and methods (functions) from other classes.

Let's make a really simple example in PHP:

class Person
{
    public function name()
    {
        // Do something.
    }
}

class Teacher extends Person
{
    public function teach()
    {
        // Do something.
    }
}

class Student extends Person
{
    public function learn()
    {
        // Do something.
    }
}

$teachers = new Teacher();
$students = new Student();

$teachers->name();
$teachers->teach();

$students->name();
$students->learn();

In this simple example we have extended the Person class into the Teacher and Student classes, which inherit the name() method from the Person class. Why is that useful? Because it is now possible to "extend" our code with more classes if we need more functionality, without having to change the original code in any way, yet at the same time we can still use the code located in the Person class via the newly added classes. At the same time, related actions and resources still get grouped together in relation to each other.

From a more philosophical point of view we can say that "every teacher is a person" and "every student is a person" and as such should be able to access their "names", but not every Teacher is a Student and vice versa. Hence, it is not possible for the Student class to extend the Teacher class nor for the Teacher class to extend the Student class.

In general inheritance should be avoided, or at least used as little as possible, because when projects grow many classes can quickly become quite dependent upon each other, resulting in what's called "object oriented spaghetti". Still, it is a powerful concept.

Polymorphism: is the ability to treat objects of different types in a similar manner, but the way people understand and implement this varies a great deal.

One way to achieve polymorphism is through "inheritance" as described above, i.e. classes inherit code from other classes. Another, and much better, way is a feature called an "interface".

With an interface, classes don't inherit from each other; instead a general "blueprint" is defined with properties (variables) and methods (functions), but no real code exists within the interface. Other classes can then use this "blueprint" to implement the same set of properties and methods, but with completely different implementations.

There is a big difference in how interfaces are implemented between object oriented programming languages, and it is necessary to study each language specification to understand the correct usage.

Let's view a very simple example of polymorphism implemented in PHP by the usage of an interface:

interface Output
{
    // No workable code in the interface. We're just declaring a method.
    public function printSomething();
}

class HTML implements Output
{
    public function printSomething()
    {
        // Print HTML.
        echo '<h1>Hello world</h1>';
    }
}

class JSON implements Output
{
    public function printSomething()
    {
        // Print JSON.
        echo '{"Greeting": "Hello world"}';
    }
}

The "Output" interface has no code. It only specifies what every class that "implements" this interface must contain a printSomething() method. Each class that implements the "Output" interface then must have a printSomething() method, but the code in printSomething() method can be very different from each class. The classes that implements the interface can also still contain properties and methods not listed in the interface. All methods declared in an interface must be public as this is the nature of an interface.

This is useful because it again becomes possible to extend the code base with new classes with completely new functionality. In the above example, if the need arose to suddenly print XML, we could just add such a class without having to change anything in the existing code.

We could then do the following:

$html = new HTML();
$json = new JSON();

// Let's print some HTML.
$html->printSomething();

// Let's print some JSON.
$json->printSomething();

If we then later needed XML we could simply do the following:

class XML implements Output
{
    public function printSomething()
    {
        // Print XML.
        echo '<greeting>Hello world</greeting>';
    }
}

// And then..

if ($use_json) {
    $json = new JSON();
    $json->printSomething();
} else {
    $xml = new XML();
    $xml->printSomething();
}

What's important to understand is that only a programming language that fully supports these mechanisms in some way or another is a "true object oriented programming language".

When a programming language has built-in support for the object oriented paradigm it becomes easy to make use of this paradigm, while it becomes more difficult (or impossible) in a language without such support.

Other terminology exists related to the object oriented paradigm, but most of it consists of synonyms for, or refinements of, the above principles.

Inheritance is often termed "abstraction", because it serves as a kind of "abstracting away" of code coming from the "parent" classes. The same applies to interfaces.

Another kind of abstraction is the abstraction a pure procedural language provides with a function, in which the code inside the function is "hidden" from the outside world. Some layer of abstraction is thus achieved in all programming languages, even those that have no support for or relation to the object oriented paradigm whatsoever.

Now that we have been dealing with the object oriented paradigm, perhaps we should take a brief look at the functional paradigm too.

In object oriented programming everything is viewed as an object. An object is a collection of properties and methods that act on those properties. In both procedural and object oriented programming variables have an "identity" and a "state". The identity is merely the name of the variable; names serve to easily differentiate between variables. The "state" refers to the value of the variable held in memory. When the state can change it's called "mutable state"; when it cannot change it is called "immutable state".

In functional programming everything is literally a function. For example an array or a list is also a function or a group of functions. In functional programming you have no data represented by variables; data is not assigned to variables. Some values may be defined and assigned, but in most cases it is functions that are assigned to "variables". "Variables" in functional programming are immutable, meaning they must not change value.

Even though most functional programming languages don't enforce immutability, just as most object oriented languages don't enforce the use of objects, if you change a value after an assignment you are not doing pure functional programming any more.

The important aspect of functional programming is that because no state exists and no assignment is done, functions do not have any "side effects", which means that if you call a function with the same parameters again and again you will always get the same result.
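Here is a minimal sketch in Go of that difference (the function names are made up): the pure function depends only on its parameters, while the impure one depends on, and changes, state outside itself.

package main

import "fmt"

// add is "pure": the result depends only on its parameters,
// so add(2, 3) is 5 every single time, with no side effects.
func add(a, b int) int {
    return a + b
}

// counter is mutable state living outside any function.
var counter int

// addAndCount is "impure": it changes state outside itself, so calling it
// repeatedly with the same arguments gives different results.
func addAndCount(a, b int) int {
    counter++
    return a + b + counter
}

func main() {
    fmt.Println(add(2, 3), add(2, 3))                 // 5 5
    fmt.Println(addAndCount(2, 3), addAndCount(2, 3)) // 6 7
}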

This is generally considered a huge advantage over object oriented programming because it greatly reduces the complexity in multi-threaded applications. It also makes it very easy to use automated tests to test code. And last, but not least, it helps eliminate bugs.

Since we need to express everything in functions, we need to be able to pass functions to other functions, and we need to return functions rather than values. As such, functional programming has "invented" the concept of higher-order functions.

This basically means that a function can be assigned to a "variable", sent as a parameter to another function, and returned as a result of a function.
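Here is a minimal sketch in Go (which has first-class functions) showing all three at once: a function assigned to a variable, a function passed as a parameter, and a function returned as a result. The apply and makeAdder names are invented for the example.

package main

import "fmt"

// apply takes a function as a parameter and calls it.
func apply(f func(int) int, x int) int {
    return f(x)
}

// makeAdder returns a new function as its result.
func makeAdder(n int) func(int) int {
    return func(x int) int {
        return x + n
    }
}

func main() {
    // A function assigned to a variable.
    double := func(x int) int { return x * 2 }

    fmt.Println(apply(double, 5)) // 10

    addTen := makeAdder(10)
    fmt.Println(addTen(5)) // 15
}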

Patkos Csaba in his Functional Programming in PHP states that there are three guidelines to functional programming:

Let's take a look at a simple example using JavaScript.

We have an array of animals belonging to different species. We want to iterate over this array and pull all the dogs out.

In a pure procedural approach we would normally do something like this:

var animals = [
    { name: 'Fluffykins',   species: 'rabbit' },
    { name: 'Caro',         species: 'dog' },
    { name: 'Hamilton',     species: 'dog' },
    { name: 'Harold',       species: 'fish' },
    { name: 'Ursula',       species: 'cat' },
    { name: 'Jimmie',       species: 'fish' },
];

var dogs = [];
var animals_len = animals.length;
for (var i = 0; i < animals_len; i++) {
    if (animals[i].species === 'dog') {
        dogs.push(animals[i]);
    }
}

Now, let's change that into a functional approach instead:

var animals = [
    { name: 'Fluffykins',   species: 'rabbit' },
    { name: 'Caro',         species: 'dog' },
    { name: 'Hamilton',     species: 'dog' },
    { name: 'Harold',       species: 'fish' },
    { name: 'Ursula',       species: 'cat' },
    { name: 'Jimmie',       species: 'fish' },
];

var dogs = animals.filter(function(animal) {
    return animal.species === 'dog';
});

In the functional example we have removed the for loop and are only using functions. It might be difficult to see the advantages of functional programming from such a small example, but the code in the functional version is much cleaner and much safer, since we're no longer manually managing loop counters and mutable state.

Truly object oriented

Did you know that psychology is a part of the object oriented paradigm? And did you know that you actually cannot do true object oriented programming in Java?

If you really want to understand the object oriented paradigm, then watch this YouTube video with James Coplien who is a writer, lecturer, and researcher in the field of computer science.

What really matters

When dealing with programming languages and programming paradigms what really matters are:

The first three options intermingle in some areas, but we generally need all four in order to develop efficient, secure, and maintainable code with relative ease.

Unless we're dealing with only very small applications multiparadigm is often the best approach if supported by the programming language.

The most common approach is to develop entire applications in the same programming language using only one particular paradigm, even when the programming language supports several paradigms.

In PHP web application development, for example, this is the "modern approach" most frameworks take. Even when the underlying architecture that the web application is built upon is stateless by nature, the framework couples everything together into one single "blob" of intermingled classes using the so-called front-controller pattern, and even the tiniest amount of bootstrapping - which is most efficiently handled using a simple structured approach - gets wrapped into huge sets of intermingled classes in order to force the entire application into an object oriented paradigm.

This approach has become very popular partly because it serves as an easy way to avoid code duplication, but mainly because it has been propagated by object oriented "fan boys" as the only correct approach.

When a web application is nothing more than a small to middle-sized application no problems are noticed with such an approach, but once the application requirements grow and the need for efficiency rises, such a design pushes the application to its knees, and the front-controller pattern and the object oriented approach become huge bottlenecks in the web application.

As with everything else in life, we need to try to keep our solutions simple, easy to understand, and efficient to implement, yet without sacrificing crucial elements. This generally means that we must strive to find the right balance rather than being extremist in the implementation of one single "way of doing things".

Some notes about the Go programming language

In light of all of the above I think it is useful to share some thoughts about the Go programming language.

The Go programming language was developed at Google in 2007 by Robert Griesemer, Rob Pike, and Ken Thompson. The language was announced and released as Open Source in 2009 and was already in use at Google at that time.

Ken Thompson designed and implemented the original Unix operating system. He also invented the B programming language (the direct predecessor to the C programming language), and he was one of the creators and early developers of the Plan 9 operating system.

Rob Pike is best known for his work at Bell Labs, where he was a member of the Unix team. He was also involved in the creation of the Plan 9 and Inferno operating systems, as well as the Limbo programming language.

Pike, with Brian Kernighan, is the co-author of The Practice of Programming and The Unix Programming Environment. With Ken Thompson he is the co-creator of UTF-8.

Robert Griesemer has worked on code generation for Google's V8 JavaScript engine and Chubby, a distributed lock manager for Google's GFS distributed filesystem. He also worked on the design and implementation of the domain-specific language Sawzall, the Java HotSpot virtual machine, and the Strongtalk system. He has also written a vectorizing compiler for the Cray Y-MP and an interpreter for APL.

The Go programming language was thus designed by some of the most experienced and skillful people in the world.

The first thing I noticed when I decided to take a look at the Go programming language was that it lacked much of the so-called "modern" support for different paradigms that several other popular programming languages seem to keep getting stuffed with.

As an example, both PHP and C++ have been extended with new object oriented functionality in recent years, and these languages keep growing bigger and bigger.

Contrary to this the Go programming language has been stripped of all these things on purpose and with very good reason.

What makes the Go programming language a brilliant feat of engineering is the fact that the designers have had very negative reactions to ideas from other programming languages that they deem "academic", "theoretical", and "non productive".

Some people think that the Go programming language ignores all the basic advancements in programming language design from the last 40 years. They consider it a language stuck in the 70's that has been developed for mediocre developers. But this is very wrong.

Every single piece of functionality has been very carefully planned, thought out, and implemented in the most efficient and useful manner. Go was, after all, designed to solve Google's problems.

There is no borrowing, no pattern matching, no pure functional programming, no pure object oriented programming, no immutable variables, no option types, no classes, no generics (Go 2 may be getting generics), and a lot of other stuff is also missing, but Go still supports aspects of procedural, functional, and object oriented programming.

The Go philosophy of simplicity and real pragmatism is the extreme opposite of the added complexity that many other programming languages drown in. The Go programming language as such is just as simple as C, yet at the same time much more expressive, as much of the so-called "modern" functionality can be implemented using very simple approaches.

To me the Go programming language is one of the best designed programming languages since C and Erlang! It is so well designed that even though it is a compiled language, it becomes very useful and productive at tasks normally written in dynamic scripting languages such as Python, Ruby, or PHP.

Recommended reading: