Merriam-Webster’s dictionary kindly provides us with a thoughtful and well-rounded definition of the word paradigm. It says that a paradigm is “a philosophical and theoretical framework of a scientific school or discipline within which theories, laws, and generalizations and the experiments performed in support of them are formulated.” Such a definition sets the stage for this discussion. In this and the following papers, I intend to report on as many such “frameworks” as can reasonably be researched, discussing their areas of application and reasons for existence. The overall goal is that, by understanding how these various schools of thought are organized, one might better assess new problems as they arise and decide whether a solution already exists.
With that being said, we begin with an overview of two of the most important paradigms, “declarative” and “imperative.” One finds the roots of this nomenclature in grammar. Something “imperative” gives a command, as opposed to making a statement or describing something. The latter is the role of the “declarative” statement, which provides a fact or conveys information. In computer programming, this difference boils down to something very simple. In declarative programming, one merely states what should be done. In imperative programming, one states how to do what should be done. To analogize the situation, it would be akin to telling your spouse, “I am going to bake you some cookies,” as opposed to reciting the precise list of steps that constitute your mother’s chocolate-chip-and-peanut-butter delights.
This idea is old. Functional languages predate modern notions of computers, in that one can trace their history right back to Alonzo Church’s lambda calculus, developed in the 1930s. In the 1950s, the lambda calculus found an implementation in the language LISP. On the other hand, imperative languages are as old as computers themselves, in the sense that computers only work in an imperative manner. Assembly languages, almost directly comparable to the machine code which computers run, are, for the most part, purely imperative. Another early language, FORTRAN, was released in 1957 and very closely resembled the imperative assembly languages.
However, the differences these paradigms represent for the software developer are quite profound. In their book Structure and Interpretation of Computer Programs, the authors state: “Mathematics [the declarative point of view] provides a framework for dealing with notions of ‘what is.’ Computation [the imperative point of view] provides a framework for dealing precisely with notions of ‘how to.’” This difference is the crux of the matter. When we write the expression \(2 + 2 = 4\), we rarely bother thinking about how summation works. We just do it (or we use a calculator). The same goes for division and many other mathematical operations. Once we know how to do them, we often disregard the imperative aspect and treat them declaratively. However, because computers work imperatively, we are forced to define such operations in terms of how they are accomplished. In this case, \(x + y = x + \underbrace{1 + 1 + \cdots + 1}_{y \text{ times}}\). Below, we have listed the same operation (summation) written in two different forms (using Python 3); the first is declarative, the second imperative.
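A sketch of the two forms, assuming a recursive definition for the declarative version and a mutating loop for the imperative one (the function names are illustrative):

```python
# Declarative (functional) summation: a recursive definition with no
# mutable state -- x + y is x when y is 0, else (x + 1) + (y - 1).
def sum_declarative(x, y):
    return x if y == 0 else sum_declarative(x + 1, y - 1)

# Imperative summation: explicit state (x and y) that is changed in
# place, with the repetition spelled out as a loop.
def sum_imperative(x, y):
    while y > 0:
        x = x + 1
        y = y - 1
    return x

print(sum_declarative(2, 2))  # 4
print(sum_imperative(2, 2))   # 4
```

The declarative version reads like the mathematical statement of what summation *is*; the imperative version spells out *how* to carry it out, one increment at a time.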
As you can see, the imperative version is more verbose. Imperative programming assumes that you have some sort of “state” which can be changed, and those changes are passed on and made accessible to later instructions in the form of the environment. Furthermore, if instructions need repeating, the repetition must be explicitly stated. Conversely, our declarative example is written in a form of declarative programming known as “functional.” There is no state. One can read it almost as easily as a mathematical definition of summation.
However, as the examples above suggest, the functional and imperative paradigms can coexist in the same place and program; in this case, Python was used for both. This raises a question: since modern languages allow a variety of approaches to solving problems, when would one use the declarative approach, and when the imperative?
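To make this coexistence concrete, here is a small illustrative sketch (the names and data are my own, not from the original listings) in which both styles compute the same result side by side in one Python program:

```python
data = [1, 2, 3, 4]

# Declarative style: state *what* is wanted -- the sum of the squares.
declarative_total = sum(n * n for n in data)

# Imperative style: state *how* -- initialize state, loop, and mutate.
imperative_total = 0
for n in data:
    imperative_total += n * n

print(declarative_total, imperative_total)  # 30 30
```

Neither version is “wrong”; the language leaves the choice of approach to the programmer.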
Obviously, imperative instructions and algorithms are required when working at a low enough level. Even if something is written in a high-level functional language, it looks much the same by the time it reaches the assembler. At the same time, any computation more complex than multiplication (if not supported in hardware) would be tedious to write if all we had to work with were basic iterative operations. Therefore, it is the conclusion of this writer that imperative and declarative methods must be used together, at some point and in some combination, to create any moderately complex system. The main tool for combining them is abstraction. Perhaps an operation (like our sum function) is created in an imperative manner because there are no more primitive operations to build on; we are at the bottom of the abstraction tower. Once the operation has been created, however, why not refer to it and use it as if it were a basic operation, in a declarative manner? (This has been the thinking of many language designers, those of LISP included.)
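That climb up the abstraction tower can be sketched in a few lines. In this illustrative example (the names are hypothetical, not from the original text), an addition operation is first built imperatively from increments, and is then treated as a primitive and used declaratively to fold a sequence into a total:

```python
from functools import reduce

# Bottom of the tower: addition defined imperatively, as repeated
# increments on mutable state.
def add(x, y):
    while y > 0:
        x = x + 1
        y = y - 1
    return x

# Higher up: `add` is now treated as a basic operation and used
# declaratively -- we state *what* we want (a fold of the list),
# not how each addition is carried out.
total = reduce(add, [1, 2, 3, 4], 0)
print(total)  # 10
```

Once `add` exists, the code that uses it no longer cares that it was built imperatively; the abstraction boundary hides the “how.”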
Whatever paradigm one decides to make use of, one has a responsibility to know what one is doing and why. Decide, deliberately, whether you should guide your process in terms of how to do something, or in terms of what it should do.