What Is A Linker And Why Do We Need It?

Beginning programmers ask this all the time, and in my circles I’m usually the one to answer it, though I’m often met with blank faces in response. It became clear that we need a good, solid article explaining exactly what a linker does when I saw a whole section on Wikipedia about the “debate” over whether we even need linkers.

If you are one of the growing number of people wondering what linkers are and why we need them, or just a curious user, then read on – this is for you.

The Basics

For both the casual reader and those who may not have a grasp of the entire code-to-binary process, here’s one of my famous Giant-Colorized-And-Layered-Diagrams ™ providing an overview:

An overview of the compilation process


Jokes aside, that’s the entire process. The compiler first runs your code through the preprocessor (cpp) to expand macros and preprocessor definitions, then lexes and parses the result. After parsing, all the compiler does is generate machine code (the binary instructions the CPU actually executes, often via an intermediate assembly step) and, optionally, optimize it.

(For the curious, most compiler errors occur during the parsing phase – a forgotten semicolon, for example.)

Where The Linker Comes In

What many don’t realize is that this is all the compiler does. The machine code generated by the compiler sits in separate .o object files named after the input files, each still using symbols instead of precise memory locations for things like variables and function names.

These object files are far from executable, but this is also where the linker comes in.

The linker, in most cases called directly by the compiler after it has done its job, first goes through the process labeled “relocation” in the above diagram. This involves putting the machine code from the files in order according to how they were presented to the linker by the compiler.

This “order” is the basis for the executable format, e.g. ELF or PE/COFF (the Windows EXE format). The operating system needs a standard executable format so it knows where to find the code (text) section, the data section, and the other parts of the resulting executable, so it can be loaded and executed properly.

While arranging the resulting executable(s), the linker also performs symbol resolution: the process of replacing each variable or function name with an exact memory address, so the program can actually find things in memory. Once this completes without errors, the resulting executable(s) and libraries are ready for distribution or execution.

Virtual Memory and the Linker

But wait, how does the linker assign an exact address to a variable without knowing whether or not that memory is already in use by the operating system at runtime?

It’s simple: magic.

…By magic, I mean the operating system “fools” the process/application into thinking it starts at a particular memory location, when in reality it doesn’t. This OS-specific starting address is also used by the linker at link time as the organizational basis of the application.

Here’s the thing: just because the application (thanks to the linker) thinks it starts at memory location 0x0000ffff doesn’t mean it’s at that location in physical memory. The process sees it that way, but the operating system alone knows that this application is one of many pretending to be using this base address.

The OS kernel maps the application’s virtual memory (i.e. the fake base address and onwards) into physical memory via its internal memory management. Every application is therefore given the impression that it has the entire address space (all 4 gigabytes of it, on a 32-bit system) to itself, when really the kernel’s memory manager knows all along that the real address is somewhere in physical RAM – or paged out to the hard disk to conserve precious physical memory.

Still with me here? In a nutshell, the kernel simply makes the app believe it has the same starting address every time.

This is in contrast to older systems, where a runtime linker would map symbols to physical memory every time a program was executed, to keep programs from overlapping. If this sounds slow and cumbersome, it indeed was.

Virtual memory is the preferred method both for linking consistency between applications and for better memory management. But a runtime linker still has its place:

Dynamic Linking

Ah, now it really gets fun. Sometimes, an application likes to put functions (or even static variables) into separate files, and execute them at runtime – even after the application has started executing!

This is called dynamic linking, represented by the D-Link sublayer in the above diagram. It is the process of an application loading a precompiled binary executable/library into its memory address space (the fake one, remember?) and resolving existing symbols to the newly loaded memory addresses.

If you understood the rest of the process earlier, what happens next is not much different, except that the application is already loaded and executing when it does this. In fact, the primary binary’s own code is what loads the file in the first place – the linker put that code in there so it could finish the job later, at runtime.

In fact, most operating systems, C compilers and libraries do this these days to save code space: instead of every single application containing the code for printf() within its on-disk executable, this code resides in a separate shared library file, with the application containing just the minimal linker code to load it at runtime.

Most compilers and linkers (such as MSVC and GCC) do this transparently, only turning the feature off when told to link statically. Dynamic linking saves disk space by consolidating shared code into one location, and it makes future bugfixes and updates to that code easier – patch the library once rather than updating each individual application.

So while a linker may not seem necessary to some programmers, an understanding of virtual memory and shared libraries is key to grasping what a linker is for. Without it, nothing in an executable could be loaded into memory or reference memory locations, and the application would just be a bunch of format-less object files (.o).

OS kernels have linkers too, but their job is more complex, since the loaded code references the kernel’s own memory address space and executes directly within it (possibly crashing the whole system in the process). This internal OS linker is used for loading and executing device drivers, which are very loosely formatted libraries with specially exported symbols for execution.

So I hope this article clears the air about why we need linkers. You would think modern computer science professors would explain this, but instead the topic is apparently reserved for operating systems classes focused on memory management, leaving computer science students baffled by this “magic”.

Applied to South Park terminology:

  1. Write code
  2. Compile it
  3. ????
  4. Profit!

But there is no magic – just a logical process that lets applications execute in an organized address space. Some still consider it magic nonetheless…


Anthony Cargile is the founder and former editor-in-chief of The Coffee Desk. He is currently employed by a private company as an e-commerce web designer, and has extensive experience in many programming languages, networking technologies and operating system theory and design. He currently develops for several open source projects in his free time from school and work.
