
C compiler, Device Drivers and Useful Developing Techniques

  • 22 min read
  • 17 Mar 2017


In this article by Rodolfo Giometti, author of the book GNU/Linux Rapid Embedded Programming, we're going to focus our attention on the C compiler (with its counterpart, the cross-compiler), on when we have to (or can choose to) use native or cross-compilation, and on the differences between them.



Then we'll cover some kernel topics used later in this article (configuration, recompilation and the device tree) and look a bit deeper at device drivers: how they can be compiled and how they can be packaged into a kernel module (that is, kernel code that can be loaded at runtime). We'll present different kinds of computer peripherals and, for each of them, we'll try to explain how the corresponding device driver works, starting from the compilation stage, through the configuration, to the final usage. As an example we'll implement a very simple driver in order to give the reader some interesting points of view and some simple advice about kernel programming (which is not covered by this article!).

We're also going to present the root filesystem's internals and spend some words on a particular root filesystem that can be very useful during the early development stages: the Network File System.

As a final step we'll propose the usage of an emulator in order to execute a complete target machine's Debian distribution on a host PC.

This article is still part of the introductory material: experienced developers who already know these topics well may skip it, but the author's suggestion remains the same, that is, to read it anyway in order to discover which development tools will be used and, maybe, some new techniques to manage their programs.

The C compiler


The C compiler is a program that translates the C language into a binary format that the CPU can understand and execute. This is the most basic way (and the most powerful one) to develop programs on a GNU/Linux system.

Despite this fact, most developers prefer other high-level languages over C because the C language has no garbage collection, no object-oriented programming and other perceived shortcomings, giving up part of the execution speed that a C program offers; but if we have to recompile the kernel (the Linux kernel is written in C, plus a few parts in assembly), develop a device driver or write high-performance applications, then the C language is a must-have.

We can have both a compiler and a cross-compiler, and up to now we've already used the cross-compiler several times to recompile the kernel and the bootloaders; however we can decide to use a native compiler too. In fact, native compilation may be easier but, in most cases, it is very time consuming, and that's why it's really important to know the pros and cons.

Programs for embedded systems are traditionally written and compiled using a cross-compiler for that architecture on a host PC. That is, we use a compiler that can generate code for a foreign machine architecture, meaning a different CPU instruction set from that of the compiler host.

Native & foreign machine architecture


For example, the developer kits shown in this article are ARM machines while (most probably) our host machine is an x86 (that is, a normal PC), so if we try to compile a C program on our host machine the generated code cannot be used on an ARM machine, and vice versa.

Let's verify it! Here is the classic Hello World program:

#include <stdio.h>

int main()
{
        printf("Hello Worldn");

        return 0;
}


Now we compile it on the host machine using the following command:

$ make CFLAGS="-Wall -O2" helloworld
cc -Wall -O2    helloworld.c   -o helloworld

The careful reader should notice here that we've used the make command instead of the usual cc. This is a perfectly equivalent way to invoke the compiler because, even without a Makefile, make already knows how to compile a C program.
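
If we are curious about the exact command that make runs, GNU make's -n (dry run) option prints the commands without executing them; on a typical setup we would expect something like this (the exact output may differ):

$ make -n CFLAGS="-Wall -O2" helloworld
cc -Wall -O2    helloworld.c   -o helloworld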


We can verify that this file is for the x86 (that is the PC) platform by using the file command:

$ file helloworld
helloworld: ELF 64-bit LSB  executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.24, BuildID[sha1]=0f0db5e65e1cd09957ad06a7c1b7771d949dfc84, not stripped

Note that the output may vary according to the reader's host machine platform.


Now we can just copy the program onto one of the developer kits (for instance the BeagleBone Black) and try to execute it:

root@bbb:~# ./helloworld
-bash: ./helloworld: cannot execute binary file


As we expected the system refuses to execute code generated for a different architecture!

On the other hand, if we use a cross-compiler for this specific CPU architecture, the program will run like a charm! Let's verify this by recompiling the code, this time specifying that we wish to use the cross-compiler. So delete the previously generated x86 executable (just in case) with the rm helloworld command and then recompile it using the cross-compiler:

$ make CC=arm-linux-gnueabihf-gcc CFLAGS="-Wall -O2" helloworld
arm-linux-gnueabihf-gcc -Wall -O2    helloworld.c   -o helloworld

Note that the cross-compiler's filename has a special meaning: the form is <architecture>-<platform>-<binary-format>-<tool-name>. So the filename arm-linux-gnueabihf-gcc means: ARM architecture, Linux platform, gnueabihf (GNU EABI, hard-float) binary format, and gcc (GNU C Compiler) tool.
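
If we want to double-check which architecture a given compiler targets, GCC's -dumpmachine option prints its target triplet; the x86 output below is just what we would expect on a typical 64-bit PC and may differ on the reader's host:

$ arm-linux-gnueabihf-gcc -dumpmachine
arm-linux-gnueabihf
$ gcc -dumpmachine
x86_64-linux-gnu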


Now we use the file command again to see if the code is indeed generated for the ARM architecture:

$ file helloworld
helloworld: ELF 32-bit LSB  executable, ARM, EABI5 version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=31251570b8a17803b0e0db01fb394a6394de8d2d, not stripped


Now if we transfer the file to the BeagleBone Black as before and try to execute it, we get:

root@bbb:~# ./helloworld
Hello World


Therefore we see the cross-compiler ensures that the generated code is compatible with the architecture we are executing it on.

In reality, in order to have a perfectly functional binary image, we have to make sure that the library versions, the header files (including the ones related to the kernel) and the cross-compiler options match the target exactly or are, at least, compatible. For instance, we cannot run code cross-compiled against glibc on a system using, for example, musl libc (or it may run in an unpredictable manner).

In this case we have perfectly compatible libraries and compilers but, in general, the embedded developer should know exactly what he/she is doing. A common trick to avoid compatibility problems is to use static compilation, but in that case we get huge binary files.
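
For instance, a statically linked version of our Hello World can be obtained simply by adding the -static flag to the compiler options; the file output below is abbreviated and is just what we would expect, while the resulting binary grows considerably in size:

$ make CC=arm-linux-gnueabihf-gcc CFLAGS="-Wall -O2 -static" helloworld
arm-linux-gnueabihf-gcc -Wall -O2 -static    helloworld.c   -o helloworld
$ file helloworld
helloworld: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), statically linked, ...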


Now the question is: when should we use the compiler and when the cross-compiler?

We should compile on an embedded system because:

  • We can (see below why).
  • There are no compatibility issues, as all the target libraries are already available. Cross-compilation becomes hell when we need all the libraries used by the project in the ARM format on the host PC: we not only have to cross-compile the program but also its dependencies, and if the same dependency versions are not installed on the embedded system's rootfs, good luck with troubleshooting!
  • It's easy and quick.


We should cross-compile because:

  • We are working on a large codebase and we don't want to waste too much time compiling the program on the target, which may take from several minutes to several hours (or may even be impossible). This reason alone might be strong enough to outweigh the reasons in favor of compiling on the embedded system itself.
  • PCs nowadays have multiple cores so the compiler can process more files simultaneously.
  • We are building a full Linux system from scratch.


In any case, below we will show an example of both native compilation and cross-compilation of a software package, so the reader may better understand the differences between them.

Compiling a C program


As a first step, let's see how we can compile a C program. To keep it simple we'll start by compiling a user-space program; then, in the next sections, we're going to compile some kernel-space code.

Knowing how to compile a C program can be useful because it may happen that a specific tool (most probably written in C) is missing from our distribution, or is present but outdated. In both cases we need to recompile it!

To show the differences between a native compilation and a cross-compilation we will explain both methods. However, a word of caution for the reader: this guide is not exhaustive at all! In fact, the cross-compilation steps may vary according to the software package we are going to cross-compile.

The package we are going to use is the PicoC interpreter. Every Real Programmer(TM) knows the C compiler, which is normally used to translate a C program into machine language, but (maybe) not all of them know that a C interpreter exists too!

Actually, there are many C interpreters, but we focus our attention on PicoC due to the simplicity of cross-compiling it.


As we already know, an interpreter is a program that converts the source code into executable code on the fly and does not need to parse the complete file and generate code at once.

This is quite useful when we need a flexible way to write brief programs to solve easy tasks. In fact, to fix a bug in the code and/or change the program's behavior, we simply have to change the program source and then re-execute it, without any compilation at all. We just need an editor to change our code!

For instance, if we wish to read some bytes from a file we can do it by using a standard C program, but for this easy task we can also write a script for an interpreter. Which interpreter to choose is up to the developer and, since we are C programmers, the choice is quite obvious. That's why we have decided to use PicoC.

Note that the PicoC tool is quite far from being able to interpret all C programs! In fact this tool implements a fraction of the features of a standard C compiler; however it can be used for several common and easy tasks.

Please consider PicoC an educational tool and avoid using it in a production environment!

The native compilation


Well, as a first step, we need to download the PicoC source code from its repository at http://github.com/zsaleeba/picoc.git onto our embedded system. This time we decided to use the BeagleBone Black, and the command is as follows:

root@bbb:~# git clone http://github.com/zsaleeba/picoc.git


When finished we can start compiling the PicoC source code by using:

root@bbb:~# cd picoc/
root@bbb:~/picoc# make

Note that if we get the error below during the compilation we can safely ignore it:

/bin/sh: 1: svnversion: not found


However during the compilation we get:

platform/platform_unix.c:5:31: fatal error: readline/readline.h: No such file or
 directory
 #include <readline/readline.h>
                               ^
compilation terminated.
<builtin>: recipe for target 'platform/platform_unix.o' failed
make: *** [platform/platform_unix.o] Error 1


Bad news, we have got an error! This is because the readline library is missing; hence we need to install it to keep going. In order to discover which package holds the readline library, we can use the following command:

root@bbb:~# apt-cache search readline


The command output is quite long, but if we carefully look at it we can see the following lines:

libreadline5 - GNU readline and history libraries, run-time libraries
libreadline5-dbg - GNU readline and history libraries, debugging libraries
libreadline-dev - GNU readline and history libraries, development files
libreadline6 - GNU readline and history libraries, run-time libraries
libreadline6-dbg - GNU readline and history libraries, debugging libraries
libreadline6-dev - GNU readline and history libraries, development files


This is exactly what we need to know! The required package is named libreadline-dev.

In the Debian distribution, all library packages are prefixed with the lib string, while the -dev suffix marks the development version of a library package. Note also that we chose the package libreadline-dev intentionally, leaving the system free to install either version 5 or 6 of the library.

The development version of a library package holds all the files which allow the developer to compile his/her software against the library itself, and/or some documentation about the library functions.

For instance, in the development version of the readline library package (that is, in the package libreadline6-dev) we can find the header and object files needed by the compiler. We can see these files using the following command:

root@bbb:~# dpkg -L libreadline6-dev | egrep '.(so|h)'
/usr/include/readline/rltypedefs.h
/usr/include/readline/readline.h
/usr/include/readline/history.h
/usr/include/readline/keymaps.h
/usr/include/readline/rlconf.h
/usr/include/readline/tilde.h
/usr/include/readline/rlstdc.h
/usr/include/readline/chardefs.h
/usr/lib/arm-linux-gnueabihf/libreadline.so
/usr/lib/arm-linux-gnueabihf/libhistory.so


So let's install it:

root@bbb:~# aptitude install libreadline-dev


When finished, we can relaunch the make command to finally compile our new C interpreter:

root@bbb:~/picoc# make
gcc -Wall -pedantic -g -DUNIX_HOST -DVER="`svnversion -n`"   -c -o clibrary.o clibrary.c
...
gcc -Wall -pedantic -g -DUNIX_HOST -DVER="`svnversion -n`" -o picoc picoc.o table.o lex.o parse.o expression.o heap.o type.o variable.o clibrary.o platform.o include.o debug.o platform/platform_unix.o platform/library_unix.o cstdlib/stdio.o cstdlib/math.o cstdlib/string.o cstdlib/stdlib.o cstdlib/time.o cstdlib/errno.o cstdlib/ctype.o cstdlib/stdbool.o cstdlib/unistd.o -lm -lreadline


Well now the tool is successfully compiled as expected!

To test it we can again use the standard Hello World program above, with a little modification: the main() function is not defined exactly as before, because PicoC returns an error if we use the typical function definition. Here is the code:

#include <stdio.h>

int main()
{
        printf("Hello Worldn");

        return 0;
}


Now we can directly execute it (that is without compiling it) by using our new C interpreter:

root@bbb:~/picoc# ./picoc helloworld.c
Hello World


An interesting feature of PicoC is that it can execute C source files like scripts; that is, we don't need to specify a main() function as C requires, and the instructions are executed one by one from the beginning of the file, as in a normal scripting language.

Just to show it, we can use the following script, which implements the Hello World program as a C-like script (note that the main() function is not defined!):

printf("Hello World!n");
return 0;


If we put the above code into the file helloworld.picoc we can execute it by using:

root@bbb:~/picoc# ./picoc -s helloworld.picoc
Hello World!


Note that this time we added the -s option to the command line in order to instruct the PicoC interpreter that we wish to use its scripting behavior.

The cross-compilation


Now let's try to cross-compile the PicoC interpreter on the host system. However, before continuing, we have to point out that this is just an example of a possible cross-compilation, useful to show a quick and dirty way to recompile a program when native compilation is not possible. As already reported above, cross-compilation works perfectly for the bootloader and the kernel, while for user-space applications we must ensure that all the libraries (and header files) used by the cross-compiler are perfectly compatible with the ones present on the target machine; otherwise the program may not work at all! In our case everything is perfectly compatible, so we can go further.

As before, we need to download PicoC's source code using the same git command as above. Then we have to enter the following commands in the newly created picoc directory:

$ cd picoc/
$ make CC=arm-linux-gnueabihf-gcc
arm-linux-gnueabihf-gcc -Wall -pedantic -g -DUNIX_HOST -DVER="`svnversion -n`"   -c -o picoc.o picoc.c
...
platform/platform_unix.c:5:31: fatal error: readline/readline.h: No such file or directory
compilation terminated.
<builtin>: recipe for target 'platform/platform_unix.o' failed
make: *** [platform/platform_unix.o] Error 1

We specify the CC=arm-linux-gnueabihf-gcc command-line option to force the cross-compilation. However, as already stated before, the cross-compilation commands may vary according to the build method used by each software package.
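
For instance, packages based on the GNU autotools usually select the cross-compiler through the --host option of their configure script rather than through a CC= override; the following is just a sketch, since the exact option names depend on the package:

$ ./configure --host=arm-linux-gnueabihf
$ make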


As before, the compilation stops because the readline library is missing; however, this time we cannot install it as before, since we need the ARM version (specifically the armhf version) of this library and our host system is a normal PC!

Actually, a way to install a foreign-architecture package into a Debian/Ubuntu distribution exists, but it's not a trivial task nor a topic of this article. The curious reader may take a look at Debian/Ubuntu Multiarch at https://help.ubuntu.com/community/MultiArch.


Now we have to resolve this issue and we have two possibilities:

  • We can try to find a way to install the missing package, or
  • We can try to find a way to continue the compilation without it.


The former method is quite complex, since the readline library has in turn other dependencies, and we may spend a lot of time trying to compile them all, so let's try the latter option.

Knowing that the readline library is just used to implement powerful interactive features (such as recalling a previous command line to re-edit it, etc.) and since we are not interested in the interactive usage of this interpreter, we can hope to avoid using it. So, looking carefully at the code, we see that the define USE_READLINE exists, and changing the code as shown below should resolve the issue, allowing us to compile the tool without readline support:

$ git diff
diff --git a/Makefile b/Makefile
index 6e01a17..c24d09d 100644
--- a/Makefile
+++ b/Makefile
@@ -1,6 +1,6 @@
 CC=gcc
 CFLAGS=-Wall -pedantic -g -DUNIX_HOST -DVER="`svnversion -n`"
-LIBS=-lm -lreadline
+LIBS=-lm
 
 TARGET = picoc
 SRCS   = picoc.c table.c lex.c parse.c expression.c heap.c type.c 
diff --git a/platform.h b/platform.h
index 2d7c8eb..c0b3a9a 100644
--- a/platform.h
+++ b/platform.h
@@ -49,7 +49,6 @@
 # ifndef NO_FP
 #  include <math.h>
 #  define PICOC_MATH_LIBRARY
-#  define USE_READLINE
 #  undef BIG_ENDIAN
 #  if defined(__powerpc__) || defined(__hppa__) || defined(__sparc__)
 #   define BIG_ENDIAN


The above output is in the unified diff format; the change means that in the file Makefile the option -lreadline must be removed from the LIBS variable, and that in the file platform.h the define USE_READLINE must be removed.
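
If we save the diff above into a file, say readline.patch (a hypothetical name), we can apply it to a pristine source tree with one of the following commands, or we can simply edit the two files by hand:

$ git apply readline.patch
$ patch -p1 < readline.patch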

After all the changes are in place we can try to recompile the package with the same command as before:

$ make CC=arm-linux-gnueabihf-gcc
arm-linux-gnueabihf-gcc -Wall -pedantic -g -DUNIX_HOST -DVER="`svnversion -n`"   -c -o table.o table.c
...
arm-linux-gnueabihf-gcc -Wall -pedantic -g -DUNIX_HOST -DVER="`svnversion -n`" -o picoc picoc.o table.o lex.o parse.o expression.o heap.o type.o variable.o clibrary.o platform.o include.o debug.o platform/platform_unix.o platform/library_unix.o cstdlib/stdio.o cstdlib/math.o cstdlib/string.o cstdlib/stdlib.o cstdlib/time.o cstdlib/errno.o cstdlib/ctype.o cstdlib/stdbool.o cstdlib/unistd.o -lm


Great! We did it! Now, just to verify that everything is working correctly, we can simply copy the picoc file onto our BeagleBone Black and test it as before.

Compiling a kernel module


As a special example of cross-compilation, we'll take a look at a very simple piece of code which implements a dummy module for the Linux kernel (the code does nothing but print some messages on the console), and we'll try to cross-compile it.

Let's consider this following kernel C code of the dummy module:

#include <linux/module.h>
#include <linux/init.h>

/* This is the function executed during the module loading */
static int dummy_module_init(void)
{
    printk("dummy_module loaded!n");
    return 0;
}

/* This is the function executed during the module unloading */
static void dummy_module_exit(void)
{
    printk("dummy_module unloaded!n");
    return;
}

module_init(dummy_module_init);
module_exit(dummy_module_exit);

MODULE_AUTHOR("Rodolfo Giometti <[email protected]>");
MODULE_LICENSE("GPL");
MODULE_VERSION("1.0.0");


Apart from some defines related to the kernel tree, the file holds two main functions, dummy_module_init() and dummy_module_exit(), and some special definitions, in particular module_init() and module_exit(), which mark the two functions as the entry and exit functions of the current module (that is, the functions called at module loading and unloading).

Then consider the following Makefile:

ifndef KERNEL_DIR
$(error KERNEL_DIR must be set in the command line)
endif
PWD := $(shell pwd)
CROSS_COMPILE = arm-linux-gnueabihf-

# This specifies the kernel module to be compiled
obj-m += dummy.o

# The default action
all: modules

# The main tasks
modules clean:
    make -C $(KERNEL_DIR) ARCH=arm CROSS_COMPILE=$(CROSS_COMPILE) \
        SUBDIRS=$(PWD) $@


OK, now to cross-compile the dummy module on the host PC we can use the following command:

$ make KERNEL_DIR=~/A5D3/armv7_devel/KERNEL/
make -C /home/giometti/A5D3/armv7_devel/KERNEL/ 
       SUBDIRS=/home/giometti/github/chapter_03/module modules
make[1]: Entering directory '/home/giometti/A5D3/armv7_devel/KERNEL'
  CC [M]  /home/giometti/github/chapter_03/module/dummy.o
  Building modules, stage 2.
  MODPOST 1 modules
  CC      /home/giometti/github/chapter_03/module/dummy.mod.o
  LD [M]  /home/giometti/github/chapter_03/module/dummy.ko
make[1]: Leaving directory '/home/giometti/A5D3/armv7_devel/KERNEL'

It's important to note that when a device driver is released as a separate package with a Makefile compatible with the Linux one, we can compile it natively too! However, even in this case, we need to install a kernel source tree on the target machine, and the sources must also be configured in the same manner as the running kernel, or the resulting driver will not work at all! In fact, a kernel module will only load and run with the kernel it was compiled against.
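
As a sketch of the native procedure, assuming that the module sources are in the current directory on the target and that the distribution ships the headers for the running kernel (on Debian-like systems usually in a linux-headers package), the canonical out-of-tree invocation is the following, where M= points the kernel build system at our module's directory (recent kernels use M= in place of the older SUBDIRS= variable):

root@bbb:~/module# make -C /lib/modules/$(uname -r)/build M=$(PWD) modules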


The cross-compilation result is now stored in the file dummy.ko; in fact we have:

$ file dummy.ko
dummy.ko: ELF 32-bit LSB relocatable, ARM, EABI5 version 1 (SYSV), BuildID[sha1]=ecfcbb04aae1a5dbc66318479ab9a33fcc2b5dc4, not stripped

The kernel module has been compiled for the SAMA5D3 Xplained but, of course, it can be cross-compiled for the other developer kits in a similar manner.


So let’s copy our new module to the SAMA5D3 Xplained by using the scp command through the USB Ethernet connection:

$ scp dummy.ko [email protected]:
[email protected]'s password:
dummy.ko                                      100% 3228     3.2KB/s   00:00  


Now, if we log in to the SAMA5D3 Xplained, we can use the modinfo command to get some information about the kernel module:

root@a5d3:~# modinfo dummy.ko
filename:       /root/dummy.ko
version:        1.0.0
license:        GPL
author:         Rodolfo Giometti <[email protected]>
srcversion:     1B0D8DE7CF5182FAF437083
depends:        
vermagic:       4.4.6-sama5-armv7-r5 mod_unload modversions ARMv7 thumb2 p2v8


Then, to load it into and unload it from the kernel, we can use the insmod and rmmod commands as follows:

root@a5d3:~# insmod dummy.ko
[ 3151.090000] dummy_module loaded!
root@a5d3:~# rmmod dummy.ko
[ 3153.780000] dummy_module unloaded!


As expected, the dummy module's messages are displayed on the serial console.

Note that if we are using an SSH connection we have to use the dmesg or tail -f /var/log/kern.log commands to see the kernel's messages.
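
For example, over an SSH connection the same messages can be retrieved with something like the following (the timestamps are the ones shown above):

root@a5d3:~# dmesg | tail -2
[ 3151.090000] dummy_module loaded!
[ 3153.780000] dummy_module unloaded!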

Note also that the commands modinfo, insmod and rmmod are explained in detail in a section below.

The Kernel and DTS files


The main goal of this article is to give several suggestions for rapid programming methods to be used on an embedded GNU/Linux system. However, the main goal of every embedded developer is to write programs that manage peripherals, monitor or control devices, and perform other similar tasks to interact with the real world, so we mainly need to know the techniques useful to get access to a peripheral's data and settings.

That's why we first need to know how to configure the kernel and how to recompile it.
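
Just as a preview of the commands involved (a sketch: the configuration target and the image type depend on the specific board), configuring and cross-compiling the kernel for our ARM boards boils down to something like:

$ make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- menuconfig
$ make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- -j8 zImage modules dtbs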

Summary


In this article we took a long tour through three of the most important topics of GNU/Linux embedded programming: the C compiler (and the cross-compiler), the kernel (with the device drivers and the device tree) and the root filesystem. We also presented the NFS, in order to have a remote root filesystem over the network, and introduced the usage of an emulator in order to execute foreign code on the host PC.
