Sunday, November 18, 2012

Colorful Man Pages

    Manual pages are among the most important documentation sources for developers. They are useful not only for Linux commands but also for C/C++ functions and structures, and even for kernel source code. In almost every Linux distribution the pager is linked to "less" by default, so whenever an application needs to show its output page by page on the terminal, "less" is used. However, "less" does not give colorful output.

    If you want to see man pages in color, which makes them more readable, you can use "most". Link your "pager" alternative to "most" instead of "less".

   Check your current pager:

        kays@debian:~$ update-alternatives --display pager
        pager - auto mode
          link currently points to /bin/less
        /bin/less - priority 77
          slave pager.1.gz: /usr/share/man/man1/less.1.gz
        /bin/more - priority 50
          slave pager.1.gz: /usr/share/man/man1/more.1.gz
        /usr/bin/pg - priority 10
          slave pager.1.gz: /usr/share/man/man1/pg.1.gz
        /usr/bin/w3m - priority 25
          slave pager.1.gz: /usr/share/man/man1/w3m.1.gz
        Current 'best' version is '/bin/less'.

   "less" is the current pager. Install the "most" package and make it your default pager as below:

        kays@debian:~$ sudo aptitude install most
        kays@debian:~$ sudo update-alternatives --install /usr/bin/pager pager /usr/bin/most 99

   Alternatively, you can use the "--config" option to choose the pager interactively:

        kays@debian:~$ sudo update-alternatives --config pager

   Now test your pager. Below opens the "socket" function's manual page:

        man socket
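If you prefer not to change the system-wide alternative, you can get the same effect per user by pointing man at "most" through environment variables (a minimal sketch, assuming the "most" package is installed):

```shell
# Per-user alternative to update-alternatives: tell man (and other
# programs that honor it) which pager to use via environment variables.
export MANPAGER=most
export PAGER=most
# any later "man socket" in this shell is now paged through most
```

You can put the exports into ~/.bashrc to make them permanent.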

Friday, October 26, 2012

Get Rid Of MACRO Complexity When Tracing API Sources

     In C/C++ projects there are sometimes very many definitions and macros in use. This is usually done to add a layer between the library sources and user code. Sometimes it is used to imitate a template approach in C sources with "#ifdef" and similar preprocessor directives, as below.

        /* unsigned integer multiplication in mul_ui.c */

        #define OPERATION_mul_ui
        #include "mul_i.h"

        /* signed integer multiplication in mul_si.c */

        #define OPERATION_mul_si
        #include "mul_i.h"

        /* mul_i.h */

        #ifdef OPERATION_mul_si
        #define FUNCTION               mpz_mul_si
        #define MULTIPLICAND_UNSIGNED          /* empty: the operand is signed */
        #define MULTIPLICAND_ABS(x)    ((unsigned long) ABS(x))
        #endif

        #ifdef OPERATION_mul_ui
        #define FUNCTION               mpz_mul_ui
        #define MULTIPLICAND_UNSIGNED  unsigned
        #define MULTIPLICAND_ABS(x)    x
        #endif

        #ifndef FUNCTION
        Error, error, unrecognised OPERATION
        #endif

        void FUNCTION (mpz_ptr prod, mpz_srcptr mult,
                       MULTIPLICAND_UNSIGNED long int small_mult)
        {
              /* ..... */
        }

    In such cases it is difficult to trace API sources, and we generally lose ourselves in the definitions. Nowadays I use the MPIR library, a common library for big-number arithmetic and calculations. I need to trace the API functions' source code and inline their implementations directly into my own sources in order to remove the function-call cost. However, there are plenty of definitions for types, functions, constants, etc. As you know, when the compiler runs, the preprocessor is invoked first, and only when the preprocessor has finished does the compiler start compiling the sources.

    In order to cope with that macro complexity, we can tell gcc to run only the preprocessor and then exit without compiling the sources. With the "-E" parameter, gcc does exactly what we need:

         gcc -E main.c > main_preprocessed.c

Friday, March 2, 2012

Redirecting Output of an Already Running Process II

     We can redirect the standard output of a running process with gdb. However, in that method the application is stopped by gdb and then continued. In fact, we can redirect standard output to any file without stopping the process, with some extra code. Below, we explain how to define signals to control the redirection.

     POSIX lets users define their own signals. You can build your signal numbers on top of "SIGUSR1" (see man 7 signal). We define two new signals; add the lines below to your source:

         /* user define signals */
         #define SIG_DIRECT_STDOUT_TO_FILE (SIGUSR1+51)
         #define SIG_REDIRECT_STDOUT (SIGUSR1+52)

     We should define the new file that the standard output will be redirected to. Of course, we could also get this file name as a parameter, read it from the environment, etc.

         #define CASE_LOG_FILE "/tmp/caselog.txt"

     We want to redirect the output when the first signal is received, and reassign the original output file when the second one is. For that purpose we should record the original output file's path before redirecting.

         #define STDOUT_FILE_SIZE 256
         static char stdoutFilePath[STDOUT_FILE_SIZE];

     We should define the signal handlers which are responsible for the redirection:

         /* Function for signal directing stdout to case log file */
         static void appSigDirect(int sig)
         {
             fprintf(stderr, "SIG_DIRECT_STDOUT_TO_FILE entered, redirecting stdout to CASE_LOG_FILE!");
             freopen(CASE_LOG_FILE, "w", stdout);
         }

         /* Function for signal redirecting stdout to original file */
         static void appSigRedirect(int sig)
         {
             fprintf(stderr, "SIG_REDIRECT_STDOUT entered, redirecting stdout to original file: %s", stdoutFilePath);
             freopen(stdoutFilePath, "a", stdout);
         }

     In the main function we should store the original output file path in stdoutFilePath[] and install the handlers for the appropriate signals.

         /* get the initial stdout file path (readlink does not NUL-terminate) */
         ssize_t n = readlink("/proc/self/fd/1", stdoutFilePath, STDOUT_FILE_SIZE - 1);
         if (n > 0)
             stdoutFilePath[n] = '\0';

         if (signal(SIG_DIRECT_STDOUT_TO_FILE, appSigDirect) == SIG_ERR)
             return -1;

         if (signal(SIG_REDIRECT_STDOUT, appSigRedirect) == SIG_ERR)
             return -1;

     Compile your code and run it as you wish. For example, if you run your application as below and its process id is "$APP_PID", you should see "/proc/$APP_PID/fd/1" linked to "/tmp/mylog.txt":

         myapp > /tmp/mylog.txt &

     Now you can tail the "/tmp/mylog.txt" file to see the output arriving there. When you want to redirect your logs to "/tmp/caselog.txt", send the SIG_DIRECT_STDOUT_TO_FILE signal to your application. Since SIGUSR1 is defined as 10 on Linux, our signal number is 61:

         kill -61 $APP_PID

     At that point your application will switch its output file to "/tmp/caselog.txt" (check "/proc/$APP_PID/fd/1"), and the log output will be written to caselog.txt instead of mylog.txt.

     When you want to reassign mylog.txt as the output file, use the SIG_REDIRECT_STDOUT signal, whose number is 62 on Linux:

         kill -62 $APP_PID

Monday, February 27, 2012

Redirecting Output of an Already Running Process I

Every process has stdin, stdout and stderr opened by default; these files are opened when the process is created. If the standard files have been redirected to another file (devices are also files in Linux), you can see it in the "/proc/$PID/fd/" directory, where $PID is the process id. Sometimes we may need to redirect the output to another file, and we can do this with gdb. On the other hand, if you need this for your own application, you can add redirection of the standard files as a feature with a small amount of code.
We can test the gdb method with the tail command. Open /var/log/messages with tail and send the output to the /dev/null device:

      root@work:~# tail -f /var/log/messages > /dev/null &
      [1] 32312

We have a background process with pid "32312", and its standard output has been redirected to the /dev/null device.

       root@work:~# ls -l /proc/32312/fd/1
       l-wx------ 1 root root 64 Feb 27 18:23 /proc/32312/fd/1 -> /dev/null

After some time, we decide to write the message logs to another file. We will use gdb to change the stdout file. First, we attach gdb to our process by pid:

       gdb -p 32312

On the gdb command line, we close the stdout file, whose fd is "1", and then create our new file, which gets the smallest available file descriptor, i.e. "1" again. Then we detach and quit gdb.

       (gdb) p close(1)
       $1 = 0
       (gdb) p creat("/tmp/mylog.txt", 0600)
       $2 = 1
       (gdb) detach
       Detaching from program: /usr/bin/tail, process 32312
       (gdb) q

Let's check the redirected output file of our process again and see our new file:

       root@work:~# ls -l /proc/32312/fd/1
       l-wx------ 1 root root 64 Feb 27 18:23 /proc/32312/fd/1 -> /tmp/mylog.txt

Thursday, September 29, 2011

Initializer Lists in C++0x

In C++0x, there is a new language feature called initializer_list. In this post, we will look at some hints about this helpful feature.
There was no uniform initialization method in the previous C++ standards. For example:
    int i = 0;
    int j(0);
    char a[6] = "C++0x"; // = {'C', '+', '+', '0', 'x'};
    Student s("Bjarne", "Stroustrup");

    vector<int> v(5); // 5 items, each value-initialized to 0

There are different styles of initializing, which confuses programmers. So a uniform initialization method would be great: thanks to C++0x, we have one.
First, we will look at some quick usages of this feature to warm up:
    vector<int> v = {8, 1, 7, 9}; // note the difference from the previous vector initialization
    list<string> cities = {"Ankara", "Istanbul", "Izmir"};
    int a[3][4] = {
        {0, 1, 2, 3},
        {4, 5, 6, 7},
        {8, 9, 10, 11}
    };

    void f(initializer_list<int> args) // takes an initializer list (an immutable sequence) as argument
    {
        for (auto p = args.begin(); p != args.end(); ++p) cout << *p << "\n";
    }
    f({1, 2}); // passing a list on the fly

    vector<double> v1 = {7}; // ok: v1 has 1 element (with its value 7)
    v1 = {9}; // ok: v1 now has 1 element (with its value 9). how could we do that without initializer_list?
    // we would first clear the vector and then push '9' into it
    vector<double> v2 = {9}; // ok: v2 has 1 element (with its value 9)

Of course, this new feature is widely used in the C++0x standard library. Here are two samples taken from GCC's implementation of C++0x:

    * std::string class:

        In the GCC implementation there is a new constructor that makes use of initializer_list:

     /**
      *  @brief  Construct string from an initializer list.
      *  @param  l  std::initializer_list of characters.
      *  @param  a  Allocator to use (default is default allocator).
      */
     basic_string(initializer_list<_CharT> __l, const _Alloc& __a = _Alloc());

    Let's create a string object with this new-style constructor:

    std::string s({'C', '+', '+', '0', 'x'});
    std::cout << s << "\n";


    * vector class:

    The assignment operator in the vector class shows us how to use initializer_list:

    vector& operator=(initializer_list<value_type> __l) {
        this->assign(__l.begin(), __l.end());
        return *this;
    }

    When we assign an initializer_list to a vector object, this operator=() function is called:

    v1 = {9};

Analyzing the std::initializer_list implementation is a good exercise for understanding the inner details.


Wednesday, August 17, 2011

Man Pages on Linux

Man pages are manual pages about programs, utilities and functions, usually named after them. They can be viewed with the "man" command. Man pages are very useful and easier to access than searching the web or other documentation materials. In most Linux distributions they are located in the /usr/share/man directory, which contains a subdirectory for each section, suffixed with the section number.

        ls /usr/share/man

When you install your GNU/Linux system, the man pages for its programs and utilities are installed by default. You can check this with your package manager; below is the command for Debian-based distributions:

        aptitude search manpages

Most probably you already have the manpages package; if you don't, you can install it and test it with the basic "ls" command as below:

        aptitude install manpages
        man ls

There are other man pages for the C library, the POSIX standard libraries and kernel source functions. You can install these with your package manager, except for the kernel man pages: those have to be generated from the kernel source code. Let's install the C and POSIX man pages:

        aptitude install manpages-dev manpages-posix manpages-posix-dev

You can test the installation with the fopen() libc function and the pthread_create() POSIX function as below:

        man fopen
        man pthread_create

Sometimes there is a command or tool with the same name as a library function. For example, there is the standard C library function exit() and also a built-in exit command. You can also check open, login, time, etc. To see all man pages about a keyword, use the -a option of man:

        man -a time

There are also man pages for the standard C++ library elements. To get those manual pages, install the libstdc++ documentation package that matches your C++ version:

       aptitude install libstdc++6-4.4-doc

Let's see the manuals of the map template and the string class in C++:

       man map
       man -a string

For the kernel man pages, you need the kernel source code. If you do not have it, check your kernel version with the "uname -r" command and download the matching source. Kernel sources are usually located under the /usr/src directory. Untar the kernel source package into /usr/src, then enter the top directory and use make to create and install the kernel man pages on your system:

        sudo tar xjvf linux-2.6.32.tar.bz2 -C /usr/src/
        cd /usr/src/linux-2.6.32
        make help
        sudo make mandocs
        sudo make installmandocs

At this point, the kernel man pages should have been generated from the kernel sources and installed on your system by make. Kernel man pages belong to section 9 and are mostly located in the /usr/local/share/man/man9/ directory. Now it is time to test; for example, you can check the kernel-side print function as below:

        man printk

By the way, you can update your man page index caches with the command below:

        sudo mandb -c

Now you are completely ready to develop on Linux :)

Wednesday, August 3, 2011

2 practical scripts for embedded systems

1. Live Wireshark capture for a CPE from your PC

- tcpdump installed on the CPE
- CPE accepting SSH connections
- Wireshark and Putty installed on the PC (Windows)

You can use the following command to listen to the packets on the CPE interface and see the output in Wireshark running on your PC:

C:\Program Files\PuTTY>plink.exe -pw root@ export LD_LIBRARY_PATH=/usr/local/ssl/lib:/usr/sfw/lib ; /tcpdump -s 1500 -l -w- 'port!22' | "c:\Program Files\Wireshark\wireshark.exe" -k -i-

2. Extracting folders from a CPE without following symbolic links

- tar installed on the CPE and the Linux PC
- CPE accepting SSH connections

We know that we can use scp -r to copy folders from another Linux device, but scp will follow symbolic links. Here is how to do it without following symbolic links (this example will fetch the folders under /etc and /usr):

ssh root@ "cd /; tar cf - etc usr" | tar xvf -


Tuesday, August 2, 2011

Android From Scratch I

    Android is a software stack for mobile devices that includes an operating system, middleware and key applications. Android has a large community of developers writing applications ("apps") that extend the functionality of the devices. There are currently more than 250,000 apps available for Android. Developers write primarily in the Java language, controlling the device via Google-developed Java libraries.

    Android is Linux-based and open source. We can download the sources and build images for supported boards. In this document, we will try to build a kernel and rootfs for the BeagleBoard. The BeagleBoard is manufactured by Texas Instruments and is well suited for development. You can check the BeagleBoard here[1]. I have a BeagleBoard revision C4, and all the work below was done on it.

    There are many projects using the BeagleBoard[2]. Some of them aim to run Android on it, such as 0xdroid, Beagledroid, Android on Beagle and Rowboat. The most popular Android-related projects are Angstrom and Rowboat. Angstrom is an embedded Linux distribution for a variety of embedded devices and, of course, it is ported to the BeagleBoard too. Angstrom is built with OpenEmbedded; you can check the manual on how to configure and build it here[3]. Another project, Rowboat, provides a stable Google Android base port for the AM1808, OMAP35x, AM35x, AM37x, AM389x and DM37x platforms. Our board's platform is OMAP35x, and we will use Rowboat to build images for the BeagleBoard.

    There is a lot of documentation about compiling the Android sources for the BeagleBoard, but reading all of it takes too much time, so we have tried to create a document with just the necessary instructions. In the document you will see links as references; if you want to go further, you can.

    We will use the Gingerbread 2.3 version, but you can find build instructions for other versions in the reference links. Let's start!

    Android uses git repositories managed with the repo tool. So, if you do not already have them, you should first install git and put the repo tool in your path as below:

        sudo apt-get install git-core
        chmod +x repo
        sudo mv repo /bin/

    We will use only the "init" and "sync" commands of the repo tool; you can see the other commands and options here[4].

    Now we need the sources. You can download them from the repository over the internet, or you can download everything needed as a tar package. We will show both.

    If you want to use the tarball package, just run these commands and your build environment will be ready:

        mkdir ti-android-rowboat
        cd ti-android-rowboat
        wget exports/TI_Android_GingerBread_2_3_Sources.tar.gz
        tar xzvf TI_Android_GingerBread_2_3_Sources.tar.gz
        repo sync --local-only

    Secondly, if you want to download from the repository, follow these steps.
    Create a new directory and init the repository as below. Your name and mail address will be asked while initializing the repository; you can enter anything.

        mkdir ti-android-rowboat
        cd ti-android-rowboat
        repo init -u git:// -m TI-Android-GingerBread-2.3-DevKit-1.0.xml
        repo sync

    With the commands above, we configured our manifest file as TI-Android-GingerBread-2.3-DevKit-1.0.xml. Our build operations will follow this file; you can think of it as the configuration and application-selection file. The sync operation fetches the sources, which may take hours depending on your bandwidth.

    At this point, we have the sources and the toolchain and we are ready to build, but we should add the toolchain binary path to our PATH environment variable so that it is reachable when needed. On my system I added it as below; you should use the local path where your checkout is.

        export PATH=/store/android/ti/gingerbread/TI_Android_GingerBread_2_3_Sources/prebuilt/linux-x86/toolchain/arm-eabi-4.4.3/bin:$PATH

    If you want to make it permanent, you can add the command to your ~/.bashrc or ~/.profile script, which runs at every shell login.
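For example (using the toolchain path from my system above; substitute your own checkout location), the export can be appended to ~/.bashrc like this:

```shell
# append the toolchain export to ~/.bashrc so every login shell gets it
echo 'export PATH=/store/android/ti/gingerbread/TI_Android_GingerBread_2_3_Sources/prebuilt/linux-x86/toolchain/arm-eabi-4.4.3/bin:$PATH' >> ~/.bashrc
```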

    Now it is time to build. Enter the top directory and run the commands below to get the kernel image and rootfs:

        cd ti-android-rowboat/TI_Android_GingerBread_2_3_Sources
        make TARGET_PRODUCT=beagleboard -j4 OMAPES=3.x

    Our target product is the BeagleBoard. It is recommended to use more than one thread on multi-core systems; specify -j number_of_threads for this. Set the OMAPES variable to install the proper version of the SGX driver; our board uses version 3.x.

    After a successful compilation, we should have the kernel image in the "kernel/arch/arm/boot/" directory; uImage will be used as the kernel. In addition, we should have the "out/target/product/beagleboard/root" and "out/target/product/beagleboard/system" directories, which are the sources of our rootfs image. We should create a rootfs directory, copy the root and system contents into it, and then create a tarball:

        cd out/target/product/beagleboard
        mkdir android_rootfs
        cp -r root/* android_rootfs
        cp -r system android_rootfs
        sudo ../../../../build/tools/ ../../../host/linux-x86/bin/fs_get_stats android_rootfs . rootfs rootfs.tar.bz2

    Now we should have the kernel image and the rootfs tarball. Next time we will explain how to write the kernel image and untar the rootfs onto the MMC card partitions; then we will use the MMC card to run Android on the BeagleBoard. Happy coding!



Protocol Encoding


Protocol: A protocol is a set of rules which have to be followed in the course of some activity. In this article, we use the term “protocol” for a set of formal rules that governs the exchange of information.
PDU: A PDU (Protocol Data Unit) is an abstraction unit that hides the protocol-specific fields.

Representing PDUs

In an abstract manner, we can think of PDUs as simple records composed of fields. This also gives us the opportunity to transform the communication problem into the programming domain: we can retrieve the value of a field, or modify the fields of the PDU in a proper way, in order to achieve the goal.

General objectives of representation[1] :
  1. Efficiency: The information in the PDU should be coded as compactly as possible.
  2. Delimiting: It must be possible for the receiver to recognise the beginning and end of the PDU.
  3. Ease of decoding: It should be easy for the receiver to find out exactly what information it has received.
  4. Data transparency: The representation should be such that arbitrary sequences of bits can be sent as data within the PDU.

Simple binary encoding:

It simply encodes the values without any indicators. This type of encoding is not flexible, because the size and order of the PDU's members are fixed. In return, the receiver can retrieve a desired value in constant time.

Let Li be the length of the ith item; then the total size of the PDU is simply L1 + L2 + ... + Ln.

SBE algorithms (get parameter, add parameter):

/* Get value of the Nth key.
   Value length is assumed to be 1. */
void get(buffer, N, &val) {
    val = buffer[N];
}

void add(buffer, N, val) {
    buffer[N] = val;
}

Type-Length-Value(TLV) encoding

TLV is more flexible than simple binary encoding: fields may be laid out in an arbitrary order, and we can find an encoded value using its type as a key. It also lets the implementation retrieve values using generic parsers.

Let Li again be the length of the ith item; with a one-byte type and a one-byte length per item, the total size becomes 2n + (L1 + L2 + ... + Ln).

So the same amount of data encoded using TLV is larger than when encoded using simple binary encoding.

Figure 1.1 may help in understanding the algorithms below.
#define TYPE_INDEX 0
#define LEN_INDEX 1
#define VAL_INDEX 2

int get(buffer, type) {
    val = 0;
    i = 0;

    while (buffer[i] != END) {
        if (buffer[i] == type) {
            val = parser[type](buffer + i + VAL_INDEX); // call the matching parser
            break;
        }
        len = buffer[i + LEN_INDEX];
        i = i + len + VAL_INDEX; // skip type, length and value of this item
    }
    return val;
}

/* add the value to the end of the buffer */
void add(buffer, type, val, val_len) {
    len = length(buffer);
    buffer[len + TYPE_INDEX] = type;
    buffer[len + LEN_INDEX] = val_len;
    copy(buffer + len + VAL_INDEX, val, val_len); // value may be longer than one byte
    buffer[len + VAL_INDEX + val_len] = END;
}

int length(buffer) {
    i = 0;
    while (buffer[i] != END)
        i = i + buffer[i + LEN_INDEX] + VAL_INDEX; // jump to the next type
    return i;
}

Both encodings may be used in the same protocol at the same time: fixed parts are encoded using simple binary encoding, while optional parts are encoded using TLV.

Case study: DHCP

Let's try to see how the theory is applied to real-world cases. DHCP is an automatic configuration protocol used on IP networks. Below you can find the DHCP packet structure:

As you can see, the DHCP PDU includes two types of parts:
  • fixed (i.e., op, htype, hlen, hops, xid, ...)
  • variable (options)
The fixed fields are encoded in simple binary encoding, so the lengths and order of the fields are fixed. Any variation in field length or field order causes a communication failure.

The DHCP options, the variable part, are encoded in TLV. So it is not a problem how the fields are ordered: the node receiving the message can parse the options correctly with the help of TLV's self-describing format. Some options are listed below:

Code  Description        Length
53    DHCP Message Type  1
58    Renewal Time       4
59    Rebinding Time     4

We can retrieve the value of option 53 using the get pseudocode:

val = get(options, 53)

[1] Robin Sharp, Principles of Protocol Design.