Category Archives: Linux

Debugging Linux Kernel on QEMU Using GDB

Kernel is magic. Well, not really. All my experience so far involves programming in userland. Could I step up and enter the land of the kernel? Sure, but before that I need to arm myself with knowledge, especially about kernel debugging.

This article discusses kernel debugging. When writing this, I use:

  • Linux Kernel 4.5
  • GDB 7.7.1
  • Qemu 2.5.0
  • Busybox 1.24.2
  • ParrotSec Linux for host

Although in the end we can run a minimalistic kernel, this setup is still far from a “real world” one.


Download the kernel source code and extract it to /usr/src:

mv linux-4.5.tar.xz /usr/src
cd /usr/src
tar -xf linux-4.5.tar.xz
mv linux-4.5 linux

Download busybox and extract it to /usr/src. We will use it to create the initramfs.

mv busybox-1.24.2.tar.bz2 /usr/src
cd /usr/src
tar -xf busybox-1.24.2.tar.bz2
mv busybox-1.24.2 busybox

ParrotSec is a Debian derivative.

I use the latest QEMU; you can read about it here.

Compile Linux Kernel

It’s a bit different from the usual routine: we need to enable debug info.

cd /usr/src/linux
mkdir build
make menuconfig O=build

Select “Kernel hacking” menu.

Go to “Compile-time checks and compiler options”.

  • Enable the “Compile the kernel with debug info”
  • Enable the “Compile the kernel with frame pointers”
  • Enable the “Provide GDB scripts for kernel debugging”.

Search for “KGDB: kernel debugger” and make sure it is checked.

Go to the build directory and build from there

cd build
make bzImage -j $(grep -c ^processor /proc/cpuinfo)

Creating Initramfs

We need some kind of environment with basic command line tools: ls, cat, mkdir, etc. It is called initramfs (initial RAM file system). The idea is to provide a minimal root file system as a temporary file system, so the kernel can prepare everything before mounting the real file system. We will use Busybox to build it.

cd /usr/src/busybox
mkdir build
make menuconfig O=build

Select “Busybox Settings” > “General Configuration” and uncheck “Enable options for full-blown desktop systems”. Go back, select “Build Options”, and check “Build BusyBox as a static binary (no shared libs)”.

cd build
make && make install

This will create a new directory “_install” inside build and install busybox there. This _install directory will be the base of our initramfs.

Next we create a new “initramfs” directory and create some configuration files inside.

mkdir /usr/src/initramfs
cd /usr/src/initramfs
cp -R /usr/src/busybox/build/_install rootfs
cd rootfs
rm linuxrc
mkdir -p dev etc newroot proc src sys tmp

We don’t need “linuxrc” since we are going to use the initramfs boot scheme.

Create a file etc/wall.txt and fill it:

#                                    #
#      Kernel Debug Environment      #
#                                    #

Remember init? Once our kernel is up, we need init to spawn everything necessary. However, in our minimalistic system, init only needs to spawn a tty. Now create an init file and populate it with the following content.

#!/bin/sh

dmesg -n 1

mount -t devtmpfs none /dev
mount -t proc none /proc
mount -t sysfs none /sys

cat /etc/wall.txt

while true; do
   setsid cttyhack /bin/sh
done

cttyhack is a small utility for spawning a tty device. The loop ensures that when we execute the “exit” command, a new shell is started automatically.

We need to make the init file executable.

chmod +x init

Next we need to pack initramfs.

cd /usr/src/initramfs/rootfs
find . | cpio -H newc -o | gzip > ../rootfs.igz

Running Kernel on Qemu

Next thing to do is to launch the kernel inside Qemu.

qemu-system-x86_64 -no-kvm -s -S \
-kernel /usr/src/linux/build/arch/x86/boot/bzImage \
-initrd /usr/src/initramfs/rootfs.igz \
-hda /mnt/jessie.ext3 -append "root=/dev/sda"

At this point, we will see a blank QEMU terminal window.

The -s option is a shorthand for -gdb tcp::1234, i.e. open a gdbserver on TCP port 1234.

The -S option makes the CPU stop at startup. Now QEMU is waiting for the kernel to be started in debug mode.

Running GDB

QEMU has launched the kernel and is waiting for a debugger. The next thing is to launch GDB and start debugging.


On the host, we need to load the same kernel that is loaded in QEMU and point our target to QEMU.

file /usr/src/linux/build/vmlinux
set architecture i386:x86-64:intel
set remote interrupt-sequence Ctrl-C
target remote :1234

Let’s try it, using GDB:


As of now, GDB does not handle a change of register size well. In our case, there is a moment when the kernel switches from 16-bit to 32-bit (or 64-bit) mode. You might notice that we run QEMU with -S so it stops at startup; at that point Linux has not yet switched to the full 64-bit (or 32-bit) kernel. If you don’t do something about it, GDB will keep complaining about “remote packet reply is too long”.

To circumvent it, we can just disconnect and then reconnect.

set architecture i386:x86-64:intel
target remote :1234


The post Debugging Linux Kernel on QEMU Using GDB appeared first on Xathrya.ID.

Capturing Coordinate of Touch Screen in Linux

Linux is dominating embedded systems, mainly because of its broad processor support: ARM, MIPS, PowerPC, etc. For some gadgets a touch screen is an extra feature; others must have it. Whatever the reason, Linux supports it. The fundamental part of programming a system with a touch screen is getting the coordinate of the point touched by the user.

This article discusses how to capture the coordinate of a point on a touch screen. When writing this article I use Castles Technology’s EDC. I won’t disclose internals of the system used, but our discussion applies to Linux in general.

Some Knowledge

As usual, before we start we need to know some basic knowledge.

Linux is a Unix-like operating system. Everything in Linux is a file, including devices. They are all stored inside /dev. Your first SCSI disk should be recognized as /dev/sda. Your DVD-ROM might be recognized as /dev/sr0 (or /dev/dvd, a symlink to it).

You might also learn that devices are categorized mainly as character devices and block devices. A character device transfers data a character at a time, while a block device transfers a block of data (typically some bytes) at a time.

Now direct our focus toward /dev/input. This is where the device files for our input devices are located. By input devices we mean mouse, keyboard, or perhaps a touch screen. Good, now spot the eventX files, where X is a number. The number of files depends on how many input devices you have.

So how can we pinpoint the device?

On a desktop I can look at /dev/input/by-id or /dev/input/by-path and see which device they point at. However, we don’t always have this luxury.

ls -la /dev/input/by-id/
ls -la /dev/input/by-path/

Another quick way to figure it out is by inspecting /proc/bus/input/devices. Yet this might not be available on every device, and we need to parse some unneeded information.

cat /proc/bus/input/devices

The next option is dumping the raw data from the file. Again, this does not always work.

cat /dev/input/event0 | hexdump -c

Last option is writing a small program, open the device, and read it. This works for me and we will discuss it deeper later.


I will leave the idea of “how you can connect to the device” to you. I assume you have some way to write a program, and a way to direct I/O to the device.

I also assume you can produce code for the device. Whether you have compiler inside, or just do cross compilation doesn’t matter.

Last, I assume you know our great programming language, C.


This is the sample working code I use to enumerate the device, open it, and capture the coordinate.

#include <stdio.h>
#include <stdlib.h>

/* For open and read data */
#include <unistd.h>
#include <fcntl.h>
#include <linux/input.h>
#include <sys/ioctl.h>

/* For directory listing */
#include <sys/stat.h>
#include <dirent.h>

/* Miscs */
#include <limits.h>
#include <string.h>

/* ofd is a file descriptor which pipes to my host (a debug channel). */
extern int ofd;

// We don't care about it for now, let it be global.
DIR *dir = NULL;

/* Find the first valid device which we can open */
int enum_and_open_dev()
{
    struct dirent *dirent;
    char path[PATH_MAX];
    int fd;

    // Is it the first time? Open it if yes.
    if (!dir)
        dir = opendir("/dev/input");

    if (dir) {
        while ((dirent = readdir(dir)) != NULL) {
            if (!memcmp(dirent->d_name, "event", 5)) {
                // Build the full path; d_name alone is relative.
                snprintf(path, sizeof(path), "/dev/input/%s", dirent->d_name);
                fd = open(path, O_RDONLY);
                if (fd == -1)
                    continue;   // Not a valid file

                return fd;      // File is opened
            }
        }
        closedir(dir);
        dir = NULL;
    }
    return 0;
}

int get_dev_name(int fd)
{
    char buf[256] = "Unknown";

    /* Print device name */
    ioctl(fd, EVIOCGNAME(sizeof(buf)), buf);
    write(ofd, buf, strlen(buf));
    return 0;
}

int touch_screen_getxy(int fd)
{
    struct input_event ev;
    size_t ev_size = sizeof(struct input_event);
    ssize_t size;
    char buf[64];

    while (1) {
        size = read(fd, &ev, ev_size);
        if (size < (ssize_t) ev_size) {
            write(ofd, "Error size when reading", 23);
            return -1;
        }
        if (ev.type == EV_ABS &&
            (ev.code == ABS_X || ev.code == ABS_Y)) {
            sprintf(buf, "%s = %d\n", ev.code == ABS_X ? "X" : "Y", ev.value);
            write(ofd, buf, strlen(buf));
        }
    }
    return 0;
}

int test_()
{
    int fd;

    fd = enum_and_open_dev();
    if (fd == 0) {
        write(ofd, "No readable device found", 24);
        return -1;
    }

    get_dev_name(fd);
    return touch_screen_getxy(fd);
}


Just a note: ofd is a file descriptor which pipes to my host; in other words, a debug channel for me.

enum_and_open_dev() is a function to enumerate the available files in /dev/input/ and try to open them. In my sample code, I only use the first valid file descriptor. After I get it, I want to know its name and then the main dish: the coordinates.


Just a single point is not enough. What about two or three simultaneous touches (multitouch)? Well, save it for later.


Capturing USB Data with Wireshark

Everyone loves USB devices. Many devices use USB as a communication port, and the standard is popular and steadily improving. So, did you ever feel curious about what, how, and why your devices work? Whether you are a hardware hacker, hobbyist, or anyone interested in peripherals and low level, USB is very challenging. With Wireshark, we have the power to sniff or capture the data stream sent by our USB devices to our host. The host is a PC with Windows or Linux installed.

In this article we will discuss how we can capture data with Wireshark. While writing this article I use the following materials:

  • Wireshark 2.0.1 (SVN)
  • Linux kernel 4.1.6

You can use any Wireshark above 1.2.0 to get it working. I haven’t added a Windows section yet because I haven’t confirmed it yet.

Some Knowledge

Before we start, I think it is good to know some basics of USB. USB has a specification. There are three common ways a device uses USB:

  • USB UART
  • USB HID
  • USB Memory

UART, or Universal Asynchronous Receiver/Transmitter. These devices use USB simply as a way to receive and transmit data, nothing more, like any other communication line.

HID is Human Interface Device, a class of USB devices for interfacing with humans. Devices in this class are keyboards, mice, game controllers, and alphanumeric display devices.

Last is USB Memory, or we can say storage. External HDDs and thumb drives / flash drives are part of this class.

As you might expect, the most common devices are either USB HID or USB Memory.

Now every USB device, especially HID or Memory, has a pair of magic numbers called the Vendor ID and the Product ID. The Vendor ID identifies which vendor makes the device. The Product ID identifies the product; it is not a serial number. See the following picture.


That is a list of USB device connected to my box. To get this list we can invoke lsusb.

Let’s choose an entry. I have a Logitech wireless mouse, an HID device. The mouse comes with a receiver, which is detected and runs as expected. Can you spot the device? Yes, the 4th entry. Here we have the following:

Bus 003 Device 010: ID 046d:c52f Logitech, Inc. Unifying Receiver

The part ID 046d:c52f is the Vendor-Product ID pair. The vendor ID is 046d and the product ID is c52f.

See Bus 003 Device 010. This informs us of the bus to which our device is connected. Note this.


We can run Wireshark as root to sniff the USB stream, but as always it is not recommended. Instead, we give our user enough privilege to dump the stream from Linux’s usbmon. We can use udev for this purpose. What we will do: create a group usbmon, make our account a member of usbmon, and create a udev rule.

addgroup usbmon
gpasswd -a $USER usbmon
echo 'SUBSYSTEM=="usbmon", GROUP="usbmon", MODE="640"' > /etc/udev/rules.d/99-usbmon.rules

Next we need the usbmon kernel module. If it is not loaded yet, invoke the following command as root:

modprobe usbmon


Open Wireshark and see the interface list. You should see usbmonX, where X is a number.

If there is activity or a stream on an interface, Wireshark shows it as a wave graph. So which one should we choose? Remember the bus number I asked you to note? The X corresponds to the USB bus. In my case the target is usbmon3. Click on the usbmon interface, then click the blue shark-fin icon, and watch the packet flow.



What can we do after capturing? Well, it depends. In general we can understand how device and host communicate, and maybe with this knowledge we can reverse engineer it. Well, that’s for another article.


Five Ways to Communicate with Embedded Device in Linux

Embedded systems and devices running an operating system such as Linux or BSD are not so uncommon today. Most of them fall into categories like routers, servers, and NAS devices, and most have a communication interface (a serial port with RS-232, or even fancy USB). We can communicate with these kinds of devices by redirecting our I/O to this port. That’s the hardware part; how can we feed information to it? What tools should we use?

This article will mention five ways to communicate with embedded device. Our operating system of choice is Linux.

Of course we need to be specific: our embedded system is one designed to send/receive data or commands over a communication line.


The physical port might be the one and only, but how is it referred to by our system? We should check. The port is commonly referred to as ttyS*, ttyACM*, or ttyUSB*. It is analogous to COM* on Windows. Let’s check with the dmesg command. Connect our box to the device and invoke the following:

dmesg | egrep --color 'serial|ttyS|ttyACM|ttyUSB'

This one will be our example:

[    1.245258] serial8250: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
[    1.265727] serial8250: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A
[    1.286713] 00:07: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
[    1.307321] 00:08: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A

Let’s get an initial report regarding the ports. We need the configuration information associated with each serial port (in this context):

setserial -g /dev/ttyS[0123]

Sample outputs:

/dev/ttyS0, UART: 16550A, Port: 0x03f8, IRQ: 4
/dev/ttyS1, UART: 16550A, Port: 0x02f8, IRQ: 3
/dev/ttyS2, UART: unknown, Port: 0x03e8, IRQ: 4
/dev/ttyS3, UART: unknown, Port: 0x02e8, IRQ: 3

Once confirmed, we can use any of the following methods to communicate.

CU Command

You might also want to read my old article: using CU to communicate with modem.

CU, abbreviated from “call up”, is an old Unix command used to call up another system and act as a dial-in terminal. On some Unix systems it is preinstalled. If not, you can always install it; it is packaged for most modern distros.

Next, we use following command to communicate:

cu -l /dev/device -s baud-rate-speed

In our case, /dev/device would be /dev/ttyS0. The baud rate can be any predefined value you want, such as 19200 or 115200. Make sure the baud rate on both ends is the same value.


In this example, I’m using /dev/ttyS0 with 115200 baud-rate:

cu -l /dev/ttyS0 -s 115200

To exit enter tilde dot (~.).

Screen Command

In most cases, screen is used as a trick to keep a process running on a server we are remotely connected to. We can also use screen to communicate with a device.

screen /dev/device baud-rate

Minicom Command

Minicom is another approach. It is a tool designed for the job. Before we use it, we need to set it up.

minicom -s

You will see a menu there. The most crucial part is “Serial port setup”. Make sure the port and baud rate are set correctly.

Then invoke minicom to do the job. We don’t need to pass options to minicom every time we invoke it if we have saved the configuration before.


PuTTY Command

PuTTY, yes, it is also available on Linux. It is a free and open source GUI X-based terminal emulator client for the SSH, Telnet, rlogin, and raw TCP protocols, and also a serial console client. To use it, invoke PuTTY and wait for its GUI.


In the Session panel, select Serial. Again, specify the port as “Serial line” and the baud rate as “Speed”.

Tip command

Last but not least is the tip command. To use tip, we invoke the following:

tip -baud device

For example:

tip -115200 ttyS0


Running Debian MIPS Linux in QEMU

Have you ever wanted to try a system other than your PC? MIPS, for example.

As a reverse engineer, I sometimes want to run a MIPS Linux system so that I can observe, develop, and test things. However, I don’t have much room for another device, so virtualization might be a solution.

In this article we will try to run MIPS Linux on QEMU; specifically, Debian MIPS Linux, with the following materials used:

  1. Slackware64 14.1
  2. QEMU 2.1.50

The article written here should be as generic as possible so I hope it can be used for different setup you use.

Obtain the Materials

Refer to this article to build a QEMU, if you don’t have one: Installing QEMU from Source

Next, we need to download the kernel image and a disk image which has Debian installed. Go to this site to download. The ‘mips’ directory is for big-endian MIPS and ‘mipsel’ is for little-endian. Choose what you want; in this case I will download both. Specifically, we will test kernel version 3.2.0 (denoted as vmlinux-3.2.0-4-5kc-malta) with Debian Wheezy (denoted as debian_wheezy_mipsel_standard.qcow2).

At this point, we have (at least):

  1. QEMU installed
  2. Debian kernel
  3. Disk Image with qcow2 format.

Setup Bridged Networking

In order to make QEMU environment connected to the network, we need to do some additional setup.

Now create two new files, /etc/qemu-ifup and /etc/qemu-ifdown, and make sure you give them executable permission. Also make sure you have the right configuration, like the GATEWAY and BROADCAST addresses. Pay attention to USER as well: it is the username you will run qemu as.

#!/bin/sh
# /etc/qemu-ifup -- adjust these values to your own network
ETH0IPADDR=192.168.1.10
NETMASK=255.255.255.0
BROADCAST=192.168.1.255
GATEWAY=192.168.1.1
USER=youruser

# First take eth0 down, then bring it up with IP address
/sbin/ifconfig eth0 down
/sbin/ifconfig eth0 promisc up
# Bring up the tap device (name specified as first argument, by QEMU)
/usr/sbin/openvpn --mktun --dev $1 --user $USER
/sbin/ifconfig $1 promisc up
# Create the bridge between eth0 and the tap device
/usr/sbin/brctl addbr br0
/usr/sbin/brctl addif br0 eth0
/usr/sbin/brctl addif br0 $1
# Only a single bridge so loops are not possible, turn off spanning tree protocol
/usr/sbin/brctl stp br0 off
# Bring up the bridge with ETH0IPADDR and add the default route
/sbin/ifconfig br0 $ETH0IPADDR netmask $NETMASK broadcast $BROADCAST
/sbin/route add default gw $GATEWAY


#!/bin/sh
# /etc/qemu-ifdown

# Bring down eth0 and br0
/sbin/ifconfig eth0 down
/sbin/ifconfig br0 down
# Delete the bridge
/usr/sbin/brctl delbr br0
# Bring up eth0 in "normal" mode
/sbin/ifconfig eth0 -promisc
/sbin/ifconfig eth0 up
# Delete the tap device
/usr/sbin/openvpn --rmtun --dev $1

To start the network bridge, just invoke

/etc/qemu-ifup tap0

and then invoke

/etc/qemu-ifdown tap0

to stop it.

Running the Debian MIPS

After all preparation we have done, it’s time for actual thing.

Go to the directory where we store kernel and disk image, for example $HOME/debian-mipsel, and then invoke following command:

qemu-system-mips64el -net nic -net tap,ifname=tap0,script=no,downscript=no \
-M malta -kernel vmlinux-3.2.0-4-5kc-malta -hda debian_wheezy_mipsel_standard.qcow2 \
-append "root=/dev/sda1 console=tty0"

We can also create a script to simplify it:

#!/bin/sh

qemu=qemu-system-mips64el
kernel=vmlinux-3.2.0-4-5kc-malta
hda=debian_wheezy_mipsel_standard.qcow2
iface=tap0

quit() {
    echo "$2"
    exit "$1"
}

echo "Stopping eth0, starting tap0"

/etc/qemu-ifup $iface || quit 1 "Failed to start tap0"

echo "Starting Debian MIPS"

$qemu -net nic -net tap,ifname=$iface,script=no,downscript=no \
-nographic -M malta -kernel $kernel -hda $hda -append "root=/dev/sda1 console=tty0"

If everything goes well, you should see Debian is booting and then greeted with sweet login prompt.

Further Configuration

It’s nice to work in the QEMU window, but you should admit that the QEMU console is very limiting, so you need an SSH connection to do most of your work. You can have it by installing OpenSSH inside the Debian system using apt:

apt-get update
apt-get install openssh-server


Archiving and Compression – Variety in Options for Unix Machine

In any part of this world, people would agree that information should be stored in minimum size. Ideally, any information gained can be stored in a size as small as possible yet contain as much data as possible. In the past, size really mattered: a storage device such as a floppy disk stored only a few megabytes, far smaller than the smallest capacity we can find today. Even though we have abundant resources today, the same problems still arise: how to store information in as little space as possible, and how to collect scattered data into a single named resource. One proposed solution is using archive files.

A practical problem we meet in daily life is data transmission. We are often involved in transferring data, whether sending or receiving. Storage is cheap, but bandwidth is not in the same condition: bandwidth is limited, and transfer is slower than read/write operations on storage. A smaller size will, in theory, transfer faster than a bigger one over the same link. Also, if we have multiple files to send, it is wise to join them into a single file and send that, rather than sending them one by one. This is where archive files take their role.

The Archive Files – About Archiving and Compressing

An archive file is a file composed of one or more files along with metadata. Archive files are used to collect multiple data files together and pack them as a single file, to achieve easier portability and storage. An archive file might also be compressed to reduce its size. Basically, archive files rely on two parts: archiving and compressing.

Archiving is the activity of combining a number of files together into one archive file, or a series of archive files. This enables easier transportation as well as storage.

Compression is the reduction in size of data. The process usually employs an algorithm which transforms the data into a more compact form.

There are many archive file formats. Different algorithms create different results, and thus different sizes. Also note that an archive format might involve only archiving, or only compression. Aside from that, some formats offer both archiving and compression at once.

In this article, I will present some known format for archiving data in UNIX and UNIX-like machine.

Archiving Only

File Extension | MIME type | Official Name | Description
.a, .ar | - | Unix Archiver | Traditional archive format on Unix-like systems. Today used mainly for the creation of static libraries.
.cpio | application/x-cpio | cpio | Obsolete archive format. Used by RPM files for archiving.
.shar | application/x-shar | Shell archive | A self-extracting archive that uses the Bourne shell (sh).
.iso | - | ISO-9660 image | An archive format originally used mainly for archiving and distributing the exact, nearly exact, or custom-modified contents of an optical storage medium (CD-ROM or DVD-ROM). Can also be burned to a CD using appropriate tools.
.mar | - | Mozilla archive | Archive format used by Mozilla for storing binary diffs. Used in conjunction with bzip2 (like the popular .tar archive).
.tar | - | Tape archive | A common archive format. Generally used in conjunction with compressors such as gzip, bzip2, xz, etc.

Compressing Only

File Extension | MIME type | Official Name | Description
.bz2 | application/x-bzip2 | bzip2 | Open source compression format using the Burrows-Wheeler transform followed by a move-to-front transform and finally Huffman coding.
.gz | application/x-gzip | gzip | GNU Zip, using the DEFLATE algorithm.
.lz | application/x-lzip | lzip | An alternate LZMA algorithm implementation, with support for checksums and ident bytes.
.lzma | application/x-lzma | lzma | Using LZMA compression.
.lzo | application/x-lzop | lzop | An implementation of the LZO data compression algorithm.
.rz | - | rzip | A compression format designed to do particularly well on very large files containing long-distance redundancy.
.xz | application/x-xz | xz | Compression format using LZMA2 to yield very high compression ratios.
.z | application/x-compress | pack | Old Huffman coding compression format.
.Z | application/x-compress | compress | Traditional LZW compression format.

Archiving and Compressing

File Extension | MIME type | Official Name | Description
.7z | application/x-7z-compressed | 7z | Open source file format used by 7-Zip.
.afa | application/x-astrotite-afa | AFA | Compresses and doubly encrypts the data (AES256 and CAS256).
.arc | - | ARC | -
.arj | application/x-arj | ARJ | -
.ba | - | Scifer | Binary archive with external header.
.cfs | application/x-cfs-compressed | Compact File Set | Open source file format.
.dar | - | Disk Archiver | Open source file format. Files are compressed individually with either gzip, bzip2, or lzo.
.kgb | - | KGB Archiver | Open source archiver compressing with the PAQ family of algorithms, with optional encryption.
.rar | application/x-rar-compressed | RAR | -
.sitx | application/x-stuffitx | StuffIt X | Compression format common on Apple Macintosh computers.
.sqx | - | SQX | Royalty-free compression format.
.tar.gz, .tgz, .tar.Z, .tar.bz2, .tbz2, .tar.lzma, .tlz | application/x-gtar | Tar with gzip, compress, bzip2, or lzma | The tarball format combines tar archives with a file-based compression scheme, commonly used on Unix.
.xar | - | XAR | -
.yz1 | - | YZ1 | Yamazaki Zipper Archive. Compression format used in the DeepFreezer archiver utility.
.zip, .zipx | application/zip | ZIP | -
.zz | - | Zzip | Uses a compression algorithm based on the Burrows-Wheeler transform method.


Administrating and Monitoring Linux Process

Anyone would agree that the kernel is the core, the brain, of an operating system. Yet a computer with only a kernel is useless, because the operating system only manages the computer, while the tasks we actually need, such as writing documents, watching videos, or listening to music, are user programs. Our main concern is programs, not the kernel; the operating system just provides an environment where user programs can run.

In today’s era, people forget the difference between programs and the operating system, simply because a modern operating system installs a good number of programs along with the operating system itself.

When we run a program, it is loaded into memory and starts working. We can run two or more instances of the same program, but in memory they are treated as different entities. For example, you open notepad twice: one for editing a.txt and the other for editing b.txt. Both are notepad, but they work for different purposes. A program running in memory is referred to as a process.

A program can start one or multiple processes to complete the task it was designed to do. Processes that get started will execute and die on their own without any user intervention. The one responsible for this is the kernel itself.

In this article we will discuss different aspects of a process in Linux. As Linux mimics the Unix operating system, you may find it similar to other Unix-based operating systems.

Process Identification (PID)

Each and every process in a Linux operating system is identified by a number referred to as the process ID.

The process ID, or PID, is nothing more than a unique identification number for a process in Linux. Unique means that two processes running at the same time cannot have the same PID. PIDs are 16-bit numbers that are sequentially assigned to processes as they are spawned.

After execution (when the process exits), the PID is released and can be reused by another process. To handle processes, the kernel maintains a table of processes, simply called the process table. The process table stores information about each process, and entries are of course identified by PID. PID numbers start from 1.

Like I said before, when a process exits its PID is released and can be reused by another process. We also cannot guarantee a process gets the same PID each time it is executed (e.g. today A got PID 1653; tomorrow it could be 1435, 1206, or anything). But we can guarantee that there is one process that gets the same PID each and every time: init.

Init is the first program run by the Linux system, therefore init always gets PID 1. It is also the parent of all other processes in Linux. Here are some points to make the concept clear:

  1. Init is the parent and the first process, which always gets the PID number 1
  2. There is a limit to the maximum number of processes that can be run on a Linux operating system.

The maximum PID number can be found in the file /proc/sys/kernel/pid_max. Commonly the maximum is 32768, which means at most 32768 processes can exist simultaneously on the system. Remember that Linux assigns PIDs sequentially: PID 435 won’t be given unless PID 434 was already given. However, this is not always true: when the PID counter reaches the maximum, Linux wraps around and searches for an available PID number from the beginning.

A high PID number such as 30000 does not mean 30000 processes are running on the system; it means 30000 processes have been started so far (which implies some have already died). Also, PID 1000 does not always mean that process was started before PID 30000.

Listing Running-Process

There are different tools available in Linux; a commonly used one is ps. Here is an example of its usage:

root@BlueWyvern:/# ps aux
root         1  0.1  0.1   2064   624 ?        Ss   08:16   0:00 init [3]
root         2  0.0  0.0      0     0 ?        S<   08:16   0:00 [migration/0]
root         3  0.0  0.0      0     0 ?        SN   08:16   0:00 [ksoftirqd/0]
root         4  0.0  0.0      0     0 ?        S<   08:16   0:00 [watchdog/0]
root         5  0.0  0.0      0     0 ?        S<   08:16   0:00 [events/0]
root         6  0.0  0.0      0     0 ?        S<   08:16   0:00 [khelper]
root         7  0.0  0.0      0     0 ?        S<   08:16   0:00 [kthread]
root        10  0.0  0.0      0     0 ?        S<   08:16   0:00 [kblockd/0]
root        11  0.0  0.0      0     0 ?        S<   08:16   0:00 [kacpid]
root       175  0.0  0.0      0     0 ?        S<   08:16   0:00 [cqueue/0]
root       178  0.0  0.0      0     0 ?        S<   08:16   0:00 [khubd]
root       180  0.0  0.0      0     0 ?        S<   08:16   0:00 [kseriod]

There we got list of running process. Let’s inspect each column given.

  • USER: the name of the user who started the process; in other words, the owner of the process.
  • PID: the process ID. Note that init has PID 1.
  • %CPU: the CPU usage of the process, in percent.
  • %MEM: the amount of physical memory utilized by the process, in percent.
  • VSZ: the amount (size) of virtual memory used.
  • RSS: Resident Set Size, the portion of RAM (physical memory) the process is using.
  • TTY: the terminal from which the process was started. A '?' indicates the process was not started from a terminal: it may be a daemon, a process started by the system, or a process related to a cron job.
  • STAT: the process state, discussed in the next section.
  • START: the time when the process started.
  • COMMAND: the command or name of the running program.
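The same columns can be requested explicitly with ps -o, which is handy when you only care about a few of them. This assumes the procps ps shipped by most distributions:

```shell
# Select exactly the columns discussed above, sorted by CPU usage:
ps -eo user,pid,%cpu,%mem,vsz,rss,tty,stat,start,time,comm --sort=-%cpu | head -n 5
```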

The Process State (in ps)

A process has a state which describes its current condition. A process can be in different states depending on what it is doing: it can be idle, waiting for user input, waiting for an I/O request to complete, a zombie, or even an orphan. Let's go through the different states of a process in Linux.

Let’s first understand interruptible and uninterruptible processes in Linux.

Processes which are not currently doing any work and are waiting for an event, for example a process waiting for input from the user, are normally in the interruptible sleep state. This state is denoted by "S". You will find a large number of processes on the system in the S state, because most processes have been started and are simply waiting for something before they execute again.

Processes that are waiting for an operation to complete and cannot be interrupted are in the uninterruptible sleep state. For example, a process waiting for an I/O request to complete goes into this state, denoted by D in the ps output. You cannot send a message to a process in the uninterruptible state, because it will not accept any signal until the operation completes. You will normally not see many processes in the D state.

Another state is zombie, represented by Z. This kind of process will be discussed later.

The process state column also tells the priority of the process relative to other processes. Two symbols indicate priority.

"<" indicates that the process has a higher priority than other processes. In other words, this kind of process has VIP access and is treated specially. "N", on the other hand, indicates that the process is being "nice" to others: it has a lower priority than the other processes running on the system. If neither "<" nor "N" appears, the process is a "regular" process and is scheduled normally.
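You can get a quick census of the states on your own machine straight from /proc, where field 3 of each /proc/&lt;pid&gt;/stat is the state letter. A sketch; process names containing spaces can throw the field count off, but the state is almost always field 3:

```shell
# Count processes per state letter (R, S, D, Z, T, ...):
for f in /proc/[0-9]*/stat; do
    awk '{print $3}' "$f" 2>/dev/null
done | sort | uniq -c | sort -rn
```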

The Zombie Process

We have discussed that init is the first program started at boot time. init is the parent process of all other processes in Linux; every process that runs on Linux has a parent process (directly or indirectly).

A zombie process shares the characteristics of a mythical zombie (a dead body without a soul): it is a dead process with no work left to do and no system resources in use. When a child process completes its execution, it informs its parent. The parent is then supposed to collect the complete exit status of the child and finally remove it from the process table so the PID becomes available for reuse. But some irresponsible parent processes take a long time to collect the exit status of their children, even after being informed. Until the parent does this job of collecting the status and removing the entries, the children remain zombies.

So the "irresponsible" parent is responsible for the existence of zombie processes, usually due to programming inefficiencies and bugs. Generally a zombie cannot cause any harm, because it consumes no system resources; it just sits in the process table. But if a large number of zombie processes accumulate, we may run out of PID numbers.

A parent can be prodded to reap its child processes by sending it the SIGCHLD signal, which asks the parent to reap those of its children that are zombies. Suppose 1234 is the PID of the parent of a zombie process; we can send the signal to that parent with:

kill -s SIGCHLD 1234

Note that the kill command does not necessarily kill the target (in this case 1234); it sends a signal to it. A quick pipeline to list all zombie processes along with their parents would be:

ps aux | awk '{ print $8 " " $2 }' \
| grep -w Z | awk '{ print $2 }' \
| while read pid; do
    echo "$pid has parent $(ps -p $pid -o ppid=)"
done
The zombie's PID is on the left, while its parent's PID is on the right of "has parent".

Orphan Process

An orphan process is literally a process which has become an orphan: an unfortunate process whose parent died, leaving the child behind. A parent can die by accident (e.g. a crash). However, another process immediately adopts the orphaned children: in Linux, init adopts almost all orphan processes immediately, in which case we will see PID 1 as their parent process.

In Linux and Unix, when a parent gets killed its children normally get destroyed as well, unless a process is intentionally made an orphan so that init adopts it as its child. Intentionally orphaning a process is useful when we want a program started from a shell to run for a very long time without any user intervention. Normally, when we run a program from a command-line shell, the program becomes a child of the shell process, so when we log out of the shell all its children get killed. In some cases, though, we want the process to keep running until it completes by itself.

This can be achieved by purposely making the process an orphan, so that init immediately adopts it. This is done with the help of a command called nohup in Linux.

nohup sh &

The above command starts the process with nohup, so that it does not get killed even after the shell exits, and the final & runs the process in the background.
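A more typical invocation redirects output and backgrounds a long-running job. Here backup.sh is just a hypothetical placeholder for your own script:

```shell
# Start a (hypothetical) long-running job that survives the shell exiting.
# Without redirection, nohup sends output to a file named nohup.out.
nohup ./backup.sh > backup.log 2>&1 &
echo "started as PID $!"
```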

Foreground and Background Process

Previously we ran our example with & at the end of the command, which sends the process to the background. Now let's understand what background and foreground processes are.

To understand background and foreground, consider an example. If we are working on a Linux terminal, each command we execute must complete before we get the shell prompt back. For example, if we run a command like yum update from the terminal, we must wait for it to complete before we can run another command on the same terminal.

What if we want to run multiple commands at the same time, without waiting for each of them to complete? To do this, we send a command (the process) to the background. A background process runs concurrently with the foreground process. Whenever you run a command, it runs in the foreground by default, but there are methods to send a foreground process to the background.

yum update

When we run the above command, it runs in the foreground by default, and we have to wait until it completes to get the shell prompt back and run another command. But we can pause or suspend the process by pressing CTRL+Z. If we do that, a message like this is printed to the shell:

[1] + Stopped                 yum update

As soon as we press CTRL+Z, the process is suspended/stopped, which is clear from the message "[1]+ Stopped yum update" above. Pressing CTRL+Z is not a good way to get the shell prompt back, because it suspends the running process. We want the system to keep running this process while also releasing the shell prompt so we can run other commands. Moreover, we can hardly say we are running multiple programs if we have to suspend one, right?

In the prompt above, [1] is the job number, which is different from the PID. As we only have one job here, the suspended process is given job number 1. To resume a suspended process, we can execute fg followed by its job number.

root@BlueWyvern:~/# fg %1
yum update

But if we resume the process with fg %1, it comes to the foreground again, and we still need to wait until it completes before running other commands. A better solution is to send the process to the background, so that it continues to run while we get the shell prompt back:

root@BlueWyvern:~/# bg %1
[1]+ yum update &

The above command resumes the yum update process and puts it in the background, so that you can run other commands. If you have multiple processes running in the background, you can view their status with the jobs command, as shown below.

root@BlueWyvern:~/# jobs
[1]+  Running                 yum update &

You can switch a job between background and foreground at any time with the fg and bg commands, simply by giving the job number as the argument (%1, %2, etc.).

Alternatively, you can append the & symbol to a command to start it directly in the background. For example, let's send the same yum update command to the background at launch time:

root@BlueWyvern:~/# yum update &
[1]  1234

As you can see, using the & symbol does the same thing: the job is assigned a job number (and its PID is printed) and sent to the background.
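The whole round trip can be tried safely with sleep standing in for yum update:

```shell
# Start a harmless background job, list it, then wait for it to finish:
sleep 2 &
jobs            # shows something like: [1]+  Running   sleep 2 &
wait $!         # block until the background job finishes
echo "job done"
```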

Monitoring a Process

We have used one process-monitoring command, ps. ps lists processes and gives details, but it is static: it only shows the processes running at the moment the command executes. If we want to watch processes continuously, we cannot keep executing ps every second, and a static snapshot cannot show the dynamics of a process, for example a process that suddenly starts consuming a lot of CPU. For that purpose we can use another tool, top.

When we execute the top command, we are presented with a screen of information, for example:

top - 05:23:08 up  7:09,  4 users,  load average: 0.00, 0.11, 0.10
Tasks:  94 total,   1 running,  93 sleeping,   0 stopped,   0 zombie
Cpu(s):  1.7%us,  0.7%sy,  0.0%ni, 97.0%id,  0.0%wa,  0.3%hi,  0.3%si,  0.0%st
Mem:    515444k total,   441964k used,    73480k free,    73364k buffers
Swap:        0k total,        0k used,        0k free,   276772k cached

31571 root      15   0  2196 1008  804 R  0.7  0.2   0:00.02 top
4535 root      15   0  1968  644  564 S  0.3  0.1   0:23.77 hald-addon-stor
1 root      15   0  2064  624  536 S  0.0  0.1   0:00.94 init
2 root      RT  -5     0    0    0 S  0.0  0.0   0:00.00 migration/0
3 root      34  19     0    0    0 S  0.0  0.0   0:00.00 ksoftirqd/0

top keeps updating its stats (by default, every 3 seconds). We can change this interval with:

top -d 1

which makes top update every second.

top is not only a nice tool to monitor process details on a Linux system; it also provides a complete overview of the system in one interface. The top area of the output shows the complete details of the running system.

Some details we can get from the output above:

  • Four users are currently logged in
  • The system load average
  • The uptime
  • The total number of processes, and how many are sleeping, running, stopped, or zombie
  • CPU and memory statistics

You can sort the output of top by memory usage, CPU usage, swap usage, etc., using the sorting options available in the command. Press SHIFT+O to display the list of sorting fields; pick one and the processes will be sorted by that field.
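Besides the interactive screen, top has a batch mode (-b) that prints snapshots to stdout, which makes it usable from scripts and cron jobs:

```shell
# One non-interactive snapshot, trimmed to the summary area plus a few tasks:
top -b -n 1 | head -n 12
```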

Another interesting command for monitoring processes is pstree, which displays the parent-child relationships of all processes in the system as a tree. For example, here is a snapshot of my pstree output:

|                `-2*[{NetworkManager}]
|                 |-akonadi_archive
|                 |-akonadi_maildis
|                 |-akonadi_mailfil
|                 |-akonadi_nepomuk---{akonadi_nepomu}
|                 |-akonadiserver-+-mysqld---35*[{mysqld}]
|                 |               `-21*[{akonadiserver}]
|                 `-{akonadi_contro}
|               `-2*[{udisks-daemon}]

Now you can see that init is at the top, which means it is the parent of all processes.

Most command-line utilities that show process details actually fetch their information from /proc, a pseudo-filesystem the kernel maintains to expose process information. If we go to /proc, we can see a separate directory for each running process. For example, this is my result:

dr-xr-xr-x 240 root       root                     0 May 13  2013 ./
drwxr-xr-x  28 root       root                  4096 May  9 18:56 ../
dr-xr-xr-x   8 root       root                     0 May 13  2013 1/
dr-xr-xr-x   8 root       root                     0 May 13  2013 10/
dr-xr-xr-x   8 root       root                     0 May 13  2013 1024/
dr-xr-xr-x   8 root       root                     0 May 13  2013 1070/
dr-xr-xr-x   8 root       root                     0 May 13  2013 1071/
dr-xr-xr-x   8 root       root                     0 May 13  2013 11/
dr-xr-xr-x   8 root       root                     0 May 13  2013 1141/
dr-xr-xr-x   8 root       root                     0 May 13  2013 1142/
dr-xr-xr-x   8 root       root                     0 May 13  2013 1143/
dr-xr-xr-x   8 root       root                     0 May 13  2013 1146/
dr-xr-xr-x   8 root       root                     0 May 13  2013 1150/
dr-xr-xr-x   8 root       root                     0 May 13  2013 12/

If we enter one of these directories, say 1141, and list its contents, we get something like this:

attr             cpuset   limits      net            root       status
auxv             cwd      loginuid    ns             sched      syscall
cgroup           environ  maps        oom_adj        sessionid  task
clear_refs       exe      mem         oom_score      smaps      wchan
cmdline          fd       mountinfo   oom_score_adj  stack
comm             fdinfo   mounts      pagemap        stat
coredump_filter  io       mountstats  personality    statm

We have limited space here, so I won't cover every entry, but let's understand some of them.

    1. cmdline contains the command-line arguments that were used to run this process
    2. cwd is a link to the current working directory of the process
    3. exe is a link to the executable of the process
    4. environ contains the environment variables of the process


    5. status contains complete information about the process. Most system monitoring tools fetch process information from this file.
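These entries are easy to poke at using the shell's own entry, /proc/self, which always refers to the process reading it:

```shell
# cmdline stores the arguments NUL-separated; translate for display:
tr '\0' ' ' < /proc/self/cmdline; echo

# exe and cwd are symlinks to the binary and the working directory:
readlink /proc/self/exe
readlink /proc/self/cwd

# status is the human-readable summary most tools parse:
grep -E '^(Name|State|PPid|VmRSS)' /proc/self/status
```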

Sending Signal to Process

We have seen SIGCHLD before; we send signals using kill. Most people who use Linux day to day are aware of this command. As I said, despite its name, kill can do more than killing: it sends a signal. If no signal is specified, the default termination signal (SIGTERM) is used.

There are various signals that can be sent to a process. How a process responds to a particular signal depends on how that process was programmed. Let's discuss some commonly used kill signals.

We have discussed that programs run from a terminal exit when you exit the terminal. When you exit the terminal, the programs run from it receive a SIGHUP signal. SIGHUP tells the process to "hang up", and by default it terminates the process.

What happens when you type CTRL+C or CTRL+Z to a running process? CTRL+C sends a SIGINT signal to the running process (most of the time, this terminates the running application). We have previously seen that a running program can be suspended with CTRL+Z: the terminal sends a SIGTSTP signal to the program when you press CTRL+Z.

The kill command takes different signals as an argument. A signal can be specified by name, such as SIGHUP, SIGINT, SIGKILL, or by its equivalent number, such as 1, 2, 9. The following two commands are equivalent ways of sending SIGINT to PID 1234:

kill -s SIGINT 1234
kill -2 1234
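Signal names and their numbers can be listed with kill -l, and the effect of a signal on a process can be observed directly. The sketch below sends SIGTERM to a sleeping process; by convention the shell reports its exit status as 128 plus the signal number:

```shell
kill -l | head -n 2     # map of signal names to numbers

sleep 60 &              # a victim process
pid=$!
kill -s TERM "$pid"     # same as: kill -15 "$pid"
wait "$pid"
echo "exit status: $?"  # 143 = 128 + 15 (SIGTERM)
```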

The post Administrating and Monitoring Linux Process appeared first on Xathrya.ID.

Split and Join Files in Linux

Sometimes we are in a situation where a file is too large to be stored on a single flash drive, or too big for some file size limit. In these cases we need to split the file into smaller files.

Fortunately, Linux has built-in utilities for splitting and joining, and they should ship by default with your Linux distribution: split and cat are both part of GNU Coreutils.

In this article, we will discuss how to use these Linux utilities to split and join files, a technique that is also useful in backup workflows.

Split and Join

In this scenario, we have an ISO file, slackware64-14.0-install-dvd.iso, roughly 2.2 GB in size. We will split it into several files, each chunk at most 450 MB in size.

To split it, invoke this command:

split -d -b 450m slackware64-14.0-install-dvd.iso slackware64-14.0-install-dvd.iso.part

At this point, we have six files. The generated files have extension .partXX, where XX is the part number. Five of them (slackware64-14.0-install-dvd.iso.part00 to slackware64-14.0-install-dvd.iso.part04) are 450 MiB each, and the last (slackware64-14.0-install-dvd.iso.part05) is 48.9 MiB.

Now, how do we recover the original from the split files? We join them together to reform the original file. Here the joined file will be named slackware64-14.0-install-dvd-join.iso. To do so, invoke the following command:

cat slackware64-14.0-install-dvd.iso.part00 \
slackware64-14.0-install-dvd.iso.part01 slackware64-14.0-install-dvd.iso.part02 \
slackware64-14.0-install-dvd.iso.part03 slackware64-14.0-install-dvd.iso.part04 \
slackware64-14.0-install-dvd.iso.part05 > slackware64-14.0-install-dvd-join.iso

Another way to do so is using following command:

cat slackware64-14.0-install-dvd.iso.part{00..05} > slackware64-14.0-install-dvd-join.iso

where {00..05} expands to the parts we want to join.
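It's good practice to verify the joined file before deleting the parts. Here's a self-contained miniature of the whole workflow using a small throwaway file:

```shell
# Miniature split/join round trip with verification:
head -c 3000 /dev/urandom > original.bin          # 3000-byte test file
split -d -b 1k original.bin original.bin.part     # -> .part00 .part01 .part02
cat original.bin.part* > joined.bin               # glob expands in part order
cmp original.bin joined.bin && echo "files match"
rm -f original.bin joined.bin original.bin.part*  # clean up
```

For a file this size cmp is instant; for multi-gigabyte ISOs, comparing md5sum output of the original and the joined file serves the same purpose.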


Linux Kernel Source & Versioning

Kernel Versioning

Anyone can build the Linux kernel. The Linux kernel sources are provided freely, from the earliest versions up to the latest. Kernels are released regularly and use a versioning scheme to distinguish earlier kernels from later ones. To know the Linux kernel version, the simple command uname can be used. For example, I invoke this and get:

# uname -r
3.7.8-gnx-z30a

In that output you can see the dotted decimal string 3.7.8: this is the Linux kernel version. The first value, 3, denotes the major release number; the second, 7, denotes the minor release; and the third, 8, is called the revision number. The major release combined with the minor release is called the kernel series. Thus, I am using a 3.7 series kernel.

The string after 3.7.8 is -gnx-z30a. I am using a self-compiled kernel and added -gnx-z30a as a signature to my kernel version. Distributions such as Ubuntu, Fedora, and Red Hat also append their own signatures after the kernel version.
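The pieces of the version string can be pulled apart with standard tools; uname -r gives the full release string, and cut extracts the series:

```shell
uname -r                  # full release string, e.g. 3.7.8-gnx-z30a
uname -r | cut -d. -f1-2  # the kernel series, e.g. 3.7
```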

An example of building kernel can be read at this article.

Kernel Source Exploration

To build the Linux kernel, you will need the latest or any other stable kernel sources. As an example, we take the sources of stable kernel release 3.8.2. Different versions of the Linux kernel sources are available; get the latest or any stable release.

Assuming you have downloaded the stable kernel release source to your machine, extract the source and put it in the /usr/src directory.

Most of the kernel source is written in C. It is organized into various directories and subdirectories, each named after what it contains. The directory structure of the kernel may look like the diagram below.

Linux Kernel Source

Now let's dive deeper into each directory.


arch

The Linux kernel can be installed on anything from a handheld device to huge servers. It supports the Intel, Alpha, MIPS, ARM, and SPARC processor architectures. The arch directory contains a subdirectory for each supported architecture, holding that architecture's dependent code. For example, for a PC the code is under arch/i386, for ARM processors under arch/arm (or arch/arm64 for 64-bit ARM), and so on.


init

LILO, the Linux loader, loads the kernel into memory, and control is then passed to an assembler routine, arch/x86/kernel/head_x.S. This routine is responsible for hardware initialization and hence is architecture specific. Once hardware initialization is done, control passes to the start_kernel() routine defined in init/main.c. This routine is analogous to the main() function in a C program: it is the starting point of the kernel code. After the architecture-specific setup is done, kernel initialization starts, and that initialization code is kept under the init directory. The code under this directory initializes page addresses, the scheduler, traps, IRQs, signals, timers, the console, and so on; it is also responsible for processing the boot-time command line arguments.


crypto

This directory contains the source code of various encryption algorithms, e.g. MD5, SHA-1, Blowfish, Serpent, and many more. All these algorithms are implemented as kernel modules, so they can be loaded and unloaded at run time. We will talk about kernel modules in subsequent chapters.


Documentation

This directory contains the documentation of the kernel sources.


drivers

Device driver code is split into two parts. One part communicates with the user: it takes commands from the user, displays output to the user, and so on. The other part communicates with the device: controlling it, sending commands to it, and receiving data from it. The part of a device driver that communicates with the user is hardware independent and resides under this drivers directory, which contains the source code of the various device drivers, implemented as kernel modules. As a matter of fact, the majority of the Linux kernel code is device driver code, so the majority of our discussion too will revolve around device drivers.

This directory is further divided into subdirectories according to the devices whose driver code they contain:

  • block contains drivers for block devices, e.g. hard disks.
  • cdrom contains drivers for proprietary CD-ROM drives.
  • char contains drivers for character devices, e.g. terminals, serial ports, mice.
  • isdn contains ISDN drivers.
  • net contains drivers for network cards.
  • pci contains drivers for PCI bus access and control.
  • scsi contains drivers for SCSI devices.
  • ide contains drivers for IDE devices.
  • sound contains drivers for various sound cards.

The other part of a device driver, the one that communicates with the device, is hardware dependent, more specifically bus dependent: it depends on the type of bus the device uses for communication. This bus-specific code resides under the arch/ directory.


fs

Linux has support for a lot of file systems, e.g. ext2, ext3, FAT, VFAT, NTFS, NFS, JFFS, and more. The source code for each supported file system is kept in a file-system-specific subdirectory, e.g. fs/ext2, fs/ext3, etc.

Linux also provides a virtual file system (VFS) that acts as a wrapper around these different file systems. The VFS interface enables the user to use different file systems under one single root ('/'). The code for the VFS also resides here. Data structures related to the VFS are defined in include/linux/fs.h; take note, it is a very important header file for kernel development.


kernel

This is one of the most important directories in the kernel. It contains the generic code for the core kernel subsystems: system calls, timers, the scheduler, DMA, interrupt handling, and signal handling. The architecture-specific kernel code is kept under arch/*/kernel.


include

Along with the kernel/ directory, this include/ directory is very important for kernel development. It holds the generic kernel headers and contains many subdirectories, including the architecture-specific header files.


ipc

Code for all three System V IPC mechanisms (semaphores, shared memory, and message queues) resides here.


lib

The kernel's library code is kept under this directory; the architecture-specific library code resides under arch/*/lib.


mm

This too is a very important directory from a kernel development perspective. It contains the generic code for memory management and the virtual memory subsystem; the architecture-specific code is in arch/*/mm. This part of the kernel code is responsible for requesting and releasing memory, paging, page fault handling, memory mapping, the different caches, etc.


net

The code for the kernel's networking subsystem resides here. It includes code for various protocols such as TCP/IP, ARP, Ethernet, ATM, and Bluetooth. It includes the socket implementation too: quite an interesting directory for networking geeks to look into.


scripts

This directory holds the kernel build and configuration subsystem: the scripts and code used to configure and build the kernel.


security

This directory includes security functions and the SELinux code, implemented as kernel modules.


sound

This directory includes the code for the sound subsystem.


When the kernel is compiled, a lot of code is compiled as modules which are added to the kernel image later, at runtime. This directory holds all those modules. It is empty until the kernel has been built at least once.

Apart from these important directories, there are also a few files at the root of the kernel sources.

  • COPYING – Copyright and licensing (GNU GPL v2).
  • CREDITS – partial credits-file of people that have contributed to the Linux project.
  • MAINTAINERS – List of maintainers who maintain kernel subsystems and drivers. It also describes how to submit kernel changes.
  • Makefile – Kernel’s main or root makefile.
  • README – The release notes for the Linux kernel. It explains how to install and patch the kernel, and what to do if something goes wrong.


We can use the make documentation targets to generate the Linux kernel documentation. These targets construct the documents in any of several formats: PDF, HTML, man pages, PostScript, etc.

To generate the kernel documentation, run any of the following commands from the root of your kernel sources:

make pdfdocs
make htmldocs
make mandocs
make psdocs

Source Browsing

Browsing the source code of a large project like the Linux kernel can be very tedious and time consuming. Unix systems provide two tools, ctags and cscope, for browsing the codebase of large projects, and source browsing becomes very convenient with them. The Linux kernel has built-in support for cscope.

Using cscope, we can:

  • Find all references of a symbol
  • Find function’s definition
  • Find the caller graph of a function
  • Find a particular text string
  • Change the particular text string
  • Find a particular file
  • Find all the files that includes a particular file


Kernel Mode and Context

User Mode and Kernel Mode

In Linux, software falls into two categories: user programs and the kernel. The Linux kernel runs in a special privileged mode compared to user applications: it runs in a protected memory space and has access to the entire hardware. This memory space and privileged state are collectively known as kernel space, or kernel mode.

On the contrary, user applications run in user space and have limited access to resources and hardware. A user-space application can't directly access hardware or kernel-space memory, while the kernel has access to the entire memory space. To communicate with hardware, a user application needs to make a system call and ask for service from the kernel.

Different Contexts of Kernel Code

The entire kernel code can be divided into three categories.

  1. Process Context
  2. Interrupt Context
  3. Kernel Context

Process Context

User applications can't access kernel space directly, but there is an interface through which they can call functions defined in kernel space. This interface is known as the system call. A user application can request kernel services using a system call.

read() and write() are examples of system calls. A user application calls read()/write(), which in turn invokes sys_read()/sys_write() in kernel space. In this case, kernel code executes the request of a user-space application. Kernel code that executes on the request of, or on behalf of, a user application is called process context code. All system calls fall into this category.

Interrupt Context

Whenever a device wants to communicate with the kernel, it sends an interrupt signal to the kernel. The moment the kernel receives an interrupt request from the hardware, it starts executing a routine in response to that request. This response routine is called the interrupt service routine, or interrupt handler. Interrupt handler routines are said to execute in interrupt context.

Kernel Context

There is some code in the Linux kernel that is invoked neither by a user application nor by an interrupt. This code is integral to the kernel and keeps running always: memory management, process management, and the I/O schedulers all lie in this category. This code is said to execute in kernel context.
