Linux Kernel Module Skeleton File

Alright, in this post, I will show you the skeleton file structure of a Linux kernel module. These entries are required to be present in the module source for it to be built and loaded into the kernel.

Heads up! Linux kernel experts: if any exact details are missing, you can add them in a comment, so I can update the post.

I am assuming (that’s a bad thing, but I can’t help much) that you are getting into this endeavor deliberately, which means you have to have a little idea of what you are trying to pursue.

Let me give you the skeleton file first:

#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/module.h>


/* Meta information */

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Bhaskar Chowdhury");
MODULE_DESCRIPTION("Random kernel module demonstration");


/**
 * This is all about module insertion to the running kernel
 */

static int __init custom_module_init(void) {
        printk(KERN_INFO "This is to show you how you can make a kernel module, the basic way\n");
        return 0;
}

/**
 * How to remove module from the running kernel
 */

static void __exit custom_module_exit(void) {
        printk(KERN_INFO "Removing dummy module\n");
}

module_init(custom_module_init);
module_exit(custom_module_exit);

You cannot compile it like every other C file; kernel modules are built against the kernel’s own build system (kbuild), as shown next. The custom_module_init() and custom_module_exit() functions have to be filled with the actual logic of the module’s purpose.

Now, you need a Makefile to get the damn module built, and here is a bare-bones one to help with:

# The object name must match the source file name, here demo-kernel-module-build.c
obj-m += demo-kernel-module-build.o

all:
        $(MAKE) -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

clean:
        $(MAKE) -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean

Pretty darn simple, right? It should give you a fair idea of how to proceed with things. You can think of it as a starting point.

You might take a peek at the video I made Linux Kernel Module Writing Video.

This is also the standard way to go if you ever want to write a module that lives out of the kernel tree1.

You can INSERT the kernel module with the insmod command

insmod name_of_your_module.ko

You can REMOVE the kernel module with the rmmod command (note that it takes the module name, without the .ko extension)

rmmod name_of_your_module

You can get information about the kernel module with the modinfo command

modinfo name_of_your_module

You can get all the modules of the running kernel, with a nice listing

#!/usr/bin/env bash
gawk '{print $1}' /proc/modules | xargs modinfo | gawk '/^(filename|desc|depends)/'

To enlist all the kernel modules by using lsmod command

lsmod

..and the output would look like this :

2023-09-30-045630_540x398_scrot.png

I have trimmed the output for brevity’s sake 🙂

How To Use Wpa_Supplicant Cli On The Command Line To Deal With Wireless Network

Well, managing network connections on a Linux device is kind of awkward, but not always, because the GUI tools were never apt enough to get things done efficiently or minimally. Wireless networks became a lingua franca on Linux devices many moons ago, and the software doing the job is called wpa_supplicant1.

It has a command-line version, aptly called wpa_cli, which can be used on the command line to manipulate wireless-network-related stuff. It would be surprising if it were not installed by default on your Linux distribution of choice. And if it is not, then it is pretty darn simple to get it installed with your distribution’s package manager.

Here in the below section, I will show you some rudimentary interactions with wpa_cli and you will see how easy it is to delve into.

Calling wpa_cli without any argument will put you in an interactive shell

2023-09-29-084347_775x217_scrot.png

Now you can Scan the network from this interactive shell

2023-09-29-084535_595x84_scrot.png

Now, you can see it spat out the result as OK

How to see the network scan result

2023-09-29-085158_481x333_scrot.png

I have intentionally trimmed the right-hand side of the screenshot to hide the network names in my near vicinity.

To add a network

2023-09-29-085521_294x26_scrot.png

To set the network with specific SSID2

2023-09-29-085614_362x32_scrot.png

How to set the credentials

2023-09-29-085709_296x25_scrot.png

How to enable the network

2023-09-29-085754_235x33_scrot.png

The status of the connected network is shown

2023-09-29-085903_624x121_scrot.png

How to save the config for future use

2023-09-29-090024_297x34_scrot.png

How to quit this interactive shell, once done

2023-09-29-090134_103x14_scrot.png
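Since the screenshots only capture my terminal, here is roughly what the same session looks like typed out. The network id 0 and the SSID/passphrase are placeholders of mine; the commands themselves are standard wpa_cli interactive commands:

```
$ wpa_cli
> scan
OK
> scan_results
bssid / frequency / signal level / flags / ssid
...
> add_network
0
> set_network 0 ssid "YourSSID"
OK
> set_network 0 psk "YourPassphrase"
OK
> enable_network 0
OK
> status
...
> save_config
OK
> quit
```

Note that add_network prints the id of the newly created network block, and that id is what the subsequent set_network and enable_network commands refer to.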

That’s it.

Linux Compression Tools And Algorithms

Well, if you have lived with computers, and specifically with open-source systems, for some time, you are bound to come across some renowned and well-implemented compression tools and algorithms in various places. Sometimes these can be used explicitly; on other occasions, they are ingrained into the software.

So, the point I am trying to make is that these tools and related algorithms are quite unavoidable, especially if you lean onto open systems (read: GNU/Linux and BSD parlance), and I lack exposure to other systems, having stuck with these for over two decades.

Anyway, these tools are generally available by default, put there by the people who ship the distributions. If not, it should not be a big deal to get them installed. But the catch is that some of the default software might not work without some of them.

One thing readily comes to mind, although it is not mandatory: people who are inclined to compile the Linux kernel need these tools to be available in the system, and a specific menuconfig option lets you choose the compression of the kernel image after the build. Likewise, many other pieces of software need them to perform operations, depending on the facility; for instance, FFmpeg’s libavutil library1 includes its own implementation of LZO as a possible method for lossless video compression.

Let me enlist the popular ones here:

Gzip2

LZO3

XZ Utils4

Bzip25

Now, all of them come with some binary to operate with, alongside the library implementing the algorithm.

Extracted the important things to note about Gzip:

  • a 10-byte header, containing a magic number6 (1f 8b), the compression method (08 for DEFLATE), 1-byte of header flags, a 4-byte timestamp, compression flags and the operating system ID.
  • optional extra headers as allowed by the header flags, including the original filename, a comment field, an “extra” field, and the lower half of a CRC-32 checksum for the header section.
  • a body, containing a DEFLATE-compressed payload
  • an 8-byte trailer, containing a CRC-32 checksum and the length of the original uncompressed data, modulo 2^32

…and

zlib7 is an abstraction of the DEFLATE algorithm in library form which includes support both for the gzip file format and a lightweight data stream format in its API. The zlib stream format, DEFLATE, and the gzip file format were standardized respectively as RFC 1950, RFC 1951, and RFC 1952.
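If you want to see those header and trailer fields on a live stream, here is a small sketch using coreutils’ od (gzip -n keeps the header reproducible by omitting the variable name/timestamp fields):

```shell
cd "$(mktemp -d)"
printf 'hello world' | gzip -n > demo.gz
# First three bytes: magic number 1f 8b, then 08 for DEFLATE:
od -An -tx1 -N3 demo.gz        # → 1f 8b 08
# Last four bytes of the trailer: uncompressed length mod 2^32
# ("hello world" is 11 bytes), stored little-endian:
tail -c4 demo.gz | od -An -td4 # → 11
```

The od -td4 interpretation assumes a little-endian machine, which matches how gzip stores the length field.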

Extracted the important things to note about LZO:

  • Higher compression speed compared to DEFLATE8 compression
  • Very fast decompression
  • Requires an additional buffer during compression (of size 8 kB or 64 kB, depending on compression level)
  • Requires no additional memory for decompression other than the source and destination buffers
  • Allows the user to adjust the balance between compression ratio and compression speed, without affecting the speed of decompression

The Linux kernel uses its LZO implementation in some of its features:

  • btrfs uses LZO as a possible compression method for file system compression.
  • initrd and initramfs use LZO as a possible compression method for initial RAM drive compression.
  • SquashFS uses LZO as a possible compression method for file system compression.
  • zram uses LZO with run-length encoding called LZO-RLE as the default compression method for RAM drive compression.
  • zswap uses LZO as the default compression method for virtual memory compression

XZ Utils can compress and decompress both the xz and lzma file formats

  • xz, the command-line compressor and decompressor (analogous to gzip)
  • liblzma, a software library with an API similar to zlib

Various command shortcuts exist, such as lzma (for xz --format=lzma), unxz (for xz --decompress; analogous to gunzip) and xzcat (for unxz --stdout; analogous to zcat).
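A quick sketch of those shortcuts in action, assuming the xz binary is installed:

```shell
cd "$(mktemp -d)"
printf 'hello' | xz > demo.xz
xzcat demo.xz                       # → hello
xz --decompress --stdout demo.xz    # the long spelling of xzcat
# The legacy .lzma container is selected with --format=lzma:
printf 'hello' | xz --format=lzma > demo.lzma
xzcat demo.lzma                     # the tools auto-detect both formats
```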

bzip2 is a free and open-source file compression program that uses the Burrows–Wheeler9 algorithm

As an overview, a .bz2 stream consists of a 4-byte header, followed by zero or more compressed blocks, immediately followed by an end-of-stream marker containing a 32-bit CRC for the plaintext whole stream processed. The compressed blocks are bit-aligned and no padding occurs.

bzip2 uses several layers of compression techniques stacked on top of each other, which occur in the following order during compression and the reverse order during decompression:

  • Run-length encoding (RLE) of initial data.
  • Burrows–Wheeler transform (BWT), or block sorting.
  • Move-to-front (MTF) transform.
  • Run-length encoding (RLE) of MTF result.
  • Huffman coding.
  • Selection between multiple Huffman tables.
  • Unary base-1 encoding of Huffman table selection.
  • Delta encoding (Δ) of Huffman-code bit lengths.
  • Sparse bit array showing which symbols are used.

bzip2 is suitable for use in big data applications with cluster computing frameworks like Hadoop and Apache Spark, as the compressed blocks can be independently decompressed.
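The 4-byte header described above is easy to spot on any stream; the fourth byte is the block-size digit (‘1’ through ‘9’, with -9, i.e. 900 kB blocks, being the default):

```shell
printf 'hello' | bzip2 | head -c4     # → BZh9
printf 'hello' | bzip2 -1 | head -c4  # → BZh1  (100 kB blocks)
```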

Slackware Linux After Update Cleanups Automated Way

Alright, as the title of the post suggests, this activity will be performed on a Linux distribution called Slackware1.

Sitting with it, updating is a pretty common activity if you tend to hop onto it every few weeks. And I am running kind of a rolling-release model with this distribution. The update is necessary to get the latest software running on it, plus there is the sadistic pleasure of watching the update happen in the terminal!! See, I am NOT trying to give the other reasons generally dished out by the “experts”, which always escape me.

Anyway, like everyone else in the wild, I too have my own way of updating the stuff in my system (what’s the big fuss about it???), and for that reason, I have written a mundane script (unlike the experts) to do the damn thing in an automated way, just for sheer convenience and nothing else.

Here is the trivial and mundane stuff, which does the trick for me…don’t fret..

#!/usr/bin/env bash
TM="/usr/bin/time -f"
printf "Updating and Upgrading the system,please wait...\n\n\n"

printf "Hostname: %s\nDate    : %s\nUptime  :%s\n\n"  "$(hostname -s)" "$(date)" "$(uptime)"

printf "\n\n\n Checking the system capacity ...\n\n"


maxpoint="90"
per=$(df / | awk 'END{print $5}' | tr -d %)
if  [ "$per" -le  "$maxpoint" ]; then

        printf "Ok...looks good...proceed\n\n\n"

elif [ "$per" -gt "$maxpoint" ]; then

        printf "Not enough space...aborting!"
        exit 1
fi


$TM "\t\n\n Elapsed time: %E \n\n" slackpkg update  && slackpkg upgrade-all

if [[ $? -eq 0 ]];then
        notify-send "Update and Upgrade process done!"
else
        notify-send "Update process failed..pls check manually"
fi

Darn simple and ordinary. Period.

Now, in general, this update process pulls the stuff from upstream (I am running -current, which means beyond stable), and it brings in files that would overwrite existing ones if you don’t take action at the end of the update. You can keep your old and new files side by side with the press of a key (generally the letter K), or you can overwrite with the other letter option, O, and so forth.

I generally keep the old and new files; once the update is finished, I hop into those files and compare the changes (I have a homegrown, automated way of doing this, and it is too trivial to claim anything about). The update basically litters the /etc directory with two kinds of files: ones with .new extensions and others with .orig extensions.

To avoid that clutter in an important directory, i.e. /etc, I have written a few lines of trivial bash code to put those files in a designated directory for later inspection. Before doing that, the script takes a backup of the previous such directory (which holds the last update’s content) as a tarball. Then it goes ahead, creates a fresh directory, and puts the files in it. The directories are duly time-stamped for the sake of clear understanding and later reference.

Now here comes the meat aka the trivial script to just what I have described in the above vignette.

#!/usr/bin/env bash
#===============================================================================
#
#          FILE: slackware_update_leftover.sh
#
#         USAGE: ./slackware_update_leftover.sh
#
#   DESCRIPTION: Clean things up after the update by putting files in  places
#
#       OPTIONS: ---
#  REQUIREMENTS: --- GNU coreutils
#          BUGS: ---
#         NOTES: --- Cleanliness of the /etc directory
#        AUTHOR: Bhaskar Chowdhury (https://about.me/unixbhaskar), unixbhaskar@gmail.com
#  ORGANIZATION: Independent
#       CREATED: 09/19/2023 05:37
#      REVISION:  ---
#===============================================================================

# License (GPL v2.0)

# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.

set -o nounset                              # Treat unset variables as an error


usage(){

   echo "You are supposed to be superuser to run this script."
   exit 1
}

# You are supposed to be root to run this script, otherwise it fails.

if test "$UID" -ne 0;then
        usage
fi

# Specific naming format for the newly created directories

backup_dir_with_new_extension="/etc/backup_new_config_$(date +'%F_%T')"
backup_dir_with_orig_extension="/etc/backup_orig_config_$(date +'%F_%T')"
search_dir=/etc
TAR="$(command -v tar)"

cd "$search_dir" || exit 1

# Locate the previous backup directories (if any); this must happen after
# the cd above, because the find paths are relative to /etc
old_backup_dir1=$(find . -maxdepth 1 -name "backup_new_config_*" -type d | sed 's|^\./||')
old_backup_dir2=$(find . -maxdepth 1 -name "backup_orig_config_*" -type d | sed 's|^\./||')

# Function to make a tarball of the existing directory filled with dot new
# extensions files and create a new directory to hold new files left the by
# updates.

config_backedup_with_new(){

         files=$(find "${search_dir}" -name "*.new" -type f -print)

        sh -c "\"${TAR}\" -czf previous_new_config.tar.gz \"${old_backup_dir1}\""
        mkdir -p "${backup_dir_with_new_extension}"

        for i in $files
do
        ls -l "$i"
        mv -v "$i" "${backup_dir_with_new_extension}"
done
}


# Function to make a tarball of the existing directory filled with dot orig
# file extensions and create new directory to hold new dot orig files left by
# the update

config_backedup_with_orig(){

        files=$(find "${search_dir}" -name "*.orig" -type f -print)

       sh -c "\"${TAR}\" -czf  previous_orig_config.tar.gz \"${old_backup_dir2}\""
        mkdir -p "${backup_dir_with_orig_extension}"

       for i in $files
do
        ls -l "$i"
        mv -v "$i" "${backup_dir_with_orig_extension}"
done
}

# Checking whether the functions did their job successfully or not.
if config_backedup_with_new;then

        echo "Moved new extension files successfully!"
else
        echo "Bloody hell...check manually"
fi

if config_backedup_with_orig;then

        echo "Moved orig extension files successfully!"
else
        echo "Bloody hell ....check manually"
fi

This little thing plays well for the requirement and importantly, I just didn’t want to overdo it.

Git: Uncanny Exploration With It

Well, it seems the whole world has moved to it; if not all, then at least 99 percent of software-related projects are on it, and the rest are probably catching up big time for fear of being left behind. This is such a powerful tool to invest your time in learning that it will give you benefits beyond your expectations. Yes, the learning curve, like that of every other serious piece of software, is steep. That is why the investment of time is so crucial; you just don’t want to waste your invaluable time not knowing things properly. I have written about it before, which you can explore by typing git in the search box in the upper corner of this website, and it will list all the stuff related to git.

In this post, I am going to show you a few of the command explorations I believe people ought to know at some point in their git usage. Also, I am wildly expecting whoever is reading this stuff to have at least a basic understanding of how it works, and that too on the Linux platform. There might be some subtleties on other platforms that I am not aware of, due to the confinement of my work environment.

What the heck is git cat-file?1 And how can it be used?

This is essentially a command to figure out details about the objects in a git repository. Let me show you some…

As the documentation says, it provides three things: the content, size, and type of any object in a git repository.

How to figure out the TYPE?

2023-09-16-133953_518x91_scrot.png

I am inside one of my random repositories and ran two distinct commands to get the type of the hash object.

git shorthash is an alias: alias.shorthash !git rev-parse --short @

How to see the CONTENT of the object?

2023-09-16-134348_577x339_scrot.png

Alright, the words type and dump are aliases to these below commands:

2023-09-16-134738_465x76_scrot.png

Okay, you have spotted another command along with those two, as the name of the alias suggests that is for the entire tree, which looks like this :

2023-09-16-135028_632x435_scrot.png
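Since the screenshots are hard to copy from, here is the same kind of exploration reproduced in a throwaway repository (the user name and commit message are placeholders of mine, not from the screenshots):

```shell
cd "$(mktemp -d)" && git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "demo commit"
git cat-file -t HEAD   # → commit  (the TYPE)
git cat-file -s HEAD   # the SIZE of the object in bytes
git cat-file -p HEAD   # the CONTENT: tree, author, committer, message
```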

How to Bundle2 a project?

This sub-command basically creates a single file of the entire repository, which also includes the versioning information.

2023-09-16-140903_508x120_scrot.png

It essentially creates the project outside of this repository. Let me show you the effect of it below by getting into that created bundle.

2023-09-16-141341_611x98_scrot.png

…now see the log in this cloned repository :

2023-09-16-141532_1011x118_scrot.png
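For reference, a minimal version of what the screenshots show, in a scratch directory (paths, names, and the commit message are placeholders of mine):

```shell
cd "$(mktemp -d)"
git init -q src && cd src
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "first commit"
git bundle create ../project.bundle HEAD --all  # one file, history and all
cd ..
git clone -q project.bundle clone-from-bundle   # a bundle acts as a remote
git -C clone-from-bundle log --oneline          # the history came along
```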

Archiving the project with git archive3

2023-09-16-142244_641x507_scrot.png

Keep aside the uncommitted changes by using stash4

2023-09-16-143143_680x171_scrot.png

Various common commands related to stashing are :

git stash list —> This will show you the list of stashed items

git stash apply —> This will let you apply saved stash OR

git stash apply {number} —> If there is more than one item in the stash list
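A minimal round trip of those commands in a throwaway repository (the file name and contents are placeholders of mine):

```shell
cd "$(mktemp -d)" && git init -q
git config user.name demo && git config user.email demo@example.com
git commit -q --allow-empty -m init
echo draft > notes.txt && git add notes.txt   # a staged, uncommitted change
git stash -q                                  # whisked away; tree is clean
git stash list                                # → stash@{0}: WIP on ...
git stash apply -q                            # ...and brought back
cat notes.txt                                 # → draft
```

Note that git stash apply leaves the entry in the stash list; use git stash pop if you want it applied and dropped in one go.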

Get the damn file from the remote repository

It is useful in a sense, that if you modify something on the local repository and don’t like it, then you can get the pristine copy of the same file from the remote repository.

git checkout origin/master -- $path_to_file

Investigate using git reflog5

Typing git reflog at the command prompt shows you this kind of information

2023-09-16-145520_749x93_scrot.png

..and there is an aliased version of reflog , I called it logref , and it has a time stamp attached to the reflog entries.

2023-09-16-145935_917x83_scrot.png

See the difference? Okay, here is the alias entry for it :

alias.logref !git reflog --date=iso

Measure how much disk space is used by the pack files by using git count-objects6

2023-09-16-151309_471x176_scrot.png

Look at the size-pack value.

Show commit objects in reverse chronological order with git rev-list7

I have a script that shows me the latest commits on the HEAD of the Linux kernel source8 tree and it is like this :

#!/usr/bin/env bash

if [ "$1" != "" ];then
        branch="$1"
else
        branch="HEAD"
fi

printf "\n%s%s\n\n" "$(git rev-list "$branch" "^$branch@{1}" | wc -l)" " commits were added by your last update to $branch:"

git --no-pager log "$branch" "^$branch@{1}" --oneline

..and it shows the output like this :

2023-09-16-152206_1366x768_scrot.png

I think people who love some visual representation of the git internals might look here at Git For Computer Scientists.

Enough!

Books Are A Great Source Of Inspiration And Aspirations

Well, I have inculcated a habit of reading and writing for the sake of covering up my shortcomings in a sane way. And it has been helping along for quite some time now. You can get the entire list at Goodreads.

In this post, I am going to list a few of the books I have purchased (when forced) and read. All of them have been exceptionally mind-bending, some more than others, but all in all, they are a driving force of mine.

A mere reading doesn’t make much sense if the reader fails to get the essence of it.

Here are they for your visual pleasure :

* Principles Of Compiler Design

* The Soul of A New Machine

* Influence: The Psychology of Persuasion

* Thinking, Fast and Slow

* Compilers: Principles, Techniques, and Tools

* A Quarter Century of Unix

* The Design and Implementation of the 4.4 BSD Operating System (Unix and Open Systems Series.)

* Linux Kernel Internals

* The Phoenix Project: A Novel about IT, DevOps, and Helping Your Business Win

* The Asshole Survival Guide: How to Deal with People Who Treat You Like Dirt

* The Manager’s Path: A Guide for Tech Leaders Navigating Growth and Change

* Good Boss, Bad Boss: How to Be the Best… and Learn from the Worst

* The Five Dysfunctions of a Team: A Leadership Fable

* Assembly Language Step-by-Step: Programming with Linux

* Outliers: The Story of Success

* Skin in the Game: Hidden Asymmetries in Daily Life

* The Righteous Mind: Why Good People Are Divided by Politics and Religion

* Broken Genius: The Rise and Fall of William Shockley, Creator of the Electronic Age (Macmillan Science)

* The Man Behind the Microchip: Robert Noyce and the Invention of Silicon Valley

* Understanding the Digital World: What You Need to Know about Computers, the Internet, Privacy, and Security

* Becoming a Technical Leader

* Secrets of Consulting: A Guide to Giving and Getting Advice Successfully

* How Linux Works: What Every Superuser Should Know

* The Winner Stands Alone

* Pro Git

* Learning Linux Binary Analysis

* The Rational Optimist: How Prosperity Evolves

* Systems Performance: Enterprise and the Cloud

* Python For Unix And Linux System Administration

* Linux System Programming

* Devops Troubleshooting: Linux Server Best Practices

* Brilliant Blunders: From Darwin to Einstein – Colossal Mistakes by Great Scientists That Changed Our Understanding of Life and the Universe

* In Search Of Excellence: Lessons from America’s Best-Run Companies

* Man’s Search for Meaning

* The Alchemist

* Systems Programming (McGraw-Hill computer science series)

* Mastering Regular Expressions

* The Design and Implementation of the FreeBSD Operating System

* Design Patterns: Elements of Reusable Object-Oriented Software

* The Art of Computer Programming: Volume 3: Sorting and Searching

* The Art of Computer Programming, Volume 2: Seminumerical Algorithms

* The Psychology of Computer Programming

* The Mythical Man-Month: Essays on Software Engineering

* Computer Networks

* Modern Operating Systems

* Essential System Administration: Tools and Techniques for Linux and Unix Administration

* Learning the bash Shell

* Classic Shell Scripting

* Unix Network Programming, Volume 1: Networking APIs – Sockets and XTI

* Linux Device Drivers

* UNIX Internals: The New Frontiers

* A Discipline of Programming

* Programming Pearls

* Software Tools

* The C Programming Language

* The Practice of Programming (Addison-Wesley Professional Computing Series)

* Linux System Programming: Talking Directly to the Kernel and C Library

* The Art of Computer Programming, Volume 1: Fundamental Algorithms

* The UNIX Programming Environment

* Advanced Programming in the UNIX Environment

* Programming Perl

* sed & awk

* Linux Kernel Development

* Understanding the Linux Kernel

* The Design of the UNIX Operating System

* The Art of UNIX Programming

* The Cathedral & the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary

* Mindset: The New Psychology of Success

* The Art of Thinking Clearly

* The Bed of Procrustes: Philosophical and Practical Aphorisms

* Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets

* Antifragile: Things That Gain from Disorder

* The Black Swan: The Impact of the Highly Improbable

* The AWK Programming Language

* On Intelligence

* Lions’ Commentary on UNIX 6th Edition with Source Code

This list is certainly incomplete, as I have not yet been able to finish the other books I have been collecting and reading. Maybe in the future, when I get some spare time, I will induct them.

Linux IRQ And SysRq Effects

Well, the topic of this post is a little dense and important. But, I am not sure how much I shall be able to dish out to you for comfort. So, take it with a pinch of salt. 🙂

IRQ –> Interrupt Request.

What is an IRQ?1.

In a computer, an interrupt request (or IRQ) is a hardware signal sent to the processor that temporarily stops a running program and allows a special program, an interrupt handler, to run instead. Hardware interrupts are used to handle events such as receiving data from a modem or network card, key presses, or mouse movements.2

Okay, you can see the IRQs in the Linux system like this :

2023-09-14-084348_1366x768_scrot.png
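The listing in the screenshot above is, I believe, the contents of /proc/interrupts, the canonical place to look; the first column is the IRQ number, followed by per-CPU counts, the interrupt controller, and the device or handler name. A quick peek:

```shell
# Dump the first few lines of the kernel's IRQ accounting
# (the header row names the CPUs; each following row is one IRQ line).
head -5 /proc/interrupts
```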

Running irqtop3 shows like this :

2023-09-14-085122_1366x768_scrot.png

So, userspace tools can be a great help, provided you can parse the output properly, not only in this case but with most of the userspace tool’s output. But for that interpretation to take effect, you need to arm yourself with the understanding of how things work underneath. It will help you to decipher the essence of the result and help you to make the decision on the basis of it.

This is a bloody good pointer on interrupts4. There are various kinds of interrupts, and the system reacts to them differently. Here is the categorization I have picked out of that reference, for the lazy 🙂

  • Exceptions
  • Interrupt request OR Hardware Interrupt
    • IRQ Lines OR Pin-based IRQ
    • Message Signaled Interrupt
  • Software Interrupt

The IRQ-related stuff resides in the Linux kernel like this :

2023-09-14-093547_1192x60_scrot.png

Go and explore it.

SysRq5 is a mechanism to trigger specific system-level actions while the system is running, or even frozen.

For it to work, the Linux kernel must be compiled with the "CONFIG_MAGIC_SYSRQ" option enabled.

How do you invoke it??

Simply press ALT + SysRq + command key.

You can cross-check your running system kernel for the SysRq key

2023-09-14-105610_597x72_scrot.png

Check whether the mechanism is activated or not

2023-09-14-110056_472x77_scrot.png

The values of the SysRq

0 - disable sysrq completely
1 - enable all functions of sysrq
2 - enable control of console logging level
4 - enable control of keyboard (SAK, unraw)
8 - enable debugging dumps of processes etc.
16 - enable sync command
32 - enable remount read-only
64 - enable signaling of processes (term, kill, oom-kill)
128 - allow reboot/poweroff
256 - allow nicing of all RT tasks
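These values form a bitmask, so anything other than 0 and 1 is combined by simple addition. A tiny sketch (the sysctl line is illustrative and needs root):

```shell
# Allow only the "sync" (16) and "reboot/poweroff" (128) functions:
mask=$((16 + 128))
echo "$mask"   # → 144
# As root, it could then be applied with:  sysctl -w kernel.sysrq=144
# or persisted via a kernel.sysrq=144 line in /etc/sysctl.conf
```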

Please consult the Linux Kernel Documentations Page for SysRq Keys6.

Linux And Glibc Juxtaposed

Well, busting the myths around the misconceptions, or the vague understanding, of the relation between these two pieces of software is much needed.

So, here in this short post, I shall be giving you little but to-the-point explanations of what it looks like from an end-user perspective.

GLIBC –> GNU C Standard Library1.

Linux Kernel Interface API2.

Let’s start with a blunt truth: they are NOT dependent on each other and never have been. Period.

In fact, only a few of the glibc components depend on the way the kernel exposes the facilities it offers. Let me touch upon a couple of them, so they become more vivid to you.

For instance, the malloc facility provided by glibc has nothing to do with the kmalloc we have in the Linux kernel. Similarly, free and kfree get the same treatment.

Glibc3 provides wrappers around the common system calls for better manageability of software. But the Linux kernel4 has most of its own functions written from the ground up, as per its requirements. Glibc comes with various libraries, not limited to C but extending to math too.

You could build separate glibc or build multiple of various versions of it in the same machine and use it as per your requirements.

The beauty of glibc as the core library is that its developers made sure older programs keep working with newer versions. Why? Because glibc supports symbol versioning5.

Note that in Linux it is the combination of the kernel and glibc that provides the POSIX API. Glibc adds a decent amount of value – not every POSIX function is necessarily a system call6, and for the ones that are, the kernel behavior isn’t always POSIX conforming.

There is a stark difference between calling a normal function and calling a system call7; the involvement of the kernel is NOT normal. So, there has to be a way to attach the facility that calls those system calls, mainly because the linker would otherwise complain about unresolved symbols. Every system call is architecture-specific, and an assembly-language thunk is used to call into the kernel. Glibc provides the mechanism underneath to deal with that complexity. If you are so inclined, you might implement your own way of calling that stuff, or use some other alternative to do the same.

But the applications you build and run are probably affected by glibc much more than by anything else. However, you could use other variants of it, like Musl8.

Finally, I want to leave you with a cautionary note about glibc upgrades: it has been reported many times by end users that fiddling with it breaks system consistency left and right. So, if you are inclined to deal with stuff you probably don’t fully know, then go ahead. Otherwise, please give yourself some peace of mind by sticking with the distribution’s way of doing things. At least people are paid to fix the damn thing if all hell breaks loose, and then it is certainly not your problem.

Emacs As Mail Client Specifically as Mu4E

Alright, I am going to be gun-barrel straight about a specific facility inside Emacs as a mail client, and that is called Mu4E1. There are others, and I do use them too, but that is not what I should spill over into this post. Hence, the confinement.

I have said it before and will say it again: Emacs is a kind of department store and houses so many things under one roof. But it is entirely up to the end user to decide which ones to opt for and which ones to leave out. I have been careful about that notion, solely driven by the ethos of not bloating the damn thing.

In quest of that, I have learned and inducted some facilities into my Emacs configuration and have been using them for some time now. One of them is Mu4E, and I kinda like it. Importantly, it does the job I intended it to do.

If you missed somehow my config towards that specific facility inside Emacs, then let me provide you with exact details about it, right here for your convenience sake…

;;Mu4e setup
(setq load-path (append load-path '("~/.emacs.d/mu/mu4e")))
(require 'mu4e)

(setq user-full-name "Bhaskar Chowdhury"
      user-mail-address "unixbhaskar@gmail.com")
(setq mu4e-get-mail-command "getmail"
      mu4e-update-interval 300
      mu4e-attachment-dir "~/attachments")

(setq mu4e-mu-binary "/usr/local/bin/mu")

This is the basic thing you need to induct in your dot emacs or init.el file to get it working. Change the content of it as per your need.

You are supposed to have getmail2 installed in the system beforehand to retrieve the mail by it.

But there is more….

;;(require 'org-mu4e)
(require 'mu4e-contrib)
(require 'smtpmail)
(auth-source-pass-enable)
(setq auth-source-debug t)
(setq auth-source-do-cache nil)
(setq auth-sources '(password-store))
(setq message-kill-buffer-on-exit t)
(setq message-send-mail-function 'smtpmail-send-it)
(setq mu4e-attachment-dir "~/attachments")
(setq mu4e-compose-complete-addresses t)
(setq mu4e-compose-context-policy nil)
(setq mu4e-context-policy 'pick-first)
(setq mu4e-view-show-addresses t)
(setq mu4e-view-show-images t)
(setq smtpmail-debug-info t)
(setq smtpmail-stream-type 'starttls)
(setq mm-sign-option 'guided)

Pretty dense, right? But these damn things bring some integration with the software and an interface for convenience. It certainly does not end here, so there is more to add for the sake of a fully functioning system… so follow along.
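One thing the stanza above leaves implicit is where smtpmail should actually connect for outgoing mail. Here is a minimal sketch; the server name, port, and user below are my assumptions (a typical Gmail setup), not part of the original config:

```lisp
;; A sketch: tell smtpmail which server to talk to. With
;; auth-sources pointing at password-store (as above), smtpmail
;; will look the credentials up in pass -- typically under an
;; entry named after the host, e.g. "smtp.gmail.com"; that entry
;; name is an assumption here.
(setq smtpmail-smtp-server "smtp.gmail.com"
      smtpmail-smtp-service 587
      smtpmail-smtp-user "unixbhaskar@gmail.com")
```

With `smtpmail-stream-type` set to `'starttls` as in the block above, port 587 is the usual companion choice.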

(use-package mu4e
     :ensure nil
     :config

     (setq mu4e-change-filenames-when-moving t)
     (setq mu4e-update-interval (* 10 60))
     (setq mu4e-get-mail-command "getmail")
     (setq mu4e-maildir "~/gmail-backup")
     (setq mu4e-sent-folder "/sent")

     (setq mu4e-maildir-shortcuts
       '( (:maildir "/Inbox"              :key ?i)
       (:maildir "/Greg(GKH)"             :key ?g)
       (:maildir "/Linus"                 :key ?l)
       (:maildir "/Andrew_Morton"         :key ?a)
       (:maildir "/Al_Viro"               :key ?v)
       (:maildir "/Jonathan_Corbet"       :key ?j)
       (:maildir "/Paul_E_McKenney"       :key ?p)
       (:maildir "/linux-kernel"          :key ?k)
       (:maildir "/Thomas_Gleixner"       :key ?t))))

Well, change the maildirs as per your mail storage and communication style, but the syntax will look like this for sure.

A few little things to add to all of the above pieces, and here they are:

;; Mu4e Alerts
 (use-package mu4e-alert
     :after mu4e
     :hook ((after-init . mu4e-alert-enable-mode-line-display)
            (after-init . mu4e-alert-enable-notifications))
     :config (mu4e-alert-set-default-style 'libnotify))
;; Visual line mode and Flyspell mode
(add-hook 'mu4e-view-mode-hook #'visual-line-mode)
(add-hook 'mu4e-compose-mode-hook 'flyspell-mode)
(setq mu4e-compose-in-new-frame t)
(setq mu4e-compose-format-flowed t)

The comment lines should tell you what the above stanza does. You are supposed to have the libnotify package installed on the system beforehand. Almost everyone has it these days, because other things in the system use it too. So, it is good to have.

A little bit more adds to the niceties of it, so here it is:

;;Refiling folders

(setq mu4e-refile-folder
  (lambda (msg)
    (cond
      ;; messages from Linus go to the /Linus folder
      ((mu4e-message-contact-field-matches msg :from
         "torvalds@linux-foundation.org")
        "/Linus")
      ((mu4e-message-contact-field-matches msg :from
         "viro@zeniv.linux.org.uk")
        "/Al_Viro")
      ((mu4e-message-contact-field-matches msg :from
         "gregkh@linuxfoundation.org")
        "/Greg(GKH)")
      ((mu4e-message-contact-field-matches msg :from
         "akpm@linux-foundation.org")
        "/Andrew_Morton")
      ((mu4e-message-contact-field-matches msg :from
         "corbet@lwn.net")
        "/Jonathan_Corbet")
      ((mu4e-message-contact-field-matches msg :from
         "paulmck@kernel.org")
        "/Paul_E_McKenney")

      ;; messages sent by me go to the sent folder
      ((cl-find-if
         (lambda (addr)
           (mu4e-message-contact-field-matches msg :from addr))
         (mu4e-personal-addresses))
        mu4e-sent-folder)
      ;; everything else goes to /archive
      ;; important to have a catch-all at the end!
      (t  "/archive")
)))

So, if you are ever undecided about your mails, then you might include a stanza like this, and it will segregate things for you.

Well, we are almost there to get every piece together and run it flawlessly 🙂

;; mu4e marks

(add-to-list 'mu4e-marks
  '(tag
     :char       "g"
     :prompt     "gtag"
     :ask-target (lambda () (read-string "What tag do you want to add?"))
     :action      (lambda (docid msg target)
                    (mu4e-action-retag-message msg (concat "+" target)))))

(add-to-list 'mu4e-marks
  '(archive
     :char       "A"
     :prompt     "Archive"
     :show-target (lambda (target) "archive")
     :action      (lambda (docid msg target)
                    ;; must come before proc-move since retag runs
                    ;; 'sed' on the file
                    (mu4e-action-retag-message msg "-\\Inbox")
                    (mu4e~proc-move docid nil "+S-u-N"))))

(mu4e~headers-defun-mark-for tag)
(mu4e~headers-defun-mark-for archive)
(define-key mu4e-headers-mode-map (kbd "g") 'mu4e-headers-mark-for-tag)
(define-key mu4e-headers-mode-map (kbd "A") 'mu4e-headers-mark-for-archive)

Marking specific mail with a letter for its significance or arrival makes it easy to wade through thousands of mails (which is what I am accustomed to) with ease. It works as a filter too.

A few nifty things have to be added for the sake of completeness, so here we are, almost at the fag end… hold on…

(custom-set-variables
 '(mu4e-display-update-status-in-modeline t)
 '(mu4e-icalendar-diary-file "~/.emacs.d/OrgFiles/refile.org")
 '(mu4e-mu-binary "/usr/local/bin/mu"))

Essentially, you have to have the mu binary/package installed on the system for all of this to work, and you have to point Emacs to the location of the binary.
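For completeness, here is roughly how mu's database gets bootstrapped from the shell before mu4e can find anything. This is a sketch that reuses the maildir path, binary location, and address from earlier in this post; adapt it to your own setup:

```shell
# Initialize mu's database against the maildir used above,
# then build the index so mu4e has something to query.
/usr/local/bin/mu init --maildir=~/gmail-backup \
    --my-address=unixbhaskar@gmail.com
/usr/local/bin/mu index
```

Once the index exists, mu4e keeps it up to date on its own via the update interval configured above.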

I do use doom-modeline, so here are a few specific configurations related to that, if you use that thing 🙂

;; Whether to display the mu4e notifications. It requires the `mu4e-alert' package.
(setq doom-modeline-mu4e t)

It will show you the mail count status on the modeline itself. Nice to be notified, and it adds some visual flair.

Promise, these are the last few bits… I know it is already long… but I just want to make sure it looks complete… 🙂

;; Line number and Column number
(column-number-mode)

(dolist (mode '(org-mode-hook
                mu4e-main-mode-hook
                mu4e-view-mode-hook
                mu4e-compose-mode-hook
                mu4e-headers-mode-hook
                mu4e-org-mode-hook))
  (add-hook mode (lambda () (display-line-numbers-mode 0))))

(add-hook 'text-mode-hook #'display-line-numbers-mode)
(add-hook 'prog-mode-hook #'display-line-numbers-mode)

This is just to make sure that line numbers do not show up in the compose buffer and other buffers related to mu4e, because that would be annoying.

Here is something that captures the mail-related stuff ….

;;Org mode stuff
(define-key mu4e-headers-mode-map (kbd "C-c c") 'org-mu4e-store-and-capture)

Phew! Finally done with the groundwork, and now it is time to bring up the interface with a keyboard shortcut, so I have this in my dot Emacs (it could be different for you):

;; Mu4e shortcut

(global-set-key (kbd "M-m") 'mu4e)

Now for the eye candy stuff to show the actual interface 🙂

[Screenshot: the mu4e main view (2023-09-05-094318_1366x768_scrot.png)]

You can press the letters between the [] brackets to get into the corresponding maildirs or to perform certain actions.

You are encouraged to read this page3 to get yourself accustomed to this software.