Linux Consolidated Advance Materials

TL;DR It is a longish post, if you run out of patience, that is certainly not my fault. 🙂

Well, everything I am going to dish out in this post comes out of my long-accumulated note files. I thought these might be useful to some people who are looking for materials to study and apply. This is in no particular order; I just scanned through and found a few I can share at this moment. So, pay attention, for your own benefit. 🙂

Makefile variables shorthand

Why? Delving into Linux sooner or later pushes you into a situation where you are supposed to write a makefile for your own good. And here are some file-specific variable shorthands (make's "automatic variables") that might come in handy.

$@: the target filename.
$*: the target filename without the file extension (the stem).
$<: the first prerequisite filename.
$^: the filenames of all the prerequisites, separated by spaces, with duplicates removed.
$+: similar to $^, but includes duplicates.
$?: the names of all prerequisites that are newer than the target, separated by spaces.
@ at the start of a recipe line: suppress echoing of the command itself.
$(info the message that needs to be printed): print an informational message while make processes the makefile.
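To make that concrete, here is a minimal sketch of a makefile using those automatic variables (the file names are made up for illustration; recipe lines must start with a tab):

prog: main.o util.o
	@echo "linking $@ from $^"    # leading @ suppresses echoing of the command
	cc -o $@ $^

%.o: %.c common.h
	$(info compiling $< into $*.o)
	cc -c -o $@ $<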

Library linking breakage and fixing

This is a very common problem if you haven't lived with Linux for long. Why? There are lots of reasons behind it. So, knowing the reason and, importantly, knowing how to fix it might help in a good way.

The points below are extracted out of this: https://rosshemsley.co.uk/posts/linking

Points:

Dynamic or shared libraries are loaded up by your program at runtime. They contain lookup tables that map symbols to shared executable code. If you give someone a binary that links a dynamic library that they don’t already have, the OS will complain about missing libraries when they try to run it.

Dynamic or “shared” libraries have names that start with lib and finish with .so. Unless you’re on a Mac, where they end with .dylib.1

Dynamic libraries themselves can link to other dynamic libraries. These are known as transitive dependencies. All dependencies will need to be found to successfully run your binary.

If you want to move a binary from one machine (where it was compiled) to another, you’ll almost certainly find that at least some of the shared libraries needed by your binary are no longer found. This is usually the first sign of trouble…

Linux knows how to find libraries because it has a list of known locations for shared libraries in /etc/ld.so.conf. Each time you run ldconfig, the OS updates its cache of known libraries by going through directories in this file and reading the libraries it finds. OS X works differently… see ld and friends.

Use ldd (linux) or otool -L (OS X) to query your binary for the missing libraries. Beware that it is not safe to do this on a binary you suspect may be malicious 😞.

You can safely copy dynamic libraries from one machine to another. As long as the environments are similar enough…2 . In a perfect world (on linux), you could just copy the library you want to use into /usr/local/lib (the recommended place for unstable libraries) and then run ldconfig to make your OS reload its library cache.

Of course, on OS X things work totally differently. Dynamic libraries have an install name that contains the absolute path. This path is baked into your binary at compile time. You can use install_name_tool to change it. Good luck!

On linux, adding libraries to /usr/local/lib makes them visible to everything, so you may want to copy your library somewhere else so that only your binary knows how to find it. One way to do this is using rpath…

You can set the rpath attribute of your binary to contain a directory hint for your OS to look in for libraries. This hint can be relative to your binary. This is especially useful if you always ship libraries in a directory relative to your binary. You can use $ORIGIN (spelled @loader_path on OS X) as a placeholder for the path of the binary itself, so an rpath of $ORIGIN/lib causes the OS to always look in <path to your binary>/lib for shared libraries at runtime. This can be used on both OS X and linux, and is one of the most useful tools for actually getting things working in practice.
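A minimal sketch of how this looks with gcc on Linux (the library names below are made up; quote $ORIGIN so the shell does not expand it):

cc -o myprog main.o -L./lib -lfoo -Wl,-rpath,'$ORIGIN/lib'
readelf -d myprog | grep -E 'RPATH|RUNPATH'   # inspect what got baked in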

If your OS isn’t finding a dynamic library that you know exists, you can try helping your OS by setting the environment variable LD_LIBRARY_PATH to the directory containing it – your OS will look there first before default system paths. Beware, this is considered bad practice, but it might unblock you in a pinch. OS X has DYLD_LIBRARY_PATH, which is similar, and also DYLD_FALLBACK_LIBRARY_PATH, which is similar, but different (sorry).

Dynamic libraries also have a thing called a soname, which is the name of the library, plus version information. You have seen this if you’ve seen libfoo.so.3.1 or similar. This allows us to use different versions of the same library on the same OS, and to make non backwards-compatible changes to libraries. The soname is also baked into the library itself.

Often, your OS will have multiple symlinks to a single library in the same directory, just with different paths containing version information, e.g. libfoo.so.3, libfoo.so.3.1. This is to allow programs to find compatible libraries with slightly different versions. Everything starts to get rather messy here… if you really need to get into the weeds, this article will help. You probably only need to understand this if you are distributing libraries to users and need to support compatibility across versions.
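If you want to see the soname and the symlink chain yourself, a quick sketch (libfoo here is just an illustrative name):

readelf -d /usr/lib/libfoo.so.3.1 | grep SONAME
ls -l /usr/lib/libfoo.so*
# typical layout:
#   libfoo.so     -> libfoo.so.3     (dev symlink, used by the linker at build time)
#   libfoo.so.3   -> libfoo.so.3.1   (the soname, used by the loader at runtime)
#   libfoo.so.3.1                    (the actual file)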

Of course, even if your binary only depends on a single symbol in a dynamic library, it must still link to that library. Now consider that the dependency itself may also link other unused transitive dependencies. Accidentally “catching a dependency” can cause your list of shared library dependencies to grow out of control, so that your simple hello world binary ends up depending on hundreds of megabytes of totally unused shared libraries 😞.

One solution to avoiding “dependency explosions” is to statically link symbols directly into your binary, so let’s start to look at static linking!

Static libraries (.a files) contain a symbol lookup table, similarly to dynamic libraries. However, they are much more dumb and also a total PITA to use correctly.

If you compile your binary and link in only static dependencies, you will end up with a static binary. This binary will not need to load any dependencies at runtime and is thus much easier to share with others!
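As a quick sketch of what that looks like in practice (glibc may still print warnings about NSS for some programs; musl-based toolchains avoid that):

cc -static -o hello hello.c
file hello    # should report "statically linked"
ldd hello     # should report "not a dynamic executable"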

People On The Internet will recommend that you do not distribute static binaries, because it makes it hard to patch security flaws. With dynamic libraries, you just have to patch a single library, e.g. libssl.so, instead of re-compiling everything on your machine that may have linked the broken library without your knowledge (i.e. everything).

People who build production systems at companies recommend static libraries because it’s way the hell easier to just deploy a single binary with zero dependencies that can basically run anywhere. No one cares about how big binaries are these days anyway.

Still, more people on the internet remind you that only one copy of a dynamic library is loaded into memory by the OS even when it is used by multiple processes, saving on memory pressure.

The static library people remind you that modern computers have plenty of memory and library size is hardly the thing killing us right now.

The OS X people point out that OS X strongly discourages the use of statically linked binaries.

Static libraries can’t declare any kind of library dependencies. This means it is your responsibility to ensure all symbols are baked correctly into your binary at link time – otherwise, your linker will fail. This can make linking static libraries painfully error-prone.

If you get symbol not found errors but literally swear that you linked every damn thing, you probably linked a static library and forgot a transitive dependency that is needed by it. This pretty much sucks as it’s basically impossible to figure out where that library comes from. Try having a guess by looking at the error messages. Or something?

Oh, and you must ensure that you link your static libraries in the correct order, otherwise you can still get symbol not found errors.
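A sketch of the ordering rule with a traditional single-pass linker such as GNU ld (library names are made up): a library must appear after the objects or libraries that use its symbols.

cc -o prog main.o libfoo.a libbar.a   # works if libfoo.a calls into libbar.a
cc -o prog main.o libbar.a libfoo.a   # may fail with "undefined reference" errors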

If you are starting to think it might be hard to keep track of static libraries, you are following along correctly. There are tools that can help you here, such as pkgconfig, CMake, autotools… or bazel. It’s quite easy to get going and achieve deterministic platform-independent static builds with no dynamic dependencies… Said no one ever 😓.

One classic way to screw up, is to compile a static library without using the -fPIC flag (for “position independent code”). If you do not do this, you will be able to use the static library in a binary, but you will not be able to link it into a dynamic library. This is especially frustrating if you were provided with a static library that was compiled without this flag and you can’t easily recompile it.

Beware that -fpic is not the same as -fPIC. Apparently, -fPIC always works but may result in a few nanoseconds of slowdown, or something. Probably you should use -fPIC and try not to think about it too much.
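A small sketch of the difference (file names made up): objects built with -fPIC can go into both a static archive and a shared library, while objects built without it generally cannot be linked into a .so on x86-64.

gcc -c -fPIC foo.c -o foo.o
ar rcs libfoo.a foo.o            # usable as a static library
gcc -shared -o libfoo.so foo.o   # and as a shared library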

Your compiler toolchain (e.g. CMake) usually has a one-liner way to link a bunch of static libraries into a single dynamic library with no dependencies of its own. However, should you want to link a bunch of static libraries into another static library… well I’ve never successfully found a reliable way to do this 😞. Why do this you may ask? Mostly for cffi – when I want to build a single static library from C++ and then link it into e.g. a go binary.
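For the static-into-shared case, here is a rough sketch with plain gcc rather than the CMake route mentioned above; it only works if the archives were built with -fPIC, which ties back to the pitfall described earlier (archive names are made up):

gcc -shared -o libcombined.so \
    -Wl,--whole-archive liba.a libb.a -Wl,--no-whole-archive
# --whole-archive forces every object from the .a files into the output,
# instead of only the objects that resolve currently-undefined symbols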

Beware that your compiler/linker is not smart! Just because the header files declare a function and your linker manages to find symbols for it in your library, doesn’t mean that the function is remotely the same. You will discover this when you get undefined behavior at runtime.

Oh, and if the library you are linking was compiled with a #define switch set, but when you include the library’s headers, you do not set the define to the same value, welcome again to runtime undefined behavior land! This is the same problem as the one above, where the symbols end up being incompatible.

If you are trying to ship C++, another thing that can bite you is that the C++ standard library uses dynamic linking. This means that even the most basic hello world program cannot be distributed to others unless they have a compatible version of libstdc++. Very often you’ll end up compiling with a shiny new version of this library, only to find that your target is using an older, incompatible version.

One way to get around libstdc++ problems is to statically link it into your binary. However, if you create a static library that statically links libstdc++, and your library uses C++ types in its public interface… welcome again to undefined behavior land ☠️.

Another piece of classic advice is to statically link everything in your binary apart from core system libraries, such as glibc – which is basically a thin wrapper around syscalls. A practical goal I usually aim for is to statically link everything apart from libc and (preferably an older version of) libstdc++. This seems to be the safest approach.
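With g++ that approach roughly looks like the sketch below; the flags are real, but whether this is appropriate depends on your code and your target systems:

g++ -o myprog main.o -static-libstdc++ -static-libgcc
ldd myprog   # ideally only libc/libm/ld-linux (and friends) remain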

Ultimately, my rule of thumb for building distributed systems is to statically link everything apart from libc and (an older version of) libstdc++. You can then put this library/binary into a Debian package, or an extremely lightweight Docker container that will run virtually anywhere. Setting up the static linking is a pain, but IMO worth the effort – the main benefits of dynamic libraries generally do not apply anymore when you are putting the binary in a container anyway.

Finally, for ultimate peace of mind, use a language that has a less insane build toolchain than C++. For example, Go builds everything statically by default and can link in both dynamic and static libraries if needed, using cgo. Rust also seems to work this way. Static binaries have started becoming fashionable.

LD_LIBRARY_PATH Trouble and Solutions

It is very often caused by a misplaced or mismatched library in the system.

Pointer : https://www.hpc.dtu.dk/?page_id=1180

This little note is about one of the most “misused” environment variables on Unix systems: LD_LIBRARY_PATH. If used right, it can be very useful, but very often – not to say, most of the time – people apply it in the wrong way, and that is where they are calling for trouble. So, what does it do?

LD_LIBRARY_PATH tells the dynamic link loader (ld.so – this little program that starts all your applications) where to search for the dynamic shared libraries an application was linked against. Multiple directories can be listed, separated by a colon (:), and this list is then searched before the compiled-in search path(s), and the standard locations (typically /lib, /usr/lib, …).

This can be used for

  • testing new versions of a shared library against an already compiled application
  • re-locating shared libraries, e.g. to preserve old versions
  • creating a self-contained, relocatable(!) environment for larger applications, such that they do not depend on (changing) system libraries – many software vendors use that approach.

Sounds very useful, where is the problem?

Yes, it is useful – if you apply it in the way it was invented for, like the three cases above. However, very often it is used as a crutch to fix a problem that could have been avoided by other means (see below). It is even getting worse, if this crutch is applied globally into a user’s (or the system’s!) environment: applications compiled with those settings get dependent on this crutch – and if it is eventually taken away, they start to stumble (i.e. fail to run).

There are other implications as well:

  • Security: Remember that the directories specified in LD_LIBRARY_PATH get searched before(!) the standard locations? In that way, a nasty person could get your application to load a version of a shared library that contains malicious code! That’s one reason why setuid/setgid executables ignore that variable!
  • Performance: The link loader has to search all the directories specified until it finds the directory where the shared library resides – for ALL shared libraries the application is linked against! This means a lot of calls to open() that will fail with “ENOENT (No such file or directory)”! If the path contains many directories, the number of failed calls will increase linearly, and you can tell that from the start-up time of the application. If some (or all) of the directories are in an NFS environment, the start-up time of your applications can really get long – and it can slow down the whole system!
  • Inconsistency: This is the most common problem. LD_LIBRARY_PATH forces an application to load a shared library it wasn’t linked against, and that is quite likely not compatible with the original version. This can either be very obvious, i.e. the application crashes, or it can lead to wrong results if the picked-up library does not quite do what the original version would have done. Especially the latter is sometimes hard to debug.

How can I check which dynamic libraries are loaded?

There is the ldd command, that shows you which libraries are needed by a dynamically linked executable, e.g.

$ ldd /usr/bin/file
        linux-vdso.so.1 =>  (0x00007fff9646c000)
        libmagic.so.1 => /usr/lib64/libmagic.so.1 (0x00000030f9a00000)
        libz.so.1 => /lib64/libz.so.1 (0x00000030f8e00000)
        libc.so.6 => /lib64/libc.so.6 (0x00000030f8200000)
        /lib64/ld-linux-x86-64.so.2 (0x00000030f7a00000)

This is a ‘static’ view, since ldd doesn’t show libraries that only get loaded at runtime on demand, e.g. by a library or program that loads others via dlopen(). To get an overview of the libraries actually loaded at runtime, you can use the pldd command:

$ ldd /bin/bash
        linux-vdso.so.1 =>  (0x00007ffff63ff000)
        libtinfo.so.5 => /lib64/libtinfo.so.5 (0x0000003108a00000)
        libdl.so.2 => /lib64/libdl.so.2 (0x00000030f8600000)
        libc.so.6 => /lib64/libc.so.6 (0x00000030f8200000)
        /lib64/ld-linux-x86-64.so.2 (0x00000030f7a00000)
$ pldd 24362
24362:  -bash
/lib64/ld-2.12.so
/lib64/libc-2.12.so
/lib64/libdl-2.12.so
/lib64/libtinfo.so.5.7
/usr/lib64/gconv/ISO8859-1.so
/lib64/libnss_files-2.12.so

As you can see, there are two more .so-files loaded at runtime, that weren’t on the ‘static’ list.
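On Linux, where pldd may not be installed, the same information can be pulled straight out of /proc (roughly what the Perl script mentioned in the note below does); a minimal sketch, using the current shell's PID:

awk '$6 ~ /\.so/ {print $6}' /proc/$$/maps | sort -u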

Note: pldd is originally a Solaris command that usually is not available on Linux. However, there is a Perl script available (and installed on our machines) that extracts this information from the /proc/<PID>/maps file.

How to avoid those problems with LD_LIBRARY_PATH?

A very simplistic answer would be: “just don’t use LD_LIBRARY_PATH!” The more realistic answer is, “the less you use it, the better off you will be”.

Below is a list of ways how to avoid LD_LIBRARY_PATH, inspired by reference [1] below. The best solution is on the top, going down to the last resort.

If you compile your application(s) yourself, you can solve the problem by specifying the correct location of the shared libraries and telling the linker to add those to the runpath of your executable, specifying the path in the ‘-rpath’ linker option:

cc -o myprog obj1.o ... objn.o -Wl,-rpath=/path/to/lib \
   -L/path/to/lib -lmylib

The linker also reads the LD_RUN_PATH environment variable, if set, and thus you can specify more than one path in an easy way, without having to use the above linker option:

export LD_RUN_PATH=/path/to/lib1:/path/to/lib2:/path/to/lib3
cc -o myprog obj1.o ... objn.o -L/path/to/lib1 -lmylib1 \
   -L/path/to/lib2 -lmylib2 ...

In both cases, you can check with ldd that your executable will find the right libraries at start-up (see above). If there is a ‘not found’ message in the ldd output, you have done something wrong and should review your Makefile and/or your LD_RUN_PATH settings.

There are tools around to fix/change the runpath in a binary executable, e.g. chrpath under Linux. The problem with this method is that the space in the executable that contains this information (i.e. the string defining the path) cannot be extended, i.e. you cannot add additional information – only overwrite an existing path. Furthermore, if no runpath exists in the executable, there is no way to add one. Read the man page for chrpath for more information.

If you can’t fix the executable, create a wrapper script that calls the executable with the right LD_LIBRARY_PATH setting. In that way, the setting gets exposed to this application only – and to the applications that get started by it. The latter can lead to the inconsistency problem above, though.

#!/bin/sh

LD_LIBRARY_PATH=/path/to/lib1:/path/to/lib2:/path/to/lib3

export LD_LIBRARY_PATH

exec /path/to/bin/myprog "$@"

Testing a LD_LIBRARY_PATH from the command line:

$ env LD_LIBRARY_PATH=/path/to/lib1:/path/to/lib2:/path/to/lib3 ./myprog

This sets LD_LIBRARY_PATH for this command only. Do NOT do:

$ export LD_LIBRARY_PATH=/path/to/lib1:/path/to/lib2:/path/to/lib3

$ ./myprog

since this will pollute the shell environment for all consecutive commands! Never put LD_LIBRARY_PATH in your login profiles! In that way you will expose all the applications you start to this – probably problematic – path!

Unfortunately, some ISVs ship software that puts global LD_LIBRARY_PATH settings into the system profiles during the installation, or they ask the user to add those settings to their profiles. Just say no! Try to solve the problem by other means, e.g. by creating a wrapper script, or tell the vendor to fix the problem.

Glibc installation dependency

This is a crucial piece, because so much depends on this software. These are the packages (and the specific commands from them) that a glibc installation depends on:

Bash: sh
Binutils: ar, as, ld, ranlib, readelf
Diffutils: cmp
Fileutils: chmod, cp, install, ln, mknod, mv, mkdir, rm, touch
Gcc: cc, cc1, collect2, cpp, gcc
Grep: egrep, grep
Gzip: gzip
Make: make
Gawk: gawk
Sed: sed
Sh-utils: date, expr, hostname, pwd, uname
Texinfo: install-info, makeinfo
Textutils: cat, cut, sort, tr

Beginner’s guide to Linker

How pipes work in Linux

Toolchains

https://www.toolchains.net/

This is good enough to ponder for a few months.

Linux Sans Flashy Stuff To Mitigate Distraction

Well, I have absolutely no clue how you operate your computer and certainly don’t want to guess the pattern. Because every workflow is quite different, unless you are paid to follow the rules of the shop from where you earn your month-end paycheck.

What I adopted two decades back and tried to apply in general in my life has made me fall on my face many a time. But, unlike my other lacunae, I am blessed with stubbornness, so I get over the fall as quickly as possible. But that is not always the case, though.

People who are not born with, or do not have, the mettle to excel cognitively at scale have to have some sort of mechanism in their armory to get along, or to provide stuff that helps them survive and secure the basic amenities of life.

By any stretch of the imagination, I cannot count myself in that category of people who do things effortlessly. I am jinxed with the fact that, most of the time, it takes at least a second effort to get the damn thing out the way I expect. Except for a few cases where it clicked on the first try, lucky me!

Computing is an uphill task to get along with smoothly, whether you believe it or not, at this juncture of time where you are facilitated by many supportive mechanisms. I can recollect that a decade or two ago, it was not as affluent as it is today. A boon in disguise. Kiddos embrace all the offerings with open arms, without giving much thought to the avalanche of trouble it also brings along. Nope, I am not trying to rant about it, but to make sure people get it.

The glittering effect of cheap and shiny things keeps the power of simplicity at bay. People jump the gun just by looking at the tip of the iceberg. Also, I don’t blame most people; they have been fed with aplomb to engulf it and react to what is provided. But the consequences are neither pleasant nor desirable.

How do you counter this ever-thrusting implication on you, especially related to computing (which essentially reflects on life, because it is so integral today)? Fortunately, there are ways to help yourself. It is a long route, but a better route to travel.

Let me start with what I did, and continue doing, to get my life aligned with my expectations. See! The stress on my expectations. That matters a lot. When I decided that computing would be a thing I would be doing for the rest of my life, I had to give a long thought to what was necessary to make me feel that I am not lost. In that quest, I have applied a few things to my day-to-day computing, which have helped me immensely to date.

Here are a few things:

  • I had to figure out what interested me the most about the computing platform.
  • What exactly was I going to do with my interest in that platform?
  • Eliminate so-called recommended stuff and evaluate things for my needs.
  • Stay curious to learn from people I believe have done something useful.
  • Don’t follow blindly what every other tom-dick-harry was doing at that time.
  • Embrace DIY with a vengeance and figure out how to collaborate with other people.

…the list could go on to some length, which I refrain from doing. It was not an easy decision to practice, but one to live with every single day. Lots of little incidents impacted the way I have trimmed down my present computing scenario; it is the effect of sticking with my liking.

I am inclined to do better for myself and, importantly, to find any possible good way to do better for people, not just by saying mere words (that kind have been flooding the space for a long time; miraculously they are getting away with it, what a skill!) but by providing actual work to benefit them.

Shortcomings should not be an excuse to defer doing something that benefits you and, importantly, others. I personally do not hold back, knowing my limitations very well. Trying to improve has an ingrained pain, and overcoming that is certainly not fucking fun. But until I have made an attempt to overcome it (and I am still doing so), how do you know what the enigma feels like when things get stuck, or what the ecstasy means when things work?

I am a dogged worker and I cling on to something I value. Adopting something takes way more time and discarding becomes so natural. Nope, you don’t have to be blatantly dismissive in your actions to show your preferences.

Linux, and the real people around it, teach me how to be focused and productive. The posers are fading away more quickly than I had imagined. People I look up to are a rare species and show all the qualities of better human beings. They are a handful, and I am happy with that number.

I wrote about Linux And The Rule Of Elimination in my previous post about the endeavor I am into.

Sticking to my requirements is the key to letting go of all the fads, and sticking to a platform for a long time gives you a benefit which certainly serves me well: I can evaluate things in that spectrum much more concisely. I am not afraid of, nor have I ever hesitated in, asking for help or a clue. The primary reason is not to be stuck with something that hinders progress.

Linux provides pain, whether you like it or not. Sometimes quite unbearable and I was on the verge of giving up. Alas! My tenacity to discover more about it has not allowed me to do so. This brings a lot of learning curves and days of head-banging. Also, seeking help is probably easy from the known places but getting the right kind of help needs lots of work involving so much elimination. Why? Because, most places provide you with help, based on a certain set of understanding, which might fit or might not fit. Then it comes to the evaluation phase to understand the help. Nope, blatant induction is certainly NOT the way forward to understand something correctly.

The easy route is for the timid. It sounds great, but the tough route actually imposes so much of a burden on your shoulders to learn the correct thing to make it an easy thing in the future. And it is time-consuming. Whoever told you otherwise, is probably a schmuck or has lived a utopian life. So, that kind of information gets trashed the moment I hear it without engaging my gray cells.

Linux allows you to have software of your choice and a system to be made of your choices. No, it comes with the cost of being prudent and vigilant of your stuff, a slip will gobble up your and others’ time immensely. But, that is how you live on the edge, the thrill of dying looms large if you are not careful enough.

On the enterprise, the impact is less, because of two main things:

  • The software comes from renowned vendors, who are providing service too.
  • The stuff is more stable than bleeding edge, so it has predictability.

But it is always a case of diverging interests between two poles, enterprise and personal computing. Nothing new about it. Stability has its priority in the enterprise, and it is bound to. Because businesses cannot be run on fragility, can they? Likewise in the embedded world, where predictability is prized highly because of the nature of the devices the implementation happens on.

I used to hop a lot when I was in a nascent stage of using Linux. And I gradually realized that it cost me my time dearly, and nothing new and useful was getting added to my repertoire. I have written about that phenomenon of the distribution-hopping surge here: Nope, Linux Distro Hopping Doesn’t Help. When I stopped doing so, it allowed me to concentrate more on the problem and, in essence, solve it with much more ease. So, in essence, your head does not get fragmented over some vendor’s peculiarity of dishing out their offering.

I have written it many a time before; one more time for the sake of refreshing your memory: I have various Linux distributions (Gentoo, Slackware, and Debian) in their own physical slices, and things get transferred from machine to machine. A very specific window manager (I have been sticking with it for the past 7 years now, and the chances of moving away from it are dim), i.e. the i3 window manager, and a specific set of software drive my day-to-day activity. Unfortunately, there is nothing fancy I need or crave; you see the limitation of my choice. IOW, pretty boring.

I don’t know, it never strikes me to jump on or integrate something others are using just because it is getting popularized by some fad. Yes, I was influenced when I saw people I value use some specific software that intrigued me in some way. But it takes time to evaluate, from my perspective, whether it is a good fit or not. If a person is fluent in something, does it make any sense for my workflow? Oftentimes, the answer is a plain NO. Now, I have adopted some software after being intrigued by a demo from specific people I value, and the point is that I am still using that software daily, almost 15 years later. So, once I am hooked on something, I generally do not let it go easily.

Now, the question might pop up in your head: is it good to be confined? In what way? If the confinement brings value and produces more useful stuff, don’t hesitate.

Linux And The Rule Of Elimination

Alright, living with something I love and care deeply about with all my shortcomings has enabled me to look inside me and my life.

Embracing Linux way back in the late 90s was one of the bloody good decisions I have made in life, and one that I am so proud of. All the ill advice and the glittering popularity of shallow, shiny stuff never fazed me the slightest bit. See, my tenacity is reflected in my words, and I am not saying it for the sake of getting traction.

Predominantly, the adoption of a particular technology at that nascent stage of my technological journey helped me fathom the nuances of surviving with it. And since the lesson was learned quite early in the process of getting along, it seems to have had a profound impact on my life.

Initially, two things mattered most in the dive into this:

  • My lack of a particular bent of mind.
  • The possibility of doing something useful without being clogged.

And that initiated a lot for me. The route is long and torturous (like everyone else’s), but the fruit it bears gives me the deep satisfaction of accomplishing something I always wanted to. Nope, I am not complacent, nor will I ever be, because the damn thing called life keeps coming at you all the time, asking you to embrace something to go forward.

There were phases where I was weighed down by ever-so-present people’s blatant promotion of some very feeble stuff. Later, I discovered, or came to know, that it was more of a one-of-the-herd thing to keep their sanity and ego intact. I never bought that notion, nor encouraged or entertained anyone to impose it on me. Do I miss out because of that? Probably, but I just didn’t give a f… to ponder it.

It took me more time than normal to get over the depicted “norm” of the schmuck’s armory than to find out what actually plays well. Again, this kind of realization led me to some thoughts:

  • I have nothing to prove to anyone.
  • Nor do most people’s voices ever matter to life.
  • Pleasing everyone is not possible and you have to take some tough call.

I have been sticking to those thoughts and boy! It has provided me rich benefits over the decades.

Eliminating stuff and people who I figure are not good enough to be part of my life is the key and it takes me more time to evaluate than necessary. But, in the end, I believe, I made the right decision which helped me survive in this harsh world.

Linux/UNIX helped inject in me the thought that “minimalism” is the thing, and it should be nurtured and embraced with a vengeance, whether it is about software/hardware selection or life in general.

Nope, it is not all hunky-dory. It has an inherent cost of “not being part of” something very dominant. But, at least that allows me to have a say in my way, damnit that is what matters most.

My efficiency grew tenfold when I started to discard things, and also people, from my life with conscious understanding. I am measuring it all by my own way of looking at life, not by the “said rules of life”. Did I falter? Yes, I did, on too many occasions. I felt miserable then, but like everything else, either it healed itself or I put in a conscious effort to get over it. Because sticking with that trauma will not help me move forward.

Learning from better people and trying to embrace their stuff in my own way of adopting made me realize that putting in an honest effort has long-term value. And I have been hooked on it since then.

Also, by being really blunt, when I don’t like something, generally, I don’t hold back but let the offender know about my feelings. It clears things in a jiffy, either we are getting along or we are not. Nope, I am not living in a glass house and it proved too costly to live in it. 🙂

The technical aspect of elimination helps me narrow my computing down to a finite set of things which, importantly, I love to do. I have a specific set of software that helps me with day-to-day life operations on a computer. Special cases arrive very seldom, and I obey the rule only if I really need that ploy in my workflow.

Open-source software has, as one of its prime ethos, giving you freedom to live with. But it comes at a price, if you are willing to pay it. Do-it-yourself is one of the coherent parts of that freedom. Thankfully, the lesson was learned in the very early days of adopting it. Figuring things out with lots of trial and error and striking off what is not so important is a very time-consuming process, and it took me years of perseverance. Sure, we move forward and the requirements get uplifted, but not so much that you have to bring in everything that is available out there. The more you have, the more time it consumes to maintain and manage it all.

The famous words Less Is More hold very true, especially for me, for the reasons mentioned earlier in this article. It might be less compared to the plethora of other things, but it is intense to manage and needs more attention to detail. The overhead gradually goes away after some time, when you are settled with those limited options.

I have been running/using Linux exclusively for over two decades, and I am not even faintly disappointed with it. Yes, from time to time, especially in the early days, it quite often brought me to my knees, and there were days of longing and of withstanding the pain it inflicted upon me. It was mostly because of my lack of understanding and my going overboard with things that were not so important. Plus, learning has its bearing too. A self-taught person needs to be more careful and spend more time evaluating stuff before embracing it. Otherwise, the chances are high of getting your feet caught in a trap.

I have been aggressively eliminating stuff from my computing requirements, and in life too. The road is not smooth, nor will it ever be. Getting jolted quite often does not diminish my inclination to optimize resources for the better. Somehow, I just can’t let it go out of my system. IOW, I have been hooked on it for a long, long time.

Getting involved in Linux kernel development also taught me about the black-and-white enigma of the process. Visiting experts’ work and talking to them taught me to look for something in work that I was missing completely. Those little interactions with them cleared so many clouds in my mind, and I can apply the learned lessons more aptly now. Likewise, as I am getting older, my own stupidity and other people’s come to light, and blind faiths (had I any, if at all?) are going into oblivion.

It is not that I necessarily hold things tight all the time. I have been proven wrong many a time in the past. Should I be emotional about that fact? Rather, I swallowed the bitter pill and learned the lesson to forge ahead. I have to take a leaf out of better people’s lives and try not to engage in a spat for an unjustified reason, whatever the provocation may be.

Learning anything is, and always was, a steep curve for me, and I mentioned the reasons above. But that is certainly not a point to make again and again, until it starts to sound like an excuse not to do the necessary things to improve. I have a barrier (ingrained in me the extremely hard way) about whom to consume from and what to discard.

I am putting an effort to survive in this world like everyone else. I am not apologetic about my follies, without them I wouldn’t be true to myself or others.

Linux Tool Find Is A Swiss Army Knife Of System Administration

Okay, as stated at the fag end of the last article I wrote, I am going to write about a tool named find. And here we are; in this post, I will try to show you a minimal set of use cases for the tool. It is an invaluable tool in any sysadmin’s armory. Importantly, it makes it easier for them to use other stuff efficiently.

Just a note, there is a package maintained by most of the Linux distributions, named findutils1 and it has some other close-knit siblings in it. So, please be aware and get that package.

Generally, looking for something in the system can be done with find in a much faster way. The only impediment might be the damn syntax, especially those flags, until you get accustomed to them. Or you can automate a few of those frequently recurring activities by putting things inside a script and giving it an apt name (this is very important; I cannot stress enough the importance of having it).

Inside the script, always… always use find instead of other tools for finding. Why? Because its precise flags make things play well within your script.

Nope, I am writing this article NOT to show off a plethora of examples, which are littered all over the internet and are absolutely good for people. Here I am trying to make you understand (take that with a pinch of salt 🙂) and realize how good it is to use a tool which has stood the test of time and is still very relevant and kicking.

Because files are scattered across various locations in the system, due to the nature of the operating system’s requirements and the need for other system tools to be able to find and use them, we are often left wondering where the stuff we are looking for lives. The aid we seek in the very first place is to use find, from the command line or from the comfort of the editor environment. People have written many things which wrap the find command to suit the particular editor environment they are sitting in. I think that integration with editors saves people at least one step, which is to go out of that comfort zone, try things out at the command line, and then come back.

If you are a person who is more comfortable looking things up on the internet than using the man command at the terminal to see the documentation, then you might be interested in this page2; I love manpages that have examples.

Now, the thing is you are best served, when you use it on the command line or within a script. The precision it provides matches none other.

Alright…alright you are itching to see some examples and I am not going to disappoint you regarding that, but with a very limited set.

Finding the latest file in the directory

find $HOME/bibliography/pdf_docs/ -maxdepth 1 -type f -newermt $(date '+%F') -ls | gawk '{ print $11}' | sort -f -i -r | head -1

See! It does not work alone, but gets help from other well-known tools (i.e. head, gawk, sort) all the way.

Find empty files and directories

find . -type f,d -empty -print0

Here, find is instructed to find all the empty files and directories under the directory it is fired from. That dot signifies the local directory you are sitting in. And it takes an -empty flag too. Notice that the -type flag covers both files and directories with a comma-separated list. Lastly, it prints out the results NUL-terminated instead of newline-terminated.

The difference between -print and -print0 is that the former terminates each filename with a newline, while the latter terminates each with a NUL character (safer for filenames containing spaces or newlines).

Let me show you the difference visually 🙂

Find print expression with newline, this is the default

[screenshot: find output with -print, newline-separated]

And this is the default way find prints stuff on stdout.

Find print expression without the newline aka -print0

[screenshot: find output with -print0, NUL-separated]

This form is more suitable to infuse with other tools along the pipeline.

Find any specific filename extension with regex

[screenshot: finding files with a specific extension using find's regex support]
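Since the screenshot does not carry over well here, a rough equivalent of the idea, assuming GNU find (the extensions are just examples):

find . -type f -regextype posix-extended -regex '.*\.(pdf|epub)$'
# note: -regex matches against the whole path, not just the base name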

Find dot files, use fzf to list them, and then use vim to open the selected one

find $dir -maxdepth 1 -name ".*" -type f | fzf | xargs -I {} vim {}

That $dir variable has to hold the directory path where you want to search. It also introduces another important flag, i.e. -maxdepth, which can be a great help if you are maneuvering in a big directory with lots of files but want to restrict the search to a specific directory depth.

I have mentioned that the findutils package comes bundled with other utilities too, and those included are:

  • locate3
  • updatedb4
  • xargs

To use locate you have to build the database first by running updatedb. People generally run this command from cron at certain intervals. Because the database has to be kept in sync with the system, running it by hand doesn’t make much sense.
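Basic usage looks roughly like this (updatedb usually needs root, or is run for you by a cron/systemd timer):

sudo updatedb            # (re)build the file-name database
locate libmagic.so       # query the database, much faster than scanning the disk
locate -i bibliography   # case-insensitive query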

Here is how it can be used from the command line:

[screenshot: locate being used from the command line]

About xargs!5 Ah, it is the closest cousin of find and is used extensively with it. You saw an example just above, where I used it to run Vim against the output of the piped command.

Let me show you three classic cases of using xargs aptly 🙂

Move files from A to B in a bunch at once

find . -name "*.bak" -print0 | xargs -0 -I file mv file ~/old

This command looks for files with a specific extension, i.e. .bak, prints them NUL-terminated (as I showed you above), and then pipes them to xargs to move them to a specific directory.

Using Sed6 to change something

ls |xargs -n1 -I file sed -i '/^Beg/d' file

Add file name to the first line of a file

ls |sed 's/.txt//g'|xargs -n1 -I file sed -i -e '1 i\>file\' file.txt

DON’T use file as file name literally, you have to use the actual file name

Linux Unix Tools Cheat Sheets At One Place

Alright, here we are. I thought it would be good to have a place where all the scattered stuff is put together for the really lazy people, so they might be able to copy and paste stuff from this place. (Dangerous ploy, please don’t.)

HEADS UP! For heaven’s sake and for your own betterment’s sake, please spend some time figuring out two things:

– Why the heck it is happening?

– And how the heck it is happening?

See the difference! And it is a bloody important distinction. It is kind of an ingrained requirement for getting ahead and doing something meaningful. Those queries keep you on your toes, and for a good reason. It certainly takes up your time, but why be a miser about it when the damn thing justifiably demands it? You will soon come to know why it is important to spend time on the things with which you want to do productive work.

I am going to list a few PDFs & text files as references to fall back on. In fact, the main motivation is to get your attention on them. And practice as often as you can.

GREP Manual

Grep Manual

You can always query the DuckDuckGo search engine about the specific cheat sheet and get it right on the first search page, like this:

[screenshot: DuckDuckGo results for a cheat sheet query]

 

No, I haven’t forgotten a massively important tool, i.e. find, which we call the Swiss Army knife of system administration. I shall be writing about it in a couple of days. It deserves its own dedicated post.

Emacs Regex

Well, let’s start with an oddity it imposes on its users. Nonetheless, you will get accustomed to its follies, once you fall in love with it. (It applies to every other context too).

The first thing you notice is that the damn regex engine in Emacs doesn’t allow the \d notation of regular expressions. But that is quite a standard way of representing a digit in most other regex flavors. Well, you have to express digits with normal syntax1 classes, e.g. [0-9] or [[:digit:]].

Second, learn the Emacs way of invoking regex searches and replacements. There are a handful of commands:

[screenshot: list of Emacs regex-related commands]

And this thing is extracted out of much recommended Emacs Wiki2.

Third, there is some very common syntactical notation you must/should learn at the very beginning, which you can find under Syntax Of Regular Expressions.

Fourth, Emacs has a macro named rx3, which converts human-readable syntax to regular expressions, a very handy tool to make things easy.

Fifth, you might give the regex builder4 a shot. This little window allows you to build the damn regex interactively inside Emacs.

Sixth, you can always look at the Emacs syntax table5; this is for meta information.

Seventh, you can always bring up the syntax table by invoking C-h s (describe-syntax), and it will show the syntax classes related to the specific mode you are in.

Eighth, this is a good place to find some Emacs Regex Examples6.

Vim Regex

Alright… alright, this is such a beast that after a certain time you just cannot ignore the damn thing. I am outlining Vim’s regex here in this post.

Here are a few important things to remember; I am skipping the details for a later post. Let’s get the thing up and running, IOW, get accustomed to it.

1) / --> general search, forward ...bonus: stay in search mode and move between matches with C-g and C-t

2) .  --> any character

3) ?  ---> search backwards

4) \<word boundary\>

5) \(pattern grouping\)

6) [] --> character class i.e [a-zA-Z]

7) [^] ---> negating a class

8) \d \x \o ---> digit / hex digit / octal digit ; \D \X \O ---> the negated (non-) versions

9) \l \u ---> lowercase / uppercase character ; \L \U ---> the negated (non-) versions

10) \s --> whitespace character

11) \S --> non-whitespace character

12) /[0-9]\{3} --> quantifier ..any digit repeat 3 times

13) /[0-9]\{2,4} ---> range repeater min,max

14) * ---> zero or more /word*

15) + ---> one or more  /word\+

16) ? --> zero or one  /word\?

17) quantifiers are greedy by default (match as much as possible); the lazy versions use a - inside the braces, i.e. \{-n,m}, ex: /fo\{-2,5} matches between 2 and 5 o's but as few as possible

18) Anchors ^ for beginning and $ for ending

19) \n ---> backreference  i.e \1 \2 \3

20) \0 --> whole match example:  %s/[0-9]\+/"\0"

21) zero width :h /zero-width  example : ^ $ \< \>

22) \ze ---> sets the end of the match (a lookahead of sorts), example: /foo\zebar matches foo only when followed by bar; equivalent to /foo\(bar\)\@=

23) \zs ---> sets the start of the match (a lookbehind of sorts), example: /foo\zsbar matches bar only when preceded by foo; equivalent to /\(foo\)\@<=bar

24) Lookahead and lookbehind together : /\(foo\)\@<=bar\(baz\)\@=

25) \%[] ---> optional sequence, example: /bull\%[shit] will match bull, bulls, … up to bullshit

26) \_s ----> whitespace character including newline

27) \v --> very magic , example /\v(foo)+

28) & --> the whole match (in the replacement part); \0 could be replaced by this

29) g --> the global command, example :g/^$/d deletes all empty lines

30) \w --> match a word character ; \W --> match a non-word character
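As a tiny worked example combining a few of the items above (\v very magic, grouping, quantifiers and backreferences), this hypothetical substitution rewrites ISO dates such as 2024-03-01 into 01/03/2024:

:%s/\v(\d{4})-(\d{2})-(\d{2})/\3\/\2\/\1/g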

And I have a video up there on YouTube, if you like to see the live motion of these things.

Linux And Opensource: The Burden Of Maintaining Your Stuff

Alright, here are some follies, if you want to call them that, but the damn thing has been an integral part of open-source software from its inception. While it gives you the freedom to choose and play at will, it also imposes the burden of maintaining that stuff yourself, and that is not a job for the faint of heart.

Phew! I said some condensed stuff in the above vignette to explain a simple thing: you choose what to take on, provided you are absolutely sure what needs to be done.

When you are up to it, nothing is more pleasing and wonderful than the feeling of accomplishment. And if you are slacking (most of us are, let’s face it), the impact is sometimes so unbearable that we look for a side way out. Alas! That brings even more woes into our lives than solving the issue at hand would. Kind of a downward-spiral effect to make things miserable.

Why is this happening? Because our lust, or in a broader sense our urge to reach for far-off stuff, is uncanny and unfathomable. We fall victim to it many times in our lives. The irony is that the falls are just a minuscule fraction of what we somehow manage to resist falling into.

Now, the real trouble is that if the stuff you maintain is not in sync with the upstream maintainers’ version, then you need to spend time figuring out what has changed since you last pulled, and that takes a toll on you. But if you are in an air-tight environment, then it might be alright to have the old thing lying around and doing the job for you. It might sound feasible, but it is actually quite oppressive, and you might be missing out on so many good things on offer in the more recent stuff.

That is why, look at the Linux distribution maintainers… bloody hell, they are doing an astonishing job of keeping things abreast (almost) of upstream and providing their own stuff along with that.

Likewise for any open-source project with a certain popularity: I know Curl is irreplaceable and we all love it and use it every single day, and Curl shows how to maintain an open-source project with grace and integrity.

Say I built a lot of tools for my own convenience, and in doing so I probably learned some version of the software that was the base for them. But when upstream moves on and my tools lag behind, problems abound. And it has happened quite often, despite my very conservative approach to tooling and maintaining it for the longest possible time of use.

Now, sometimes I do build stuff straight from source, for the simple reason that I hop between different Linux distributions, although a limited number, precisely four (3 frequently and 1 seldom). And every damn distribution maintains its stuff in its own way! Then how do you sync? Sooner or later it is bound to wear thin. Not only that, but the way I choose to operate my computers is absolutely my problem too. I have a common home and a common data partition; those are mounted at every boot by the different distributions. And those mounts also happen to hold so many of the tools I use. The only way to use a single version of a particular tool or piece of software is to build it from source and keep it in a commonly accessible place.
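A minimal sketch of that build-once-into-a-common-prefix idea, assuming a classic autotools-style project; the /data/tools prefix below is hypothetical:

./configure --prefix=/data/tools   # install into the shared mount
make -j"$(nproc)"
make install
# then put /data/tools/bin on PATH in the common home's shell profile:
export PATH="/data/tools/bin:$PATH"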

Sometimes, I have to build the damn tool whenever I hop to another place, at least once, and unpack that into the local file system. That’s it. But why is it necessary? Because every damn distribution maintains a different version of the software as per the requirements of the packages they are building.

I want to skip that, so I keep things in one place that can be accessed all the time from different distributions. This is not always feasible or recommended. It works for me; that does not mean it will work for you this way. It is a long and convoluted route to travel, and chances are you will end up spending more time just to get things this way.

For instance, my lack of patience with a distribution’s offering sometimes provokes me to do the stuff I have described above. Say two pieces of crucial software I use and depend on ship with a specific distribution in a form that fails to comply with the way I use those specific things. What do I do? I go and get them from upstream and build the damn things myself to sort out the pain.

Heads up! It is not as straightforward as it sounds. Sometimes, as I mentioned above, it takes way more time and way more effort to get things in line with your own requirements. What the distribution provides is frozen at some phase of its release cycle. So you will be out of luck if some feature you are relying on, or want to get your hands dirty with to see if it fits your workflow, is not there yet. Oh, and sometimes this process exposes a lot of stuff I was not aware of, or never imagined the damn thing depended on. My lacuna, and I try to learn from it. So, when the next occasion arrives, I shall be reminded of my stab at it.

The most problematic phase is when you are stumped by your own shortcomings, and I have had that moment every now and then while doing this kind of thing. Nope, I haven’t learned the lesson and am still pursuing that path.

Linux Terminal Emulators So Many To Choose From

Well, we have all been using one or another for some reason, if you have been predominantly sitting on Linux for some time. There are so many terminal emulators available to choose from. And every single one has, in some way or other, some sort of different offering. The best part, though, is that they are all destined to do one damn thing, that is, emulating1 a terminal. Period.

Here are a few popular ones:

x-terminal-emulator mate-terminal gnome-terminal terminator xfce4-terminal

..and these..

urxvt rxvt termit Eterm aterm uxterm xterm roxterm

…and these…

lxterminal terminology qterminal lilyterm tilix terminix konsole kitty guake tilda alacritty hyper

I have used quite a few over the decades of sitting on my beloved operating system. But I reached the point where I stuck with a single one, for the sake of keeping my sanity and for consistency’s sake.

A few years back, I deliberately opted for a few specific pieces of software to induct into my workflow, and since then I have stuck with them. The terminal emulator I have been using is called ST2, which is made available by the people behind the popular Suckless3 project. Their ethos of minimalism entices me, and it seems that with little effort you can make yourself a nice piece of software without much overhead.

I have built my terminal as per my needs with a few patches. The patching is needed because they provide the software with the bare minimum capability or usability for normal use. So, to make it useful all the time, you have to have some mechanism built into it. And that mechanism is patching the base software they provide. Thankfully, people write various patches for different capabilities and post them online, so you can pick whatever you need and build it. So did I.
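A rough sketch of that patch-and-build routine, assuming you fetch st from the Suckless git repository and have already downloaded a patch from their patches page (the patch file name below is hypothetical):

git clone https://git.suckless.org/st
cd st
patch -p1 < ~/Downloads/st-scrollback.diff   # hypothetical patch file
# adjust config.h (font, colors, ...) to taste, then build and install:
make clean install                           # as root, or adjust PREFIX in config.mk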

I have built two variants of color schemes with the set of patches I needed. Here is how they look:

[screenshot: the two st builds side by side]

A Solarized-Dark and a Gruvbox-Dark 🙂

All I did was bring down all the patches I needed and build the stuff. You can refer to that repository, where I have put things up for people to consume if they are inclined to. My ST Terminal Build Repository.

WTF How To Solve Webkit-gtk Compile Error Problem In Gentoo

Bloody hell!! This damn thing has bugged me a couple of times, and I thought it might bother other people too. And, importantly, the reason eludes most people like me, who fail to pay proper attention to the failure log.

It all started when Gentoo pushed an update for the webkit-gtk package. Having MAKEOPTS and MAKEJOBS set as per the CPU and memory available in the system can certainly backfire, and it did for me.

Here is how it looked when it failed:

[screenshot: emerge failure output for webkit-gtk]

Now, BAM! What does that even mean?? Not very illuminating. So, on an intuition, I scrolled up and saw that the log said this:

internal compiler error: Illegal instruction

Hmmm, so a clue, but what does it mean exactly? After spending some hours (resisting the temptation to go to the internet and search), I surrendered and found this:

x86_64-pc-linux-gnu-g++: fatal error: Killed signal terminated program cc1plus

So, it is running out of memory… irks… the machine has 16 gigs of RAM, and still it is failing, even though it is not involved in any other memory-consuming activity.

Puzzling! Right?

My make.conf file has a -j10 -l10 setting to speed things up; it could go higher, but I have deliberately not pushed it to the maximum, so as not to let the machine freeze during some compiles. Even so, the setting had to be lowered further because of this obstruction.
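One way to tame this in Gentoo without throttling every package is a per-package MAKEOPTS override via /etc/portage/package.env; the file name and values below are just an illustration, sized to available RAM (roughly 2 GB per C++ compile job is a common rule of thumb):

# /etc/portage/env/heavy-build.conf   (hypothetical file name)
MAKEOPTS="-j4 -l4"

# /etc/portage/package.env
net-libs/webkit-gtk heavy-build.conf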

Compiling a big package like this is time-consuming, as you can see from the history of many other packages. I have been sitting on fairly beefed-up hardware, and it should not take that long.

Alas!

[screenshot: the reported compile time]

Now, I have lowered the parallel-compilation values, and while that is going on behind the scenes and I am writing this piece, the entire system is very sluggish… meh.

It will certainly take more time, for sure. I don’t mind letting the machine do this kind of stuff once in a while, and thankfully that is how it is.