Essential Bash Terminal Navigation Shortcuts


Here is a list of the bash terminal shortcuts that I found most useful when navigating the Linux terminal. Learning these by heart will increase the speed of your console navigation. I will add more as time goes by.

Tab (while typing a command)

You need to know this. It auto-completes the command or file name for you when possible.

Arrow Key Up

Cycles through your command history. Useful when you are re-running a command that you know you previously typed.


Control-R

Reverse search. Press Control-R and start typing what you intend to run; press Control-R again to loop through the results. Chances are, somewhere in your history you have executed the same command. Press Tab to accept the command that reverse search has found for you, or Control-G to cancel the reverse search and return to what you previously typed.


Control-A

Go to the beginning of the line.


Control-E

Go to the end of the line.


Control-U

Deletes from the cursor to the start of the line. Super useful for the times when you typed a wrong password, know it, and want to delete everything you typed to re-type it.

Control-K and Control-Y

Control-K cuts from cursor to end of line. Use Control-A to go to the front of the line and Control-K to cut. Control-Y does the pasting of the last cut text.

Control-Alt-F1 .. F8

Multiple console logins. Useful when there's no X Windows and you need to switch between programs/jobs. The next best thing to multiple gnome terminals in X Windows.

Shift-PgUp / PgDown

In the console, this scrolls the screen up/down.


Control-C

Interrupt (kill) the current foreground process running in the terminal. This sends the SIGINT signal to the process, which is technically just a request: most processes will honor it, but some may ignore it.
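Because SIGINT is just a request, a program can install its own handler and refuse to die. Here is a small Python sketch that sends SIGINT to itself (the handler name is made up for the example):

```python
import os
import signal

caught = []

def on_interrupt(signum, frame):
    # instead of terminating, just record that the signal arrived
    caught.append(signum)

signal.signal(signal.SIGINT, on_interrupt)   # install a custom SIGINT handler
os.kill(os.getpid(), signal.SIGINT)          # deliver the same signal Control-C sends

print(caught == [signal.SIGINT])  # True: the process survived the "interrupt"
```

This is the same mechanism a long-running tool uses when it catches Control-C to clean up before exiting.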


Control-Z

Suspend the current foreground process running in bash. This sends the SIGTSTP signal to the process. To return the process to the foreground later, use the fg command (or fg %1, where %1 is the job number shown by jobs).


Control-D

Close the bash shell. This sends an EOF (end-of-file) marker to bash, and bash exits when it receives it. This is similar to running the exit command.



Fix for Low Resolution in Console After Installing NVIDIA drivers (CentOS 7)

Installing the NVIDIA driver requires blacklisting the default nouveau driver.

With the nvidia driver in place, the console resolution drops to 640x480. Here is a hack to get it back to your desired resolution.

Set the following grub kernel parameter:

vga=791

> sudo vim /boot/grub2/grub.cfg

In your desired kernel's menuentry, insert vga=791 between rhgb and quiet:


menuentry 'CentOS Linux......{
    linux16 ...  rhgb vga=791 quiet 


See here for the full list of settings and their corresponding vga=? numbers.

Reboot and you should see that you will get a higher default resolution when you boot to the command line console.

Note that this is intended to be a quick fix. The next time you run grub2-mkconfig, the parameter will likely be removed, as grub.cfg will be re-generated.

Compiling Static vs Dynamic Libraries on CMake


Why compile statically?

This allows for ease of deployment, at the expense of a larger binary executable: you don't have to manually copy the libraries you use to the target system.

CMake link_libraries() Magic

I use CLion, which (currently) enforces the use of CMake for compiling C/C++ projects. In your CMakeLists.txt file, first make sure you add the directory containing your libraries to the link search path, for example:

link_directories(/usr/local/lib)
CMake has a magic link_libraries() command which takes the library specified and determines how you want it linked (statically or dynamically).

If you type

link_libraries(tiff)

it is interpreted as a dynamically linked library (CMake looks for libtiff.so).


link_libraries(libtiff.a)

tells CMake to look for this static library file in the linked directories and build it statically into your binary.

Order of Static Linking Matters



With libPocoFoundation.a listed before libPocoXML.a,

link_libraries(libPocoFoundation.a libPocoXML.a)

I met this error:

/usr/local/lib/libPocoXML.a(XMLWriter.o): In function `Poco::XML::XMLWriter::XMLWriter(std::ostream&, int)':
XMLWriter.cpp:(.text+0x28b3): undefined reference to `Poco::UTF8Encoding::UTF8Encoding()'
XMLWriter.cpp:(.text+0x28cc): undefined reference to `Poco::UTF8Encoding::UTF8Encoding()'

It seems that the libPocoXML.a static library is trying to call functions in libPocoFoundation.a but can't find them. Reversing the order of linking the libraries helps:

link_libraries(libPocoXML.a libPocoFoundation.a)


This is because when the linker processes libPocoXML.a, it makes a note of the external functions that are called (one example here is the Poco::UTF8Encoding::UTF8Encoding() function) and looks for them in the libraries linked after it. In the failing order, libPocoXML.a is linked last, so the linker looks in the subsequent libraries and finds nothing. Reversing the order allows libPocoXML.a to find the desired functions later on, in libPocoFoundation.a.

This only happens for static library compilation, because the linker resolves symbols from archives in a single left-to-right pass. Check your binary using ldd:

ldd DSPBox
    => (0x00007fff2eb46000)
    => /opt/intel/compilers_and_libraries_2016.0.109/linux/ipp/lib/intel64/ (0x00007f4a209cd000)
    => /opt/intel/compilers_and_libraries_2016.0.109/linux/ipp/lib/intel64/ (0x00007f4a2078c000)
    => /opt/intel/compilers_and_libraries_2016.0.109/linux/ipp/lib/intel64/ (0x00007f4a20580000)
    => /opt/intel/compilers_and_libraries_2016.0.109/linux/ipp/lib/intel64/ (0x00007f4a2036a000)
    => /usr/local/cuda/lib64/ (0x00007f4a1972f000)
    => /usr/local/lib/ (0x00007f4a194d3000)
    => /lib64/ (0x00007f4a19299000)
    => /lib64/ (0x00007f4a19094000)
    => /lib64/ (0x00007f4a18e8c000)
    => /lib64/ (0x00007f4a18b84000)
    => /lib64/ (0x00007f4a18881000)
    => /lib64/ (0x00007f4a1866a000)
    => /lib64/ (0x00007f4a18454000)
    => /lib64/ (0x00007f4a18091000)
    => /lib64/ (0x00007f4a17e3c000)
    => /lib64/ (0x00007f4a17c25000)
    /lib64/ (0x00007f4a20c53000)


Statically linking a library which has dynamic library dependencies.

This worked:

link_libraries(tiff)

but not the static version:

link_libraries(libtiff.a)


/usr/local/lib/libtiff.a(tif_jpeg.o): In function `TIFFjpeg_destroy':
/home/pier/Software/Development/tiff-3.8.2/libtiff/tif_jpeg.c:377: undefined reference to `jpeg_destroy'
/usr/local/lib/libtiff.a(tif_jpeg.o): In function `TIFFjpeg_write_raw_data':
/home/pier/Software/Development/tiff-3.8.2/libtiff/tif_jpeg.c:320: undefined reference to `jpeg_write_raw_data'
/usr/local/lib/libtiff.a(tif_jpeg.o): In function `TIFFjpeg_finish_compress':
...
If you dig deeper, you can find the dependencies using readelf:

cd /usr/local/lib
readelf -d libtiff.so | grep 'NEEDED'

0x0000000000000001 (NEEDED)             Shared library: []
0x0000000000000001 (NEEDED)             Shared library: []
0x0000000000000001 (NEEDED)             Shared library: []
0x0000000000000001 (NEEDED)             Shared library: []

So apparently we still need those dynamic libraries even for libtiff.a. Fortunately, these files come preinstalled in CentOS and most distributions.
So all you need to do now is:

link_libraries(libtiff.a jpeg z)

libc and libm are linked in by default by gcc. CMake interprets this as: compile libtiff.a statically into the binary, but keep its dependencies libjpeg and libz dynamically linked, taken from the linked system folders.
To be sure, check using ldd again:

ldd DSPBox
    => (0x00007ffdfc5ee000)
    => /opt/intel/compilers_and_libraries_2016.0.109/linux/ipp/lib/intel64/ (0x00007fa41cf23000)
    => /opt/intel/compilers_and_libraries_2016.0.109/linux/ipp/lib/intel64/ (0x00007fa41cce2000)
    => /opt/intel/compilers_and_libraries_2016.0.109/linux/ipp/lib/intel64/ (0x00007fa41cad6000)
    => /opt/intel/compilers_and_libraries_2016.0.109/linux/ipp/lib/intel64/ (0x00007fa41c8c0000)
    => /usr/local/cuda/lib64/ (0x00007fa415c85000)
    => /lib64/ (0x00007fa415a12000)
    => /lib64/ (0x00007fa4157fc000)
    => /lib64/ (0x00007fa4155df000)
    => /lib64/ (0x00007fa4153db000)
    => /lib64/ (0x00007fa4151d3000)
    => /lib64/ (0x00007fa414eca000)
    => /lib64/ (0x00007fa414bc8000)
    => /lib64/ (0x00007fa414806000)
    => /lib64/ (0x00007fa4145ee000)
    => /lib64/ (0x00007fa4143d8000)
    /lib64/ (0x00007fa41d1a9000)

No more dynamic library requirement for libtiff!

Row major and Column major Explained, using Python against Matlab

In both cases, a 2D array of 2 rows and 4 columns is created. Then it is flattened into a 1D array of 8 elements.
This clearly demonstrates how Matlab stores the array data column-wise, while Python stores it row-wise.

Matlab (Column Major)


>> a = [1 2 3 4 ; 5 6 7 8]

a =

     1     2     3     4
     5     6     7     8

>> a = a(:)

a =

     1
     5
     2
     6
     3
     7
     4
     8

Python (Row-major)


>> import numpy as np
>> a = np.array([[1, 2, 3, 4], [5, 6, 7, 8]])
>> a
array([[1, 2, 3, 4],
       [5, 6, 7, 8]])

>> a.tostring() # shows the memory arrangement, in a string
b'\x01\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00\x04\x00\x00\x00\x05\x00\x00\x00\x06\x00\x00\x00\x07\x00\x00\x00\x08\x00\x00\x00'

If you look at the string printed, it shows that the elements are arranged in the fashion 1, 2, 3, 4, 5, 6, 7, 8.

Why is this important to know, you ask?

For optimization, it is important. In Python and C/C++, the elements are laid out contiguously row-wise in memory, so you should strive to process the elements row-wise. Each memory access pulls a whole cache line of neighbouring row elements into the CPU cache, where they can be worked on cheaply. If you process the elements column-wise, contrary to how they are laid out in memory, you incur an expensive cache miss for nearly every element you touch!
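You can see both orders directly in NumPy: ravel(order='C') walks the memory row by row (the C/Python layout), while ravel(order='F') walks it column by column (the Fortran/Matlab layout). A quick sketch:

```python
import numpy as np

a = np.array([[1, 2, 3, 4],
              [5, 6, 7, 8]])

row_major = a.ravel(order='C')  # row-wise walk: NumPy's native layout
col_major = a.ravel(order='F')  # column-wise walk: what Matlab's a(:) produces

print(row_major.tolist())  # [1, 2, 3, 4, 5, 6, 7, 8]
print(col_major.tolist())  # [1, 5, 2, 6, 3, 7, 4, 8]
```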


Conquer CSV files in Python


CSV files are common for data manipulation in Python, for example when you extract data from an Excel sheet. Here is a short tutorial on how to extract some data out of a csv file, plus other nifty tricks along the way.

1) Easy Binary Conversion

You can use this to convert a binary string to an integer.

your_binary_string = "0100000010001"
int_value = int(your_binary_string, 2)

Of course, you can extend this to octal, etc.
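The second argument of int() is the base, so the same call covers octal, hexadecimal, or any base up to 36:

```python
# int(string, base) parses a string in the given base
print(int("0100000010001", 2))  # binary -> 2065
print(int("777", 8))            # octal -> 511
print(int("ff", 16))            # hexadecimal -> 255
```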

2) File reading and list comprehension

Suppose you have a whole csv of binary numbers and you need to read it into Python as a list: read the csv as strings, and convert them to int easily.

Suppose each row of your csv contains binary strings, and you just want to pair up the fourth and third column values of each row in a tuple, converted to integer values.

import numpy as np
import csv

filename = 'yourfile.csv'

with open(filename, 'rt') as csvfile:
    reader = csv.reader(csvfile)
    # the following line uses list comprehension to iterate through every row
    # in the csv and creates one tuple per row in a list.
    # You can call next(reader) to skip a row, for example if the 1st row is just labels
    iq = [(int(row[4], 2), int(row[3], 2)) for row in reader]

You get something like this:

[(4079, 63914), (4353, 61824), (3683, 62088), (3813, 61592) .... ]

At this point, if for some reason you want to get all the 1st values in each tuple in one list, and the 2nd values in each tuple in another list, you can do:

i, q = zip(*iq)
i = list(i)
q = list(q)

You will get for i :

[4079, 4353, 3683, 3813, 3231, 3201, 2750, 2550, 2201, 1874, 1559, .... ]

and q:

[63914, 61824, 62088, 61592, 61459, 61228, 60994, 60880, 60655, 60587... ]

3) Writing CSV files

With the previous i and q lists you extracted, you can now write it out as a csv.

with open('complex_out.csv', 'w', newline='') as csvfile:  # newline='' avoids blank rows on some platforms
    fieldnames = ['i', 'q']
    writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
    for row_num in range(len(i)):
        writer.writerow({'i': i[row_num], 'q': q[row_num]})

Your csv file looks like this:

4079,63914
4353,61824
3683,62088
...

4) Converting data types

From the above data, I know that my number is really a 16-bit binary representation.
Now, I am told that this is a two's complement representation.
So each tuple should really be (int16, int16), with positive and negative values possible. Fortunately, Python allows us to do this easily.

iq = np.array(iq, np.int16) # creates an array from the existing iq values, reinterpreted as int16

array([[ 4079, -1622],
       [ 4353, -3712],
       [ 3683, -3448],
       [ -567,  5301],
       [ -216,  5329],
       [  146,  5332]], dtype=int16)

Now we want to convert it to complex64, for further processing down the line.

sig = iq.astype(np.float32).view(np.complex64) # convert the values to float32, before viewing them as complex64 (2 float32s)

array([[ 4079.-1622.j],
       [ 4353.-3712.j],
       [ 3683.-3448.j],
       [ -567.+5301.j],
       [ -216.+5329.j],
       [  146.+5332.j]], dtype=complex64)

sig = sig.ravel() # flatten it to 1D

array([ 4079.-1622.j, 4353.-3712.j, 3683.-3448.j, ..., -567.+5301.j,
        -216.+5329.j,  146.+5332.j], dtype=complex64)

Hooray, now we’re ready to do further processing on this data!
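Putting these steps together, here is a minimal, self-contained sketch of the pipeline with a few sample values (the array is built as uint16 and then viewed as int16, which avoids overflow errors on recent NumPy versions):

```python
import numpy as np

# sample (i, q) pairs as parsed from the csv (q holds an unsigned 16-bit pattern)
pairs = [(4079, 63914), (4353, 61824), (3683, 62088)]

iq = np.array(pairs, dtype=np.uint16).view(np.int16)  # reinterpret as two's complement int16
sig = iq.astype(np.float32).view(np.complex64)        # pair each (i, q) into one complex value
sig = sig.ravel()                                     # flatten to 1-D

print(sig.tolist())  # [(4079-1622j), (4353-3712j), (3683-3448j)]
```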

Python: Using ioloop To Send to 2 Protocols Without Threading

ioloop is very useful for pre-scheduling functions to be executed in the future. In this manner, you can simulate interspersing sends to 2 protocols without threading (in this case, "serial" at 10Hz and "tcp" at 100Hz).

In this simple code, io_loop.add_timeout keeps pre-scheduling events to happen in the future. Do note that this all runs in one thread, so make sure your sending functions printTCP() and printSerial() never block. I use this for simple cases where I am just reading from a file and sending to two different sinks. Also, my code uses a lot of zmq and only requires the simple cases of zmq.eventloop; for more advanced use cases, you may want to take a look at tornado.ioloop (on which pyzmq's event loop is based).

import functools
import datetime
from zmq.eventloop import ioloop

def printTCP():
    print('Sent TCP')

def printSerial():
    print('Sent Serial')

io_loop = ioloop.IOLoop.instance()

# Schedule serial messages at 10Hz
serial_deadline = datetime.timedelta(seconds=0.1)
for x in range(1000):
    io_loop.add_timeout(serial_deadline, functools.partial(printSerial))
    serial_deadline += datetime.timedelta(seconds=0.1)

# Schedule tcp messages at 100Hz, on a separate deadline
tcp_deadline = datetime.timedelta(seconds=0.01)
for x in range(10000):
    io_loop.add_timeout(tcp_deadline, functools.partial(printTCP))
    tcp_deadline += datetime.timedelta(seconds=0.01)

io_loop.start()  # nothing is sent until the loop is started


You will get something like this:

 Sent Serial
 Sent TCP
 Sent TCP
 Sent TCP
 Sent TCP
 Sent TCP
 Sent TCP
 Sent TCP
 Sent TCP
 Sent TCP
 Sent TCP
 Sent Serial
 Sent TCP
 Sent TCP
 Sent TCP
 Sent TCP
 Sent TCP
 Sent TCP
 Sent TCP
 Sent TCP
 Sent TCP
 Sent TCP
 ...

Super Easy Sequence Diagrams with PlantUML

PlantUML is a very easy to use sequence diagram maker that is free. It has an extremely intuitive syntax that will allow you to create complex diagrams within minutes.

There's an online version here:

Take for example :



box "FPGA" #LightYellow
 participant FDC_FPGA
end box

box "MPC" #LightBlue
 participant Kernel
 participant User_Space
end box

User_Space -> Kernel : Wait Data Ready ioctl
activate User_Space
activate Kernel
FDC_FPGA -> Kernel : Data Ready Interrupt
Kernel -> User_Space : Data Ready Awake
deactivate Kernel
deactivate User_Space
User_Space -> Kernel : Setup Read Descriptors ioctl
Kernel -> FDC_FPGA : Set Registers
User_Space -> Kernel : Enable DMA Start ioctl
Kernel -> FDC_FPGA : Set Registers
User_Space->Kernel : Wait DMA Done ioctl
activate Kernel
activate User_Space
FDC_FPGA -> Kernel : Dma Done Interrupt
Kernel -> User_Space : Dma Done Awake
deactivate User_Space
deactivate Kernel


This gives a very nice diagram that is ready for presentation :


I use it all over the place for Software Design Documents. The best thing is that it has "source code", so anytime there's a change, all I have to do is use the source to regenerate the diagram. Happiness!