Apple Keyboard – Get Function Keys Working Properly in Ubuntu


If you are using an Apple keyboard and you want the function keys to work directly, instead of having to press Fn-F4, for example, try this.

Create a file, fn.service, in /etc/systemd/system/. (Unit files only need to be readable; there is no need to chmod +x them.)

fn.service contents:

[Unit]
Description=Job that enables fn keys on apple keyboard

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo 2 > /sys/module/hid_apple/parameters/fnmode'

[Install]
WantedBy=multi-user.target

Note that ExecStart does not go through a shell, so the redirection into /sys has to be wrapped in /bin/sh -c, and the [Install] section is what lets systemctl enable work.


Then enable and start the service:

sudo systemctl enable fn.service

sudo systemctl start fn.service

Your function keys should now work as primary keys, and the setting will persist across every reboot of the Ubuntu system.
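If you prefer not to add a systemd unit, the same hid_apple fnmode setting can also be made persistent with a modprobe option file. A sketch (the file name hid_apple.conf is my own choice):

```shell
# /etc/modprobe.d/hid_apple.conf
# Apply fnmode=2 whenever the hid_apple module is loaded.
options hid_apple fnmode=2
```

On some Ubuntu setups the hid_apple module is loaded from the initramfs, in which case you may also need to run sudo update-initramfs -u for the option to take effect at boot.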

Kubernetes – Offline Installation Guide (Part 2 – Master, Workers and GPUs)



To re-iterate, the setup is as follows (CentOS 7 based):

(cluster setup diagram)


We will use weave net for our pods, so get the weave net startup yaml file.

export kubever=$(kubectl version | base64 | tr -d '\n')

Alternatively, use this file I pre-downloaded on an internet computer. Link

At the node designated as your K8s master, type:

kubeadm init --kubernetes-version=v1.8.1 --apiserver-advertise-address=

Take note of the join command – save it to txt or something because you will need this to join your other nodes later.

As your admin user (user requires “wheel” access), run the following. Don’t run it as root.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f weave.yaml

This procedure gives your current user the cluster-admin kubeconfig, so kubectl commands run as a Kubernetes administrator.

Allow pods to be scheduled on the GPU1 master. Our master has 512GB RAM and 8 GPUs – no way we’re gonna waste that!

kubectl taint nodes --all node-role.kubernetes.io/master-

At the GPU1 master,

kubectl get svc

Take note of the cluster IP. You will need it for the next step.
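If you want to grab it non-interactively, the cluster IP of the built-in kubernetes service can be read with a jsonpath query. A sketch, assuming kubectl is already configured as above:

```shell
# Print just the cluster IP of the built-in `kubernetes` service,
# i.e. the address the workers will need a route to.
cluster_ip() {
    kubectl get svc kubernetes -o "jsonpath={.spec.clusterIP}"
}
```

Calling cluster_ip prints the bare IP, ready to paste into the route command below.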


route add <cluster-ip> gw <master-ip>

Add the above line to /etc/rc.local (on the worker nodes only) to re-create this route on every boot-up. The nodes sometimes reference the cluster IP, which is not visible to them; this routes such requests through the master node.

In our setup, we have an external yum repository server. Hence, in order for our containers to all have access to this server, we add this to the rc.local file as well:

route add <repo-server-ip> gw <master-ip>

Remember to do 

chmod +x /etc/rc.d/rc.local

 so that this file is executable during the boot sequence process. 
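Putting the two routes together, the relevant part of /etc/rc.local on a worker might look like this (a sketch; the angle-bracket values are the addresses from your own setup):

```shell
# /etc/rc.local (worker node)
# Route the cluster IP and the yum repository server through the
# master node, which can reach both.
route add <cluster-ip> gw <master-ip>
route add <repo-server-ip> gw <master-ip>
```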

Reboot, then check the routes using

route -n

Joining the Worker(s) to the Master Node

Remember the join command that was printed out when you created the master node? We need it now for joining the node to the master.

kubeadm reset
kubeadm join --token=… --discovery-token-ca-cert-hash sha256:

NOTE: If the token has expired (unlikely if you’re doing Part 1 and Part 2 all in one shot), at the master node:

kubeadm token create --ttl=0

To create a token that never expires.

kubeadm token list

to see the tokens.

Enabling Your NVIDIA GPUs

sudo vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Add --feature-gates="Accelerators=true" to the ExecStart line:

ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS […] --feature-gates="Accelerators=true"

If the above doesn’t work for some reason, try adding --feature-gates=Accelerators=true (no quotes).

Then reload the unit files and restart the kubelet service.

systemctl daemon-reload && systemctl restart kubelet

Follow the official K8s instructions here on how to make use of your GPUs in POD files.


Go to the master node, and type

kubectl get nodes

You should see a list of all the nodes that have joined your master. You can start running your pod yaml files now.

Try running GPU-enabled POD files following the instructions here.


Didn’t work? I compiled a list of issues I encountered and successfully troubleshot – to be continued!

Kubernetes – Offline Installation Guide (Part 1 – Setting Up)


A while back, I had the chance to set up a Kubernetes cluster on a group of GPU-enabled servers at my workplace. Each server had 8 GTX1080Ti NVIDIA GPUs in it, and 512GB of ram, and 72 CPU cores.

The criteria were as follows:

  1. Resource management – The user should not need to manage the resources of the cluster. The containers should be put into the appropriate server that can serve its requirements. Easily resolved by Kubernetes.
  2. Easy horizontal scaling  : One should be able to easily add new servers to the cluster.
  3. Offline Install : There is and will be a security air gap between the servers and the external internet world, meaning that we would have to install Kubernetes in an offline way.
  4. GPU Acceleration: Our users who run tensorflow algorithms required GPU acceleration within their containers.

I jumped at the chance to experiment with containers and K8s and volunteered. The setup looks something like this:

(cluster setup diagram)


Kubeadm is an installer for Kubernetes and is well supported by the K8s community. For our offline installation, we mirrored the kubeadm setup steps closely whenever we could.

However, kubeadm is still largely online-based: an internet connection is assumed.

You will need a PC with internet access to do this installation for downloading of RPMs and Docker Images. I am doing this on a CentOS 7 system so the repository handling will be yum-based. 



Download the Required Kubernetes RPMs

You can use yum --downloadonly to download all the required RPMs. See the link below for a guide on how to use yum --downloadonly.

The required files are

  • kubeadm 1.8.1
  • kubectl 1.8.1
  • kubelet 1.8.1
  • kubernetes-cni 0.5.1-1
  • ebtables 2.0.10-15
  • ethtool 4.8-1

It’s all here in a zip archive for you lazy ones. These are the ones that I tested against. Of course, you may want to download the latest and greatest ones.  Link
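On the internet-connected machine, the download step can be sketched as a small wrapper like the one below (hypothetical helper; the package names and versions follow the list above, and --downloadonly may require the yum downloadonly plugin on older CentOS releases):

```shell
# Download the Kubernetes RPMs (plus dependencies) into a directory,
# without installing anything on this machine.
download_k8s_rpms() {
    destdir=$1
    mkdir -p "$destdir"
    yum install -y --downloadonly --downloaddir="$destdir" \
        kubeadm-1.8.1 kubectl-1.8.1 kubelet-1.8.1 \
        kubernetes-cni ebtables ethtool
}
```

Copy the resulting directory across the air gap, then install as described below.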

Install Kubeadm and Friends

Install kubeadm, kubectl, kubelet and kubernetes-cni and start kubelet services.

yum install *.rpm

To install all the rpm files that you downloaded.

Install Docker

Follow the steps to install Docker here:

It should be available in the repository of your Linux distribution; configure your distribution for a local repository. Alternatively, download the docker RPM as well on a live internet machine (same steps as above, using yum --downloadonly).


yum install docker

Getting the System Ready

Enable and Start Docker Service

systemctl enable docker && systemctl start docker

Turn off the swap file of the system.

sudo swapoff -a

Remove/comment swap entries in /etc/fstab.

sudo vim /etc/fstab

It looks something like this.

/dev/VolGroup00/LogVol02   swap     swap    defaults     0 0

Disable SELinux

setenforce 0
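Note that setenforce 0 only lasts until the next reboot. To make it stick, switch the mode in the SELinux config file as well. A sketch (the function takes the file path as a parameter purely for illustration; the real file is /etc/selinux/config):

```shell
# Flip SELINUX=enforcing to permissive in an SELinux config file,
# so the setting survives reboots (matches what setenforce 0 does live).
set_selinux_permissive() {
    sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' "$1"
}
```

Run it against /etc/selinux/config as root, or simply edit the file by hand.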


Edit k8s.conf​

vim /etc/sysctl.d/k8s.conf

In k8s.conf, add the lines

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

sysctl --system


Download the Docker Images

Kubeadm runs most of the required Kubernetes components in container images, which is a great design, as the underlying operating system is kept relatively “clean” compared to something like OpenStack.

You can inspect an existing kubeadm installation to see the container list after installing kubeadm on an online machine.

sudo docker images

The images whose names start with the Kubernetes registry prefix, together with the weaveworks ones, are what you need.

You can pull the container images one by one using the following commands. The example here pulls kube-apiserver-amd64, v1.8.1.

docker pull <image>

docker save <image> > kube-apiserver.tar

<repeat for all required containers>
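The pull-and-save loop can be sketched like this (the image name in the usage example is illustrative; the registry prefix and tags must match what your kubeadm version expects):

```shell
# Pull each image and save it to a tarball named after the image.
save_images() {
    outdir=$1; shift
    mkdir -p "$outdir"
    for img in "$@"; do
        docker pull "$img"
        name=$(basename "${img%%:*}")   # drop the tag and registry path
        docker save "$img" > "$outdir/$name.tar"
    done
}

# e.g. save_images /tmp/images gcr.io/google_containers/kube-apiserver-amd64:v1.8.1
```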

Alternatively, the easy way is to download all the tar files I used for my working install here. Link


Load Docker Images into the Kubernetes Computer

CentOS should come with Python to run python scripts.

If you have downloaded my containers zip bundle, you can run :


to load all the containers in the directory into your Kubernetes computer’s docker repository. It’s just a script that runs the docker load command for every container archive in the directory.
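If you would rather not use the script, its effect is essentially this loop (a sketch; assumes the tarballs came from docker save):

```shell
# docker-load every image tarball in the given directory.
load_images() {
    for f in "$1"/*.tar; do
        [ -e "$f" ] || continue    # directory had no tarballs
        docker load -i "$f"
    done
}
```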

At this juncture, your system is ready to run kubeadm. The next part will focus on setting up the master K8s node and getting the worker nodes to join it. It should be fairly straightforward, but there were some nitty-gritty things that I encountered along the way. I will also cover the additional setup required for GPU acceleration on each of the 8-GPU equipped servers.

Part 2 – Master, Workers and GPUs


Free Logging in tmux


To get started with tmux, a terminal multiplexer that is insanely useful when you have no other access than ssh, check this out.

For the rest who already know how wonderful tmux is…

You can actually log the output of a tmux pane to a file, and with the history-limit parameter you can capture many, many lines.
Modify the default tmux start-up settings:

vim ~/.tmux.conf

In the file type add the following line:

set-option -g history-limit 1000000

This sets tmux to keep a scrollback buffer of 1000000 lines. Be careful with this though: setting it too high may result in a lot of memory being used when using multiple window panes.

As your user, go into tmux

> tmux

Run your program in tmux.

> your_program

In tmux

Press Control-B  , followed by : , then type

capture-pane -S -1000000

Press Control-B  , followed by : , then type

save-buffer /home/pier/history.txt
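As an alternative to capturing after the fact, tmux can also stream a pane to a file as it happens, via pipe-pane. A sketch of a .tmux.conf binding (the key H and the log path are my own choices):

```shell
# ~/.tmux.conf
# Prefix + H toggles logging of the current pane to a file
# named after the session (#S) and window (#W).
bind-key H pipe-pane -o 'cat >> ~/tmux-#S-#W.log' \; display-message 'toggled pane logging'
```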

Of course, for more “serious” use-cases, you could consider something like Poco’s logger, but then you would have to embed it in your program.

Essential Bash Terminal Navigation Shortcuts


Here is a list of the bash terminal shortcuts that I found most useful to know when navigating in Linux terminal. Learning these by heart will increase the speed of your console navigation. I will add more as time goes by.

Tab (while typing a command)

You need to know this. Basically auto-completes the command for you when possible.

Arrow Key Up

Cycles through your history, useful when you are running a command that you know you had previously typed.


Control-R

Reverse search. Press Control-R and start typing what you intend to run; press Control-R again to loop through the results. Chances are, somewhere in your history you have executed the same command. Then press Tab to accept the command that reverse search has found for you. Press Control-G to cancel the reverse search and return to what you previously typed.


Control-A

Go to the beginning of the line.


Control-E

Go to the end of the line.


Control-U

Delete from the cursor to the front of the line. Super useful for the times when you mistype a password, know it, and want to wipe everything you typed and re-type it.

Control-K and Control-Y

Control-K cuts from cursor to end of line. Use Control-A to go to the front of the line and Control-K to cut. Control-Y does the pasting of the last cut text.

Control-Alt-F1 .. F8

Multiple console log-ins. Useful for the times where there’s no X Windows and you need to navigate between programs/jobs. Next best thing to multiple gnome terminals in X Windows.

Shift-PgUp / PgDown

In the console, this scrolls the screen up/down.


Control-C

Interrupt (kill) the current foreground process running in the terminal. This sends the SIGINT signal to the process, which is technically just a request: most processes will honor it, but some may ignore it.


Control-Z

Suspend the current foreground process running in bash. This sends the SIGTSTP signal to the process. To return the process to the foreground later, use the fg command (e.g. fg %1 for job 1).


Control-D

Close the bash shell. This sends an EOF (end-of-file) marker to bash, and bash exits when it receives it. This is similar to running the exit command.


Fix for Low Resolution in Console After Installing NVIDIA drivers (CentOS 7)

Installing the NVIDIA driver requires blacklisting the default nouveau driver. A side effect is that the console resolution drops to 640x480. Here is a hack to get it back to your desired resolution.

Set the following grub kernel parameter:

vga=791

> sudo vim /boot/grub2/grub.cfg

In the menuentry for your desired kernel, insert vga=791 between rhgb and quiet:


menuentry 'CentOS Linux......{
    linux16 ...  rhgb vga=791 quiet 


See here for the full list of resolutions and their corresponding vga=? numbers.

Reboot and you should see that you will get a higher default resolution when you boot to the command line console.

Note that this is intended as a quick fix: the next time you run grub2-mkconfig, the new parameter will likely be removed, as grub.cfg will be re-generated.
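If you want the parameter to survive regeneration, put it in the grub defaults instead and regenerate. A sketch of the usual CentOS 7 procedure:

```shell
# 1. Append vga=791 to the default kernel command line:
#      GRUB_CMDLINE_LINUX="... rhgb quiet vga=791"
sudo vim /etc/default/grub

# 2. Regenerate grub.cfg from the defaults:
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
```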

Compiling Static vs Dynamic Libraries on CMake


Why compile statically?

This allows for ease of deployment at the expense of a larger binary executable: you don’t have to manually copy the libraries that you use to the target system.

CMake link_libraries() Magic

I use CLion, which (currently) enforces the use of CMake for compiling C/C++ projects. In your CMakeLists.txt file, first make sure you add the directory in which your library files reside, for example:

link_directories(/usr/local/lib)
CMake has a magic link_libraries() function which takes in the library specified and determines how you want it to be compiled (statically or dynamically linked).

If you type

link_libraries(PocoFoundation)

it is interpreted as a dynamically linked library (CMake will look for libPocoFoundation.so).

link_libraries(libPocoFoundation.a)

tells CMake to look for this static library file in the linked directories and build it statically into your binary.

Order of Static Linking Matters



I ran into this error:

/usr/local/lib/libPocoXML.a(XMLWriter.o): In function `Poco::XML::XMLWriter::XMLWriter(std::ostream&, int)':
XMLWriter.cpp:(.text+0x28b3): undefined reference to `Poco::UTF8Encoding::UTF8Encoding()'
XMLWriter.cpp:(.text+0x28cc): undefined reference to `Poco::UTF8Encoding::UTF8Encoding()'

It seems that the libPocoXML.a static library is trying to call functions in libPocoFoundation.a but can’t find them.
Reversing the order of linking the libraries helps:

link_libraries(libPocoXML.a libPocoFoundation.a)
This is because when CMake links the libPocoXML.a library, it makes a note of the external functions that are called and looks for them to be linked in the subsequent libraries. One example here is the Poco::UTF8Encoding::UTF8Encoding() function.

What is happening here is that CMake links the libPocoXML.a, looks for the function in subsequent libraries that are linked and finds nothing. Reversing the order allows libPocoXML.a to find the desired function later on in libPocoFoundation.a.

This only happens for static library compilation due to how CMake interprets it.
Check your binary using ldd :

ldd DSPBox

(output trimmed: ldd lists one resolved shared library per line, here the Intel IPP, CUDA and system libraries, showing which dependencies remain dynamic)


Statically Linking a File Which Has Dynamic Library Dependencies

This worked:

link_libraries(tiff)

but not the static version:

link_libraries(libtiff.a)

/usr/local/lib/libtiff.a(tif_jpeg.o): In function `TIFFjpeg_destroy':
/home/pier/Software/Development/tiff-3.8.2/libtiff/tif_jpeg.c:377: undefined reference to `jpeg_destroy'
/usr/local/lib/libtiff.a(tif_jpeg.o): In function `TIFFjpeg_write_raw_data':
/home/pier/Software/Development/tiff-3.8.2/libtiff/tif_jpeg.c:320: undefined reference to `jpeg_write_raw_data'
/usr/local/lib/libtiff.a(tif_jpeg.o): In function `TIFFjpeg_finish_compress':
…
If you dig in deeper, you are able to find the dependencies using readelf:

cd /usr/local/lib
readelf -d libtiff.so | grep 'NEEDED'

0x0000000000000001 (NEEDED)             Shared library: [libjpeg.so]
0x0000000000000001 (NEEDED)             Shared library: [libz.so]
0x0000000000000001 (NEEDED)             Shared library: [libm.so]
0x0000000000000001 (NEEDED)             Shared library: [libc.so]

So apparently we still need the above dynamic libraries even for libtiff.a. Fortunately these files come preinstalled in CentOS and most distributions.
So all you need to do now is :

link_libraries(libtiff.a jpeg z)

libc and libm are linked by default by gcc. CMake interprets this as: compile libtiff.a statically into the binary, but its dependencies libjpeg and libz are still dynamically linked, taken from the linked system folders.
To be sure, check using ldd again:

ldd DSPBox

(output trimmed: the Intel IPP, CUDA and system libraries remain, but libtiff no longer appears as a dynamic dependency)

No more dynamic library requirement for libtiff!

Loading tmux on Boot in Linux


tmux is a wonderful tool for displaying virtual consoles on the linux command prompt screen. It’s the next best thing to getting actual GUI windows controllable with a mouse.

Mainly, I use it for ssh purposes, where I can ssh to a PC that I know has tmux already launched in the background and type:

tmux a

which attaches the session to the on-going tmux background session, allowing you to see everything that is going on in that process. This is especially useful for embedded systems where there are multiple processes launched in the background and you want to monitor them all.

So I have a tmux script here:



#!/bin/bash
SESSION=main   # pick a name for the tmux session

# allow re-launch
/usr/bin/tmux has-session -t $SESSION 2> /dev/null && /usr/bin/tmux kill-session -t $SESSION
/usr/bin/tmux -2 new-session -d -s $SESSION

echo "Launching tmux"

/usr/bin/tmux split-window -h
/usr/bin/tmux split-window -v
/usr/bin/tmux select-pane -t 0

/usr/bin/tmux send-keys -t $SESSION.0 "cd /path/to/binary1folder" C-m
/usr/bin/tmux send-keys -t $SESSION.0 "./binary1" C-m

/usr/bin/tmux send-keys -t $SESSION.1 "cd /path/to/binary2folder" C-m
/usr/bin/tmux send-keys -t $SESSION.1 "./binary2" C-m

/usr/bin/tmux send-keys -t $SESSION.2 "cd /path/to/binary3folder" C-m
/usr/bin/tmux send-keys -t $SESSION.2 "./binary3" C-m

This basically opens up three panes: it splits the window horizontally first, then splits one of the resulting panes vertically. It then launches a binary in each of the window panes. I won’t go too much into the scripting here as there are plenty of resources for doing so, like this.

Configuring tmux to boot on startup on CentOS 7

Normally this should be pretty straightforward, but I ran into some hiccups.


sudo nano /etc/rc.local

And edit the rc.local file to include

su -c /path/toyourscript/ -l your_user_id

The -l your_user_id part means that the script is launched as the user your_user_id.

Make sure your rc.local is executable.

sudo chmod +x /etc/rc.local

And by right it should launch when CentOS boots, running the script in the background, which in turn launches tmux. However, I found that one of the abrt startup scripts was interfering with the launching of the tmux processes/binaries: it would hang at the console terminal of the tmux screens. Doing the following resolved the problem for me.

cd /etc/profile.d/
chmod -r <the-offending-abrt-script>

Basically, make the offending script non-readable, allowing the profile.d startup process to skip over it. It’s kind of a hack, but it worked. I reckon at most I don’t get the automatic bug-reporting tool notifications at the console. Note that this script is run depending on what type of installation you chose when installing CentOS. I think the minimal install doesn’t run into this issue.

Hope this is useful to you, let me know!

ps. Here’s a tmux cheat sheet.

CentOS7 – Setting Static IP (Persistent)

Say your device name is eth0.
Edit/create /etc/sysconfig/network-scripts/ifcfg-eth0, enter:

# cat /etc/sysconfig/network-scripts/ifcfg-eth0

Sample static ip configuration:
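A minimal static configuration looks something like this (a sketch; all the addresses below are placeholders, adjust them to your network):

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
DEVICE=eth0
BOOTPROTO=none       # static addressing, no DHCP
ONBOOT=yes
IPADDR=192.168.1.10
PREFIX=24
GATEWAY=192.168.1.1
DNS1=8.8.8.8
```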


Reboot, and type ifconfig – you should see the interface assigned the static IP.