Apple Keyboard – Get Function Keys Working Properly in Ubuntu


If you are using an Apple keyboard and you want the function keys to work straight away, instead of needing to press Fn-F4, for example, try this.

Create a file, fn.service, in /etc/systemd/system/

sudo chmod +x /etc/systemd/system/fn.service

fn.service contents :

[Unit]
Description=Job that enables fn keys on apple keyboard

[Service]
ExecStart=/bin/sh -c 'echo 2 > /sys/module/hid_apple/parameters/fnmode'
Type=oneshot
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target

Then enable and start the service:

sudo systemctl enable fn.service

sudo systemctl start fn.service

Your function keys should now work as expected, and they will continue to do so after every reboot of the Ubuntu system.
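To verify that the change took effect, you can check the module parameter; it should read 2, meaning the F-keys act as plain function keys by default.

cat /sys/module/hid_apple/parameters/fnmode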

Kubernetes – Offline Installation Guide (Part 2 – Master, Workers and GPUs)


Architecture

To re-iterate, the setup is as follows (CentOS 7 based):

[Architecture diagram]

Master

We will use Weave Net as the pod network, so get the Weave Net startup YAML file.

export kubever=$(kubectl version | base64 | tr -d '\n')
wget "https://cloud.weave.works/k8s/net?k8s-version=$kubever"

Alternatively, use this file that I pre-downloaded on an internet-connected computer. Link

At the node designated as your K8s master, type:

kubeadm init --kubernetes-version=v1.8.1 --apiserver-advertise-address=10.100.100.1

Take note of the join command – save it to txt or something because you will need this to join your other nodes later.

As your admin user (user requires “wheel” access), run the following. Don’t run it as root.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f weave.yaml

This procedure gives your current user admin access to the cluster through kubectl.
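To check that the control-plane and Weave Net pods have come up, you can run:

kubectl get pods -n kube-system

The weave-net pods should eventually reach the Running state.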

Allow pods to be scheduled on the GPU1 master node as well. Our master has 512GB of RAM and 8 GPUs; no way we're gonna waste that!

kubectl taint nodes --all node-role.kubernetes.io/master-

At the GPU1 master node, run

kubectl get svc

Take note of the cluster IP. You will need it for the next step.

Worker

route add <cluster-ip> gw 10.100.100.1

Add the above line to /etc/rc.local (on worker nodes only) so that this route is always created on boot-up. The worker nodes sometimes reference the cluster IP, which they cannot reach directly; this routes those requests through the master node, which is 10.100.100.1.

In our setup, we have an external yum repository server located at 168.102.103.7. Hence, for all our containers to have access to this server, we also add this to the rc.local file:

route add 168.102.103.7 gw 10.100.100.1
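Putting it together, a worker's /etc/rc.local might look something like this sketch (the cluster IP shown is only illustrative; use the one you noted from kubectl get svc):

#!/bin/bash
# Extra routes for this worker node, created at boot.
route add 10.96.0.1 gw 10.100.100.1          # cluster IP, reached via the master
route add 168.102.103.7 gw 10.100.100.1      # external yum repository server
touch /var/lock/subsys/local                 # stock CentOS 7 rc.local line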

Remember to do 

chmod +x /etc/rc.d/rc.local

 so that this file is executable during the boot sequence process. 

Reboot, then check the routes using

route -n

Joining the Worker(s) to the Master Node

Remember the join command that was printed out when you created the master node? We need it now for joining the node to the master.

kubeadm reset
kubeadm join --token=… 10.100.100.1:6443 --discovery-token-ca-cert-hash sha256:

NOTE: If the token has expired (unlikely if you're doing Part 1 and Part 2 in one shot), run this at the master node:

kubeadm token create --ttl=0

to create a token that never expires. Then run

kubeadm token list

to see the tokens.

Enabling Your NVIDIA GPUs

sudo vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Add --feature-gates="Accelerators=true" to the ExecStart line, so that it reads:

ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS […] --feature-gates="Accelerators=true"

If the above doesn't work for some reason, try adding --feature-gates=Accelerators=true instead (without the quotes).

Then enable the kubelet service and restart it so the new flag takes effect (the daemon-reload picks up the edited unit file):

systemctl enable kubelet && systemctl daemon-reload && systemctl restart kubelet

Follow the official K8s instructions here on how to make use of your GPUs in POD files.

Verification

Go to the master node (10.100.100.1), and type

kubectl get nodes

You should see a list of all the nodes that have joined your master. You can start running your pod yaml files now.

Try running GPU-enabled POD files following the instructions here.
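As a quick sanity check, a pod spec along the following lines should get scheduled onto a GPU node. This is only a sketch: alpha.kubernetes.io/nvidia-gpu is my understanding of the resource name used by the alpha Accelerators feature gate around v1.8, the pod name and busybox image are placeholders (the image must already be loaded locally, since the cluster is offline), and anything that actually calls CUDA will additionally need the host's NVIDIA driver libraries mounted in, as the linked instructions describe.

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test
spec:
  restartPolicy: Never
  containers:
  - name: gpu-check
    image: busybox
    command: ["sh", "-c", "ls /dev | grep -i nvidia"]
    resources:
      limits:
        alpha.kubernetes.io/nvidia-gpu: 1
EOF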

Crap!

Didn't work? I compiled a list of issues I encountered and successfully troubleshot; to be continued!

Kubernetes – Offline Installation Guide (Part 1 – Setting Up)


A while back, I had the chance to set up a Kubernetes cluster on a group of GPU-enabled servers at my workplace. Each server had 8 NVIDIA GTX 1080 Ti GPUs, 512GB of RAM, and 72 CPU cores.

The criteria were as follows:

  1. Resource management: The user should not need to manage the resources of the cluster; containers should be scheduled onto a server that can satisfy their requirements. Easily resolved by Kubernetes.
  2. Easy horizontal scaling: One should be able to easily add new servers to the cluster.
  3. Offline install: There is, and will be, a security air gap between the servers and the external internet, meaning that we have to install Kubernetes offline.
  4. GPU acceleration: Our users, who run TensorFlow algorithms, require GPU acceleration within their containers.

I jumped at the chance to experiment with containers and K8s and volunteered. The setup looks something like this:

[Architecture diagram]

Kubeadm

Kubeadm is an installer for Kubernetes and is well supported by the K8s community. For our offline installation, we mirrored the kubeadm setup steps closely wherever we could.

https://kubernetes.io/docs/setup/independent/install-kubeadm/

However, kubeadm is still largely online-based: an internet connection is assumed.

You will need a PC with internet access to download the RPMs and Docker images for this installation. I am doing this on a CentOS 7 system, so the repository handling will be yum-based.

 

 

Download the Required Kubernetes RPMs

You can use yum --downloadonly to download all the required RPMs. See the link below for a guide on how to use yum --downloadonly.

 https://www.ostechnix.com/download-rpm-package-dependencies-centos/

The required files are

  • kubeadm 1.8.1
  • kubectl 1.8.1
  • kubelet 1.8.1
  • kubernetes-cni 0.5.1-1
  • ebtables 2.0.10-15
  • ethtool 4.8-1

It’s all here in a zip archive for you lazy ones. These are the ones that I tested against. Of course, you may want to download the latest and greatest ones.  Link
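For reference, a typical invocation looks something like this sketch; the exact package version strings depend on what your mirror carries, and the download directory is arbitrary:

yum install --downloadonly --downloaddir=/root/k8s-rpms \
    kubeadm-1.8.1 kubelet-1.8.1 kubectl-1.8.1 kubernetes-cni ebtables ethtool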

Install Kubeadm and Friends

Install kubeadm, kubectl, kubelet and kubernetes-cni, and start the kubelet service. Run

yum install *.rpm

to install all the RPM files that you downloaded.

Install Docker

Follow the steps to install Docker here: https://kubernetes.io/docs/setup/independent/install-kubeadm/

It should be available in the repository of your Linux distribution; configure your distribution to use a local repository. Alternatively, download the Docker RPM as well on a machine with internet access (same steps as above, using yum --downloadonly).

Typically,

yum install docker

Getting the System Ready

Enable and Start Docker Service

systemctl enable docker && systemctl start docker

Turn off the swap file of the system.

sudo swapoff -a

Remove/comment swap entries in /etc/fstab.

sudo vim /etc/fstab

It looks something like this.

/dev/VolGroup00/LogVol02   swap     swap    defaults     0 0
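After commenting it out, the entry would simply look like this:

# /dev/VolGroup00/LogVol02   swap     swap    defaults     0 0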

Disable SELinux:

setenforce 0

(To keep SELinux in permissive mode across reboots, also set SELINUX=permissive in /etc/selinux/config.)

 

Edit k8s.conf:

vim /etc/sysctl.d/k8s.conf

In k8s.conf, add the lines

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

Then apply the settings:

sysctl --system

 

Download the Docker Images

Kubeadm runs most of the required Kubernetes components as container images, which is a great design: the underlying operating system is kept relatively "clean" compared to something like OpenStack.

After installing kubeadm on an online machine, you can inspect that installation to see the list of container images it uses:

sudo docker images

Those that start with gcr.io/google_containers and weaveworks are what you need.

You can pull the container images one by one using the following commands. The example here pulls kube-apiserver-amd64, v1.8.1.

docker pull gcr.io/google_containers/kube-apiserver-amd64:v1.8.1

docker save gcr.io/google_containers/kube-apiserver-amd64:v1.8.1 > kube-apiserver.tar

<repeat for all required containers>
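If you prefer to script this part, a small loop along these lines does the pull-and-save (the image list below is only illustrative; match it to what sudo docker images reports on your online machine):

#!/bin/bash
# Pull each required image and save it as a tar file for offline transfer.
images=(
  gcr.io/google_containers/kube-apiserver-amd64:v1.8.1
  gcr.io/google_containers/kube-controller-manager-amd64:v1.8.1
  gcr.io/google_containers/kube-scheduler-amd64:v1.8.1
  gcr.io/google_containers/kube-proxy-amd64:v1.8.1
)
for img in "${images[@]}"; do
  name=$(basename "${img%%:*}")   # e.g. kube-apiserver-amd64
  docker pull "$img"
  docker save "$img" > "${name}.tar"
done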

Alternatively, the easy way is to download all the tar files I used for my working install here. Link

 

Load Docker Images into the Kubernetes Computer

CentOS should come with Python installed, which is all you need to run the loader script.

If you have downloaded my containers zip bundle v1.8.1_k8s_containers.zip, you can run:

python load_k8s_containers.py

to load all the containers in the directory into your Kubernetes computer's Docker repository. It's just a script that runs the docker load command for every container tar file in the directory.
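If you would rather not use the Python script, the plain shell equivalent is simply:

for f in *.tar; do docker load < "$f"; done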

At this juncture, your system is ready to run kubeadm. The next part will focus on setting up the master K8s node and getting the worker nodes to join it. It should be fairly straightforward, but there were some nitty-gritty things that I encountered along the way. I will also cover the additional setup required to enable GPU acceleration on each of the 8-GPU servers.

Part 2 – Master, Workers and GPUs

 

Free Logging in tmux


To get started with tmux, a terminal multiplexer that is insanely useful when you have no other access than ssh, check this out. http://www.hamvocke.com/blog/a-quick-and-easy-guide-to-tmux/

For the rest who already know how wonderful tmux is…

You can actually log the output of a tmux pane to a file, and with the history-limit parameter you can log many, many lines.
First, modify the default tmux start-up settings.

vim ~/.tmux.conf

In the file, add the following line:

set-option -g history-limit 1000000

This sets tmux to keep a scrollback buffer of 1,000,000 lines. Be careful with this though: setting it too high may result in a lot of memory being used when you have multiple window panes.

As your user, go into tmux:

> tmux

Run your program in tmux.

> your_program

In tmux

Press Control-B, followed by :, then type

capture-pane -S -1000000

Press Control-B, followed by :, then type

save-buffer /home/pier/history.txt
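If you prefer to do the capture in one shot from the shell inside the pane you want to log, something like this should also work; -p prints the captured pane to stdout and -S sets how far back in the scrollback to start:

tmux capture-pane -pS -1000000 > /home/pier/history.txt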

Of course, for more “serious” use-cases, you could consider something like Poco’s logger, but then you would have to embed it in your program.

Essential Bash Terminal Navigation Shortcuts

 

Here is a list of the bash terminal shortcuts that I have found most useful when navigating the Linux terminal. Learning these by heart will speed up your console navigation. I will add more as time goes by.

Tab (while typing a command)

You need to know this. Basically auto-completes the command for you when possible.

Arrow Key Up

Cycles through your history, useful when you are running a command that you know you had previously typed.

Control-R

Reverse search. Press Control-R and start typing what you intend to run. Control-R again to loop through the results. Chances are, somewhere in your history you have executed the same command. Then press tab to accept the command that reverse search has found for you. Press Control-G to undo the reverse search and return to what you previously typed.

Control-A

Go to beginning of line.

Control-E

Go to end of line.

Control-U

Delete from cursor to the front. Super useful for the times when you type in a wrong password and you know it, and want to delete everything you typed to re-type it.

Control-K and Control-Y

Control-K cuts from cursor to end of line. Use Control-A to go to the front of the line and Control-K to cut. Control-Y does the pasting of the last cut text.

Control-Alt-F1 .. F8

Multiple console log-ins. Useful for the times where there’s no X Windows and you need to navigate between programs/jobs. Next best thing to multiple gnome terminals in X Windows.

Shift-PgUp / PgDown

In the console, this scrolls the screen up/down.

Control-C

Interrupt (kill) the current foreground process running in the terminal. This sends the SIGINT signal to the process, which is technically just a request: most processes will honor it, but some may ignore it.

Control-Z

Suspend the current foreground process running in bash. This sends the SIGTSTP signal to the process. To return the process to the foreground later, use the fg command (optionally with a job spec such as %1); bg resumes it in the background instead.
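For example, a typical suspend-and-resume sequence looks roughly like this (the job numbers and output format are indicative):

$ sleep 1000
^Z
[1]+  Stopped                 sleep 1000
$ jobs      # list stopped/background jobs
$ bg %1     # let job 1 continue in the background
$ fg %1     # or bring it back to the foreground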

Control-D

Close the bash shell. This sends an EOF (End-of-file) marker to bash, and bash exits when it receives this marker. This is similar to running the exit command.

 

Fix for Low Resolution in Console After Installing NVIDIA drivers (CentOS 7)

Installing the NVIDIA driver requires blacklisting the default nouveau driver.

A side effect is that the console resolution drops to 640x480. Here is a hack to get it back to your desired resolution.

Set the following grub kernel parameter:

vga=791

> sudo vim /boot/grub2/grub.cfg

In the menuentry for the kernel you normally boot, insert vga=791 between rhgb and quiet,

i.e.

menuentry 'CentOS Linux......{
.....
    linux16 ...  rhgb vga=791 quiet 
}

 

See here for the full list of settings and corresponding vga=? numbers

https://en.wikipedia.org/wiki/VESA_BIOS_Extensions#Linux_video_mode_numbers

Reboot and you should see that you will get a higher default resolution when you boot to the command line console.

Note that this is intended to be a quick fix: the next time you run grub2-mkconfig, the new parameter will likely be removed, because grub.cfg gets re-generated.
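For a change that survives grub.cfg regeneration, the usual approach on CentOS 7 is to put the parameter into /etc/default/grub and rebuild the config; the path below is for BIOS systems, EFI systems keep grub.cfg under /boot/efi:

sudo vim /etc/default/grub
# append vga=791 inside the GRUB_CMDLINE_LINUX="..." line, then:
sudo grub2-mkconfig -o /boot/grub2/grub.cfg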

Compiling Static vs Dynamic Libraries on CMake


Why compile statically?

This allows for ease of deployment, at the expense of a larger binary executable.
You don't have to manually copy the libraries that you use to the target system.

CMake link_libraries() Magic

I use CLion, which (currently) enforces the use of CMake for compiling C/C++ projects. In your CMakeLists.txt file, first make sure you add the directory where your libraries live, so the linker can find them:

link_directories("/usr/local/lib")

CMake has a magic link_libraries() function which takes the library you specify and determines from its name whether to link it dynamically or statically.

If you type

link_libraries(ev)

It is interpreted as a dynamic linked library.

link_libraries(libev.a)

Tells CMake to look for this static library file in the linked directories, and build it statically into your binary.

Order of Static Linking Matters

Doing

link_libraries(libPocoFoundation.a)
link_libraries(libPocoXML.a)

I met this error:

/usr/local/lib/libPocoXML.a(XMLWriter.o): In function `Poco::XML::XMLWriter::XMLWriter(std::ostream&, int)':
XMLWriter.cpp:(.text+0x28b3): undefined reference to `Poco::UTF8Encoding::UTF8Encoding()'
XMLWriter.cpp:(.text+0x28cc): undefined reference to `Poco::UTF8Encoding::UTF8Encoding()'
......

It seems that the libPocoXML.a static library calls functions in libPocoFoundation.a but can't find them.
Reversing the order of linking the libraries fixes it:

link_libraries(libPocoXML.a)
link_libraries(libPocoFoundation.a)

This is because the linker processes static archives in the order they are given. When it reaches libPocoXML.a, it makes a note of the external functions that library still needs (for example Poco::UTF8Encoding::UTF8Encoding()) and expects them to be provided by libraries that come later on the link line.

In the original order, libPocoXML.a came last, so nothing after it could resolve those symbols and the link failed. Reversing the order lets libPocoXML.a find the functions it needs later on, in libPocoFoundation.a.

This only happens for static libraries, because of the way the linker resolves symbols from archives.
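To see why, it helps to look at what the equivalent raw link line does; archives are scanned left to right (main.o and the output name here are just placeholders):

# Fails: libPocoXML.a comes last, so its undefined Poco symbols have
# nothing left on the link line to resolve them.
g++ main.o -o app /usr/local/lib/libPocoFoundation.a /usr/local/lib/libPocoXML.a

# Works: libPocoFoundation.a is scanned after libPocoXML.a and satisfies
# the symbols it pulled in.
g++ main.o -o app /usr/local/lib/libPocoXML.a /usr/local/lib/libPocoFoundation.a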
Check your binary using ldd:

ldd DSPBox

linux-vdso.so.1 =>  (0x00007fff2eb46000)
libippi.so.9.0 => /opt/intel/compilers_and_libraries_2016.0.109/linux/ipp/lib/intel64/libippi.so.9.0 (0x00007f4a209cd000)
libipps.so.9.0 => /opt/intel/compilers_and_libraries_2016.0.109/linux/ipp/lib/intel64/libipps.so.9.0 (0x00007f4a2078c000)
libippcore.so.9.0 => /opt/intel/compilers_and_libraries_2016.0.109/linux/ipp/lib/intel64/libippcore.so.9.0 (0x00007f4a20580000)
libippvm.so.9.0 => /opt/intel/compilers_and_libraries_2016.0.109/linux/ipp/lib/intel64/libippvm.so.9.0 (0x00007f4a2036a000)
libcufft.so.7.5 => /usr/local/cuda/lib64/libcufft.so.7.5 (0x00007f4a1972f000)
libtiff.so.3 => /usr/local/lib/libtiff.so.3 (0x00007f4a194d3000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f4a19299000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007f4a19094000)
librt.so.1 => /lib64/librt.so.1 (0x00007f4a18e8c000)
libstdc++.so.6 => /lib64/libstdc++.so.6 (0x00007f4a18b84000)
libm.so.6 => /lib64/libm.so.6 (0x00007f4a18881000)
libgomp.so.1 => /lib64/libgomp.so.1 (0x00007f4a1866a000)
libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007f4a18454000)
libc.so.6 => /lib64/libc.so.6 (0x00007f4a18091000)
libjpeg.so.62 => /lib64/libjpeg.so.62 (0x00007f4a17e3c000)
libz.so.1 => /lib64/libz.so.1 (0x00007f4a17c25000)
/lib64/ld-linux-x86-64.so.2 (0x00007f4a20c53000)

 

Statically linking a library which has dynamic library dependencies

This worked:

link_libraries(tiff)

but not the static version:

link_libraries(libtiff.a)

/usr/local/lib/libtiff.a(tif_jpeg.o): In function `TIFFjpeg_destroy':
/home/pier/Software/Development/tiff-3.8.2/libtiff/tif_jpeg.c:377: undefined reference to `jpeg_destroy'
/usr/local/lib/libtiff.a(tif_jpeg.o): In function `TIFFjpeg_write_raw_data':
/home/pier/Software/Development/tiff-3.8.2/libtiff/tif_jpeg.c:320: undefined reference to `jpeg_write_raw_data'
/usr/local/lib/libtiff.a(tif_jpeg.o): In function `TIFFjpeg_finish_compress':
.....

If you dig deeper, you can find the dependencies using readelf:

cd /usr/local/lib
readelf -d libtiff.so | grep 'NEEDED'

0x0000000000000001 (NEEDED)             Shared library: [libjpeg.so.62]
0x0000000000000001 (NEEDED)             Shared library: [libz.so.1]
0x0000000000000001 (NEEDED)             Shared library: [libm.so.6]
0x0000000000000001 (NEEDED)             Shared library: [libc.so.6]

So apparently we still need the above dynamic libraries even for libtiff.a. Fortunately, these files come preinstalled on CentOS and most distributions.
So all you need to do now is:

link_libraries(libtiff.a jpeg z)

libc and libm are linked by default by gcc. CMake interprets this as: compile libtiff.a statically into the binary, but keep its dependencies libjpeg and libz dynamically linked, picking them up from the linked system directories.
To be sure, check using ldd again:

ldd DSPBox

linux-vdso.so.1 =>  (0x00007ffdfc5ee000)
libippi.so.9.0 => /opt/intel/compilers_and_libraries_2016.0.109/linux/ipp/lib/intel64/libippi.so.9.0 (0x00007fa41cf23000)
libipps.so.9.0 => /opt/intel/compilers_and_libraries_2016.0.109/linux/ipp/lib/intel64/libipps.so.9.0 (0x00007fa41cce2000)
libippcore.so.9.0 => /opt/intel/compilers_and_libraries_2016.0.109/linux/ipp/lib/intel64/libippcore.so.9.0 (0x00007fa41cad6000)
libippvm.so.9.0 => /opt/intel/compilers_and_libraries_2016.0.109/linux/ipp/lib/intel64/libippvm.so.9.0 (0x00007fa41c8c0000)
libcufft.so.7.5 => /usr/local/cuda/lib64/libcufft.so.7.5 (0x00007fa415c85000)
libjpeg.so.62 => /lib64/libjpeg.so.62 (0x00007fa415a12000)
libz.so.1 => /lib64/libz.so.1 (0x00007fa4157fc000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007fa4155df000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007fa4153db000)
librt.so.1 => /lib64/librt.so.1 (0x00007fa4151d3000)
libstdc++.so.6 => /lib64/libstdc++.so.6 (0x00007fa414eca000)
libm.so.6 => /lib64/libm.so.6 (0x00007fa414bc8000)
libc.so.6 => /lib64/libc.so.6 (0x00007fa414806000)
libgomp.so.1 => /lib64/libgomp.so.1 (0x00007fa4145ee000)
libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007fa4143d8000)
/lib64/ld-linux-x86-64.so.2 (0x00007fa41d1a9000)

No more dynamic library requirement for libtiff!

Loading tmux on Boot in Linux


tmux is a wonderful tool for displaying virtual consoles on the linux command prompt screen. It’s the next best thing to getting actual GUI windows controllable with a mouse.

Mainly, I use it for ssh purposes: I can ssh to a PC that I know already has tmux launched in the background and type

tmux a

which attaches to the ongoing tmux background session, allowing you to see everything that is going on in it. This is especially useful for embedded systems where multiple processes are launched in the background and you want to monitor them all.

So I have a tmux script launcher.sh here:

#!/bin/bash

SESSION="MPC1"

#allow re-launch
/usr/bin/tmux has-session -t $SESSION 2> /dev/null && /usr/bin/tmux kill-session -t $SESSION
/usr/bin/tmux -2 new-session -d -s $SESSION

echo "Launching tmux"

/usr/bin/tmux split-window -h
/usr/bin/tmux split-window -v
/usr/bin/tmux select-pane -t 0

/usr/bin/tmux send-keys -t $SESSION.0 "cd /path/to/binary1folder" C-m
/usr/bin/tmux send-keys -t $SESSION.0 "./binary1" C-m

/usr/bin/tmux send-keys -t $SESSION.1 "cd /path/to/binary2folder" C-m
/usr/bin/tmux send-keys -t $SESSION.1 "./binary2" C-m

/usr/bin/tmux send-keys -t $SESSION.2 "cd /path/to/binary3folder" C-m
/usr/bin/tmux send-keys -t $SESSION.2 "./binary3" C-m

This basically opens up three panes: it splits the window horizontally first, then splits one of the resulting panes vertically. It then launches a binary in each of the panes. I won't go too much into the scripting here, as there are plenty of resources for that, like this.

Configuring tmux to boot on startup on CentOS 7

Normally this should be pretty straightforward, but I ran into some hiccups.

First,

sudo nano /etc/rc.local

And edit the rc.local file to include

su -c /path/to/your/script/launcher.sh -l your_user_id

The -l your_user_id option means that launcher.sh is launched as the user your_user_id.

Make sure your rc.local is executable.

sudo chmod +x /etc/rc.local

It should then launch when CentOS boots, running launcher.sh in the background, which in turn launches tmux. However, I found that one of the abrt startup scripts, abrt-console-notification.sh, was interfering with the launching of the tmux processes/binaries: it would hang at the console terminal of the tmux screens. Doing the following resolved the problem for me.

cd /etc/profile.d/
chmod -r abrt-console-notification.sh

Basically, make abrt-console-notification.sh non-readable, so that the profile.d startup process skips over this particular script. It's kind of a hack, but it worked. I reckon that at most I lose the automatic bug reporting tool notifications at the console. Note that whether this script runs depends on the type of installation you chose when installing CentOS; I think the minimal install doesn't run into this issue.

Hope this is useful to you; let me know!

P.S. Here's a tmux cheat sheet: https://gist.githubusercontent.com/afair/3489752/raw/e7106ac93c8f9602d3843696692a87cfb43c2d21/tmux.cheat

CentOS7 – Setting Static IP (Persistent)

Say your device name is eth0.
Edit/create /etc/sysconfig/network-scripts/ifcfg-eth0, enter:


# cat /etc/sysconfig/network-scripts/ifcfg-eth0

Sample static IP configuration:

DEVICE=eth0
BOOTPROTO=static
DHCPCLASS=
HWADDR=00:30:48:56:A6:2E
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes

Reboot, and type ifconfig; you should see the interface assigned the static IP.
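If the machine also needs a default gateway and DNS servers, those can go in the same ifcfg-eth0 file; the addresses below are placeholders:

GATEWAY=192.168.1.1
DNS1=8.8.8.8

Then, instead of a full reboot, restarting the network service is usually enough:

sudo systemctl restart network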