Using a 2013 Surface RT as a thin client
For this year's holiday, my wife gave me a collection of miscellaneous old laptops, tablets, and other tech as a gift. One of the items I received was a 2013 Microsoft Surface RT tablet. For my initial project, I decided to remove Windows 8.1, install Raspberry Pi OS, and use it as a pseudo thin client while sitting on the couch, working at the kitchen table, and in other random places I find myself using a laptop.
The fun thing about the Surface RT is that it runs on a 32-bit armv7 Nvidia Tegra 3 CPU. When it was originally released it ran a special ARM version of Windows 8.1 that had several limiting factors: weak hardware, no x86 support, and apps that could only be installed from the Windows Store. See the full specs here.
Project setup
Once I got the workarounds in place it wasn’t too much work to get Raspberry Pi OS installed and transferred to the eMMC.
For this project I am using an iKoolCore R1 mini PC that will perform multiple duties. First, it is a jump host to my other machines, a little security practice I picked up in the last few months. Next, it will be the frontend for the Surface RT, both as a commandline host I can ssh into and as a way to run GUI applications via x2go. Last, the iKool will be an HTPC connected to my living room TV, where I will be using Kodi to watch TV shows and movies from my NAS.
If I’m going to be using the energy to run the iKool (although the energy usage is tiny), I want it to perform multiple jobs.
Doing this will allow the Surface RT to use x2go as a remote desktop solution for specific apps (like Webcord and Firefox), while also using VNC to stream the entire desktop if needed. Since I will also set up nearly all workflows with CLI/TUI interfaces, I can just ssh into the iKool and do everything from the terminal.
Project notes
Below you will find all my notes from setting up the iKoolCore R1 as a dedicated remote desktop, with both GUI and TUI/CLI applications and the apps I enjoy using. I plan on keeping this post up to date with the latest TUI/CLI apps I am using, as a reference guide for dedicated commandline apps. Since the iKoolCore R1 will be accessible on my network, I can use any machine to connect to it and use these tools and this setup, regardless of whether I continue to use the Surface RT.
This is a mammoth post and I’ve assembled an outline for easier navigation.
Desktop apps
Since I am using x2go and its feature to stream specific apps, all apps need to be installed on the host (or mapped to a directory x2go understands, but I'm not tackling that for this build).
A key part is that unless I develop a workaround, Flatpaks and AppImages aren't listed in the "Preferred Apps" in x2go. I know I could manually map them, I just don't want to out of laziness. Therefore I am installing everything with classic installation methods, like a deb or an executable.
Apps
- Webcord (for Discord): Webcord is essentially available for everything under the sun. I am manually installing the deb from their releases page. Interestingly enough, there is a Webcord install file for armhf, which would work on the Surface.
- Firefox browser: Installed with the default installation on Debian.
- LibreOffice: Same as Firefox, already installed.
- Geany: Same, same, same. My preferred GUI for interacting with development files and scripts.
- KeePassXC: For passwords; installed from the default apt repos.
- Veracrypt: I doubt I will encrypt many things, but I want the ability to decrypt as necessary.
Kodi
In addition to being a jump host and a remote host for my other devices, the iKool is also a Kodi client for the TV. This is so we can watch our owned content instead of going through Plex/Jellyfin. I'm trying to get off of Plex, and Jellyfin is terrible at handling subtitles; Kodi handles all files and subtitles easily. I already have the iKool configured to connect via NFS to the NAS, so installing Kodi and setting it up was easy.
I installed Kodi using the flatpak to help with isolating it slightly on the system, since the iKool is doing many important things.
Other than our media, I have it configured for our Kodi sports setup so my son and I can hang out on the weekends with the sportsball.
File sharing
This mini PC is acting as a remote desktop for several devices. I would like the ability to download something on the iKool and then grab it from any other device as needed. I also want the ability to download something on any device in the house and easily upload it to the iKool.
The key is that a device needs to be able to get the file without installing an app or doing any configuration: go to a URL, up/download files, done. I could use Nextcloud, but each device would need the app set up and logged in. I could use Syncthing, but again I would need to connect the devices with an app. I want something very easy, where I download on the iKool and literally any device in the house can get to it with no additional steps.
I decided to go with dufs as it has all those features, plus the ability to do light edits and to display .html files (for use with SingleFile if necessary). Plus, it has an API that allows interactions directly from the commandline.
docker run -d --name=dufs -e TZ=America/Los_Angeles --restart unless-stopped -v /home/USER/Downloads:/data -v /home/USER/docker_config/dufs:/config -p 5000:5000 -it sigoden/dufs -c /config/config.yaml
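For anyone running containers through compose instead, the same container can be sketched as a compose file. This assumes the exact image, paths, and flags from the docker run command above:

```yaml
services:
  dufs:
    image: sigoden/dufs
    container_name: dufs
    restart: unless-stopped
    environment:
      - TZ=America/Los_Angeles
    volumes:
      - /home/USER/Downloads:/data
      - /home/USER/docker_config/dufs:/config
    ports:
      - "5000:5000"
    # Arguments appended to the image entrypoint, same as the docker run form
    command: -c /config/config.yaml
```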
Configuration file
serve-path: '/data'
bind:
- 0.0.0.0
port: 5000
hidden:
- '*.log'
- '*.lock'
allow-all: true
log-format: '$remote_addr "$request" $status $http_user_agent'
dufs API
Dufs offers an API, which means I can use it from the command line. This is beneficial for very low-compute devices where even opening a browser is a heavy lift. It is also helpful for moving files to a headless device, such as one of my VMs.
Upload a file
curl -T path-to-file https://reverse.proxy.TLD/new-path/path-to-file
Download a file
curl https://reverse.proxy.TLD/path-to-file
Download a folder as zip file
curl -o path-to-folder.zip https://reverse.proxy.TLD/path-to-folder?zip
Delete a file/folder
curl -X DELETE https://reverse.proxy.TLD/path-to-file-or-folder
Create a directory
curl -X MKCOL https://127.0.0.1/path-to-folder
Move the file/folder to the new path
curl -X MOVE https://reverse.proxy.TLD/path -H "Destination: https://reverse.proxy.TLD/new-path"
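The calls above are easy to wrap in small shell helpers. This is a sketch, not part of my actual setup: the base URL `https://files.example.com` is a placeholder for the reverse proxy address, and a `DRY_RUN` switch echoes the curl command instead of running it, so the invocation can be checked first.

```shell
#!/bin/sh
# Placeholder for the dufs endpoint behind the reverse proxy.
DUFS_URL="https://files.example.com"

# Upload a local file to the dufs root (curl -T form from the API notes).
dufs_up() {
  cmd="curl -T $1 $DUFS_URL/$(basename "$1")"
  [ "$DRY_RUN" = 1 ] && echo "$cmd" || $cmd
}

# Download a remote path into the current directory.
dufs_down() {
  cmd="curl -o $(basename "$1") $DUFS_URL/$1"
  [ "$DRY_RUN" = 1 ] && echo "$cmd" || $cmd
}

# Preview the commands without touching the network.
DRY_RUN=1
up=$(dufs_up /tmp/report.pdf)
down=$(dufs_down docs/report.pdf)
echo "$up"
echo "$down"
```

With `DRY_RUN` unset, the same functions execute the curl calls directly.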
CLI apps
First, some basics:
$ sudo apt install -y gnome-disk-utility adb fastboot grsync ncdu git xz-utils unzip wget curl rsync fish tmux speedtest-cli snapd nfs-common exa ufw ffmpeg software-properties-common flatpak distrobox iperf3 nmap duf podman tilda wakeonlan gdebi
Fish shell
Finally, a command line shell for the 90s
This is much better than the standard bash shell. Initial setup:
$ sudo apt install fish
$ echo /usr/bin/fish | sudo tee -a /etc/shells
$ chsh -s /usr/bin/fish
Then I also like to have the man pages parsed for easier autocompletion:
$ fish_update_completions
Last, I have a couple places added to my $PATH:
$ fish_add_path -a /home/USER/.local/bin
$ fish_add_path -a /home/USER/bin
I have many customizations for fish and keep a thorough config file in sync across all my devices, including remapping keys plus a custom theme. See more notes in [[Fish Shell]].
Utilities
cheat
This tool allows you to download community-made cheat sheets for commands and gives you the ability to edit them or create new ones. I love this concept, especially mapped to a synced directory so I can drop common notes into a CLI interface for commandline tools on all my machines.
I have it installed from source so I can edit the config. I originally installed with the AUR in distrobox, but it maps the config file into /etc/some/path inside of the container. Installing from source puts the config in a much better ~/.config/cheat.
Here’s the install from source:
$ cd ~/tmp
$ wget https://github.com/cheat/cheat/releases/download/4.4.2/cheat-linux-amd64.gz
$ gunzip cheat-linux-amd64.gz
$ mv cheat-linux-amd64 cheat
$ chmod +x cheat
$ mv cheat /home/USER/bin
Next, I want to adjust the configuration and map the cheatpaths to a couple folders in Nextcloud so I have a single repo across all my machines. Both community and personal paths are set to ~/nextcloud/Backups/cheat/cheatpaths/<community or personal>.
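The relevant section of `~/.config/cheat/conf.yml` ends up looking roughly like this; the paths are my Nextcloud mapping, so adjust to taste:

```yaml
cheatpaths:
  - name: community
    path: ~/nextcloud/Backups/cheat/cheatpaths/community
    tags: [ community ]
    readonly: true

  - name: personal
    path: ~/nextcloud/Backups/cheat/cheatpaths/personal
    tags: [ personal ]
    readonly: false
```

Personal sheets stay writable while the community set is read-only, so edits always land in the personal path.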
tldr
In addition to cheat I have tldr installed as well. This package provides short, simple cheatsheets for CLI tools. Commandline usage and flags are hard to remember, and I will take as many tools as possible to help with that.
I have this set up in an Arch distrobox because I flat-out refuse to use pip and npm.
$ distrobox enter archie
$ yay -S tldr
$ distrobox-export -b /usr/sbin/tldr -ep /home/USER/.local/bin
The people behind tldr also have an [[Awesome PWA]].
Tmux
Not much to say. Just an absolute necessity for working on the command line. From their git:
tmux is a terminal multiplexer: it enables a number of terminals to be created, accessed, and controlled from a single screen. tmux may be detached from a screen and continue running in the background, then later reattached.
I have several customizations. My setup can be found in [[Tmux]].
Btop
A very nice looking system monitoring tool. You can install it from the releases page for basically every platform, and it happens to be in the Debian 12 apt repos:
$ sudo apt install btop
Bottom
Another TUI/Ncurses style system monitoring tool that is an alternative to btop. I use bottom occasionally as it uses very little resources and btop can be quite heavy.
I’m using the snap:
$ sudo snap install bottom
$ sudo snap connect bottom:mount-observe && sudo snap connect bottom:hardware-observe && sudo snap connect bottom:system-observe && sudo snap connect bottom:process-control
One nice thing is it has a basic mode. To run the basic interface:
$ bottom -b
I use an alias to use this instead of htop:
$ alias htop='bottom -b'
$ funcsave htop
gping
An upgraded version of ping with a bunch of useful addons, plus graphing.
To install on Debian/Ubuntu/Mint, it comes from the Azlux repo:
$ echo "deb [signed-by=/usr/share/keyrings/azlux-archive-keyring.gpg] http://packages.azlux.fr/debian/ stable main" | sudo tee /etc/apt/sources.list.d/azlux.list
$ sudo wget -O /usr/share/keyrings/azlux-archive-keyring.gpg https://azlux.fr/repo.gpg
$ sudo apt update
$ sudo apt install gping
I use an alias to default to gping instead of ping:
$ alias ping='gping'
$ funcsave ping
Other networking
- ssh
- nmap
- iperf3
exa
GitHub - ogham/exa: A modern replacement for ‘ls’.
A more robust ls. Like almost everything on this list, it will work on basically every platform, including Android in Termux.
It is in the repos, so install with:
sudo apt install exa
I set up an alias for exa because my default is to always use ls and I can’t seem to break it.
alias ls='exa -l'
funcsave ls
duf
A disk usage analyzer for CLI that has a clean interface. It is essentially a TUI interface for df and du in one.
On Debian:
sudo apt install duf
Files
NextCloud WebDav
This is how to set up the WebDAV connection with Nextcloud.
Accessing Nextcloud files using WebDAV
And a better version of the setup process: Arch Wiki: DAVfs2
Step 1: Install WebDAV file system:
$ sudo apt install davfs2
$ su -
$ usermod -aG davfs2 USER
Step 2: Create a nextcloud directory in your home directory for the mountpoint, and ~/.davfs2 for your personal configuration file:
$ mkdir ~/nextcloud
$ mkdir ~/.davfs2
Step 3: Copy /etc/davfs2/secrets to ~/.davfs2:
$ sudo cp /etc/davfs2/secrets ~/.davfs2/secrets
Step 4: Set yourself as the owner and make the permissions read-write owner only:
$ chown <linux_username>:<linux_username> ~/.davfs2/secrets
$ chmod 600 ~/.davfs2/secrets
Step 5: Add your Nextcloud login credentials to the end of the secrets file, using your Nextcloud server URL and your Nextcloud username and password:
https://example.com/nextcloud/remote.php/dav/files/USERNAME/ <username> <password>
In these setups, I’ve had to edit both secrets files. For whatever reason, the one in /etc/davfs2 needs to have the info as well.
Step 6: Add the mount information to /etc/fstab:
https://example.com/nextcloud/remote.php/dav/files/USERNAME/ /home/<linux_username>/nextcloud davfs user,uid=USERNAME,rw,noauto 0 0
A previous implementation didn’t have uid=username and would mount the filesystem with root. Mounting with the user ID is imperative for the right user permissions.
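A quick way to sanity-check the secrets file's permissions is `stat`. This sketch exercises the check against a scratch file rather than the real `~/.davfs2/secrets`, so it can be run safely anywhere:

```shell
#!/bin/sh
# Create a throwaway stand-in for ~/.davfs2/secrets in a temp directory.
demo="$(mktemp -d)/secrets"
printf 'https://example.com/nextcloud/remote.php/dav/files/USERNAME/ <username> <password>\n' > "$demo"

# Read-write for the owner only, as davfs2 expects for a secrets file.
chmod 600 "$demo"

# stat -c %a prints the octal mode; anything other than 600 means
# the file is group- or world-readable and should be tightened.
mode=$(stat -c %a "$demo")
echo "$mode"
```

Run the same `stat -c %a ~/.davfs2/secrets` against the real file after Step 4.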
NFS
For connecting to the NAS, which has many, many more files than just Nextcloud.
To manually mount:
sudo mount IP_ADDRESS:/path/to/share /home/USER/nas
To auto-mount at boot, add this to the /etc/fstab file:
IP_ADDRESS:/path/to/share /home/USER/nas nfs auto,nofail,noatime,nolock,intr,tcp,actimeo=1800 0 0
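To avoid typos in that long line, it can be assembled from variables and inspected before appending to /etc/fstab. The IP, share, and mountpoint here are placeholders:

```shell
#!/bin/sh
# Placeholder values -- substitute your NAS address and paths.
NAS_IP="192.168.1.50"
SHARE="/volume1/media"
MOUNTPOINT="/home/USER/nas"
OPTS="auto,nofail,noatime,nolock,intr,tcp,actimeo=1800"

# Build the fstab entry and print it for review.
entry="$NAS_IP:$SHARE $MOUNTPOINT nfs $OPTS 0 0"
echo "$entry"

# Once it looks right, append it and mount everything:
#   echo "$entry" | sudo tee -a /etc/fstab
#   sudo mount -a
```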
yazi
Updated: 01/24/2024. Switched from nnn to yazi.
I discovered yazi from Terminal Trove, a website dedicated to commandline applications. It even has an RSS feed, which makes it so new terminal applications show up in my feed reader every week.
What has sold me on yazi are all the additional features. For example, it shows image previews while navigating through folders. It also shows file previews and will open in your file editor of choice. Plus it integrates with nerd-fonts to show file icons and symbols.
Overall it is surprisingly fast and is probably my favorite file manager on all systems, graphical or commandline. It is so much faster navigating directories than a graphical file manager and the previews load faster, too.
Github: yazi terminal file manager
See my setup at this blog post: My new favorite file manager
kpxhs
Although I can pass KeePassXC GUI through x2go, it is still nice to have a way to access passwords via the commandline. KeePassXC has a CLI client, but it is a pain in the ass. You have to constantly enter your master password and get search just right (dealing with spaces) to find anything.
Luckily there is a TUI client that works in conjunction with KeePassXC-CLI called kpxhs. It is just a TUI viewer for KeePassXC-CLI and not its own implementation of the KeePass standard.
To get started, first install KPXC and make sure it is in your $PATH. I’m just installing from the repos:
$ sudo apt install keepassxc
Then, download kpxhs and put it in your $PATH. I have ~/bin mapped, so I just drop it there. Once downloaded, mark it as executable and run.
$ cd ~/bin
$ wget https://github.com/akazukin5151/kpxhs/releases/download/v1.11/kpxhs-linux
$ mv kpxhs-linux kpxhs
$ chmod +x ~/bin/kpxhs
Now that it is installed, we need to update the config so it will open the right kdbx file on launch. Otherwise you will have to manually enter the path every time it opens.
First step is to write the config. Run kpxhs --write-config. This will generate the config file and place it in ~/.config/kpxhs. Now edit config.hs with the path to the key file. It will look something like this:
Config { timeout = Just (Seconds 10)
, dbPath = Just "/home/me/kpxhs/test/kpxhs_test.kdbx"
, keyfilePath = Just "/home/me/kpxhs/test/keyfile.key"
, focusSearchOnStart = Just False
}
Text
Micro
My CLI text editor of choice. I’m not a developer, so many features of other tools don’t apply to me.
$ sudo apt install micro
The config lives at ~/.config/micro/settings.json (open it with micro). I only have these settings:
{
"softwrap": true,
"wordwrap": true
}
I set micro as the default in two places. First, using this command:
set -Ux EDITOR /usr/bin/micro
And then editing /usr/bin/select-editor to be /usr/bin/micro.
Glow
This is a markdown renderer/viewer that renders files in a heavily stylized way. It has lots of features and config options. It works well for viewing .md files stored in my Obsidian notes, which I sync using WebDAV.
Glow is just a renderer, but can open files with a shortcut to edit using a preferred editor. I am using micro.
To install:
$ sudo mkdir -p /etc/apt/keyrings
$ curl -fsSL https://repo.charm.sh/apt/gpg.key | sudo gpg --dearmor -o /etc/apt/keyrings/charm.gpg
$ echo "deb [signed-by=/etc/apt/keyrings/charm.gpg] https://repo.charm.sh/apt/ * *" | sudo tee /etc/apt/sources.list.d/charm.list
$ sudo apt update && sudo apt install glow
Web
browsh
Rather than using w3m or lynx as a CLI browser, I like browsh. From their website:
Browsh is a fully-modern text-based browser. It renders anything that a modern browser can; HTML5, CSS3, JS, video and even WebGL. Its main purpose is to be run on a remote server and accessed via SSH/Mosh or the in-browser HTML service in order to significantly reduce bandwidth and thus both increase browsing speeds and decrease bandwidth costs.
I have it installed with distrobox from the AUR:
$ distrobox enter archie
$ yay -S browsh
$ distrobox-export -b /usr/sbin/browsh -ep ~/.local/bin
browsh is actively maintained (unlike Carbonyl for Chromium), built on Firefox, and uses several Firefox features including installing extensions.
wiki-TUI
This is a TUI application for browsing Wikipedia. There are no deb installation files, and it requires cargo to build and install from source. I hate cargo and other language repos (looking at you, pypi and npm) for installing applications.
Instead I am using distrobox to install from the AUR. After installing, I exported the executable so I can run it without entering the archie container. The export uses -b for the bin file and then exports to a $PATH on the host configured in [[Fish Shell]].
$ distrobox enter archie
$ yay -S wiki-tui
$ distrobox-export -b /usr/sbin/wiki-tui -ep ~/.local/bin
Entertainment
Newsboat
I have Newsboat installed via Snap. Before doing anything, run newsboat once to generate the folders for the snap.
Now, we need to create a couple of files, starting with urls, which newsboat needs.
$ touch ~/snap/newsboat/NUM/.newsboat/urls
After installing, we also need a config file:
$ micro ~/snap/newsboat/NUM/.newsboat/config
At the top of the config file, enter this to connect with [[FreshRSS - Corriveau Handbook]] and have fancy shell highlighting:
urls-source "freshrss"
freshrss-url "https://reverse.proxy.TLD/api/greader.php"
freshrss-login "USER"
freshrss-password "PASSWORD"
# Display
show-read-feeds no
feed-sort-order unreadarticlecount-asc
color info default default reverse
color listnormal_unread cyan default
color listfocus blue default reverse bold
color listfocus_unread blue default reverse bold
color article cyan default
color listnormal yellow default
text-width 80
articlelist-format "%4i %f %D %?T?|%-17T| ?%t"
highlight feedlist "^ *[0-9]+ *N " magenta magenta
highlight articlelist "^ *[0-9]+ *N " magenta magenta
highlight article "(^Feed:.*|^Title:.*|^Author:.*)" red default
highlight article "(^Link:.*|^Date:.*)" white default
highlight article "^Podcast Download URL:.*" cyan default
highlight article "^Links:" magenta black underline
highlight article "https?://[^ ]+" green default
highlight article "^(Title):.*$" blue default
highlight article "\\[[0-9][0-9]*\\]" magenta default bold
highlight article "\\[image\\ [0-9]+\\]" green default bold
highlight article "\\[embedded flash: [0-9][0-9]*\\]" green default bold
highlight article ":.*\\(link\\)$" cyan default
highlight article ":.*\\(image\\)$" blue default
highlight article ":.*\\(embedded flash\\)$" magenta default
browser "xdg-open" # This will open the browser defined by xdg-settings
Run with:
$ newsboat -r
Tut
A TUI for Mastodon with vim inspired keys. The program has most of the features you can find in the web client.
This comes from the Azlux repo that was also installed above for gping:
$ sudo apt install tut
Then run tut to be prompted to log in. It uses fairly intuitive keyboard bindings and has notifications access, plus you can enable mouse support in the config.
tut has the ability to open links and media from the CLI. My preference is to open links in browsh instead of lynx or w3m. After setting up browsh, we can point tut at it by editing ~/.config/tut/config.toml and changing this section:
[media.link]
# The program to open links. TUT_OS_DEFAULT equals xdg-open on Linux, open on
# MacOS and start on Windows.
# default="TUT_OS_DEFAULT"
program="browsh"
# Arguments to pass to the program.
# default=""
args=""
# If the program runs in the terminal set this to true.
# default=false
terminal=true
For images, I am using feh along with ssh X11 forwarding. First, it needs to be installed on both the iKool and the Surface:
$ sudo apt install feh
Next, edit the tut config to use feh. It is this section:
[media.image]
# The program to open images. TUT_OS_DEFAULT equals xdg-open on Linux, open on
# MacOS and start on Windows.
# default="TUT_OS_DEFAULT"
program="feh"
# Arguments to pass to the program.
# default=""
args=""
# If the program runs in the terminal set this to true.
# default=false
terminal=true
Last, we need to connect using trusted X11 forwarding via an ssh flag. When we do, the image opens in a feh window on the local machine. Connect like this:
$ ssh -Y ikool
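Rather than remembering the flag every time, trusted forwarding can be made the default for this host in ~/.ssh/config. The `ikool` host alias and the placeholders are assumptions; `-Y` is equivalent to enabling both options below:

```
Host ikool
    HostName IP_ADDRESS
    User USER
    ForwardX11 yes
    ForwardX11Trusted yes
```

With that in place, a plain `ssh ikool` behaves the same as `ssh -Y ikool`.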