Category: Tech

Eclipse, modular projects and JUnit

Helmut Neukirchen, 21. September 2021

I (and many others) always had problems making JUnit (as added by Eclipse automatically when creating JUnit test cases) work with modular projects, i.e. projects that use module-info.java files to define dependencies. Finally, I found solutions:

  • Let the new project wizard not create the module-info.java file -- deleting it afterwards might not be enough, as Eclipse has already made some modifications to the module path settings (OK, trivial), or
  • Choose Java ≤8 in the project settings (i.e. module-info.java is ignored -- again: trivial), or
  • Apply quick-fixes: in the class containing your JUnit test cases, hover over the org.junit.jupiter.api import and select the quick-fix “Add ‘requires […]’ to module-info.java”. Then, in module-info.java, hover with the mouse over the squiggly line (the important point: clicking on the light bulb does not offer any quick-fix, so you need to hover) and apply “Move classpath entry ‘JUnit5’ to modulepath”. This should fix it (see the sketch after this list)! or
  • Create an Eclipse project with an extra src folder (e.g. src-test, or use the Maven default structure) that has (via “Allow output folders for source folders”) its own output folder (e.g. bin-test, or use the Maven default structure) and that has “Contains test sources” toggled to “Yes” (in project properties – Java Build Path – Source). The test src folder should then have a more grey-ish icon. Either do this with the New project wizard, or afterwards using the project properties. As a result, JUnit is then not part of the modular project anymore. (This also has the advantage that test code is better separated.)
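
For reference, a minimal sketch of what module-info.java could contain after the quick-fixes of the third option (the module name my.app is a placeholder):

module my.app {
    // added by the quick-fix "Add 'requires [...]' to module-info.java"
    requires org.junit.jupiter.api;
}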

Power consumption of Raspberry Pi 4 versus Intel J4105 system

Helmut Neukirchen, 7. June 2021

While I intended to use a Raspberry Pi 4 as a small server, I also ordered from China a small system (a BEBEPC, comparable to the Qotom mini PCs; while the Qotom mini PCs are slightly better documented, they typically have less powerful CPUs: even though they have Core i3 CPUs, these are so old that a more recent Celeron CPU is faster) based on an Intel J4105 CPU (= TDP of 10 W, 4 cores, 1.5 GHz base frequency, 2.5 GHz burst frequency). It has the advantage over the Raspi of native SATA ports: one standard SATA connector with 5 V power supply and one mSATA connector carrying 3.3 V power supply -- I ordered an mSATA-to-SATA adapter and a 5 V SATA power splitter cable to be able to build a RAID system of two SATA SSDs, as mSATA SSDs are only available in smaller sizes. (Note that the SATA ports are only 3 Gbit/s, i.e. 300 MB/s, which means that a 500 MB/s SSD is already overkill.) However, the biggest advantage is that it is (obviously) able to run Intel-only code, in particular Virtual Machine images or containers that are only available for Intel, e.g., using Proxmox VE.

Both systems come with 8 GB LPDDR4 RAM, but for the J4105, even 16 GB are possible (see below).

Both systems can be passively cooled: for the Raspi, I used a cooling case from https://www.coolingcases.com/ -- it cools well, but the metal affects the range of the onboard WiFi (not that relevant for a server). The J4105 also came with a case that allows passive cooling -- while it is still tiny for a PC, it has approx. 4 times the volume of the Raspi.

The J4105 system certainly has more compute power than the Raspi's ARM CPU, so the remaining question is the power consumption. Hence, I did some tests and measurements using a cheap power meter that claims 2% precision. Both systems were connected via FullHD HDMI to a monitor.

Intel J4105 measurements

As I had not installed Linux yet at that point, the system was running Windows 10, and idle refers to having only the built-in Task Manager running in the foreground (to display the clock frequency) plus all the background services that Windows 10 has by default. CPU load was generated using a batch file containing an endless loop.

The J4105 clocks down to 0.78 GHz when idle and the power consumption of the whole system (with one mSATA and one SATA SSD) is then 3.8 W.

With 1 core being busy, it still clocks up to 2.4 GHz and consumes 7.2 W.

With 2 cores being busy, it still clocks up to 2.4 GHz and consumes 10.3 W.

With 3 cores being busy, it clocks up to 2.35 GHz and consumes between 11.8 W and 12.1 W.

With 4 cores being busy, it clocks up to 2.19 GHz and consumes between 11.4 W and 12.0 W. (So it seems the reduced clock saves power).

I ran it with 4 cores being busy for an hour, and the measurements did not change, i.e. no thermal throttling seems to have occurred (nor did the case get hot, so the passive cooling is really good -- or the contact between CPU and case is bad, but then thermal throttling could have been expected).

Raspberry Pi 4 measurements

I had OSMC with KODI running, but nothing else, i.e. the KODI UI being idle, but all the background services running. The latest firmware as of 4 June 2021 was used; storage was an SDHC card only. CPU load was generated using the stress command.
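
For reference, a minimal example of loading a given number of cores with the stress tool (the number of workers and the duration here are arbitrary):

# load all 4 cores for 10 minutes
stress --cpu 4 --timeout 600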

The Raspberry Pi 4 consumed 3.8 W to 4.0 W when idle.

With 1 core being busy, it consumes 4.5 W.

With 2 cores being busy, it consumes 5.0 W.

With 3 cores being busy, it consumes between 5.4 W and 5.5 W.

With 4 cores being busy, it consumes 6.0 W.

Temperature with the cooling case from https://www.coolingcases.com/ was approx. 52 °C (so it prevented thermal throttling, which would start at 80 °C). Surprisingly, even in idle mode, the temperature was 40-42 °C (the tiny case does feel much warmer than the bigger case of the Intel system -- so, it seems: size matters).
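
As a side note, the temperature and throttling state can be read on the Raspberry Pi itself with the firmware's vcgencmd tool (available on Raspberry Pi OS; it should also be present on OSMC), e.g.:

# report the SoC temperature
vcgencmd measure_temp
# a non-zero value indicates under-voltage or thermal throttling events
vcgencmd get_throttled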

Conclusions

In summary, the idle power consumption of both systems is comparable, and while the busy consumption is lower with the Raspberry Pi 4, it is of course less powerful than the J4105 system. For the J4105, I never observed the full 2.5 GHz burst clock rate (but 2.4 GHz). Even though the CPU TDP is 10 W, the whole system consumed up to 12.1 W (e.g. the RAM, the two SSDs, WiFi, HDMI output, external power supply, etc. probably also add their share -- during boot, I even saw 14.8 W).

Note that others suggest 2.7 W idle for the Raspi 4 (but that seems to require switching off a lot of I/O, e.g., HDMI etc. -- which I did not do, nor did I minimise background processes) or even as low as 2.1 W. On the other hand, many others report that they do not (with either a fan or a heatsink) get the system cooler than 42 °C in idle, so getting the Raspi warmer than the touch of your hand seems to be normal, but the J4105 system with the bigger case was considerably cooler.

It seems that the J4105 is a good 24/7 home server system, i.e. more powerful than the Raspi when needed, but still not consuming more power when idle. (A German c't article confirms this for a thin client that is also J4105-based.)

The ultimate passively cooled server with ECC RAM would be the ASRock Industrial iBOX-V2000M or iBOX-V2000V -- but these are not available for private users. But any ASRock motherboard in general, together with AMD Pro CPUs, should support ECC.

Some documentation on the BEBEPC system

RAM: 16 GB DIMMs supported

An even more powerful system based on the J4125 (= J4105 with higher clock) suggests that with dual-rank modules even 16 GB per RAM module are possible, i.e. with two banks, even 32 GB of RAM. The power consumption of that J4125-based system has also been measured and is higher (best explained by the fact that it is faster, i.e. cannot clock down as much: 2000-2700 MHz vs. 1500-2500 MHz).
I therefore ordered a 16 GB DIMM: I can confirm that this works. However, my system has just one RAM socket, so 16 GB is the maximum.

Auto power on

It seems that to make the system automatically power-on after a power outage, a jumper needs to be set at PWRON1 at the pins marked PWR_SW1.

Independent of that, the system does not start after having been powered off -- not even after the power button has been pressed: in this case, the RTC/CMOS battery needs to be removed and inserted again.

BIOS settings

F11 or DEL to enter the AMI BIOS.

F2 to select boot drive.

MAC address can be found via Advanced.

Change OS to Linux via Chipset-South Bridge.

Change SATA Device Type to SSD via Chipset-South Cluster Configuration (not sure whether the Mechanical Presence Switch setting matters and needs to be disabled).
Not sure about DITO (the time a given port must be idle before the HW may enter DevSleep autonomously): it might help if the SSD gets hot/consumes too much energy.

Chipset-Miscellaneous Configuration: disable Power Button Debounce Mode to make the power button wake the system from standby mode.

Security-Secure Boot: Disable if booting Linux causes problems.

Security-Quiet Boot: Disable to see some BIOS messages at boot.

Boot: Change order of boot devices.

US keyboard

The BIOS assumes a US keyboard layout: on German/Icelandic physical keyboards, the pipe symbol is then on the key left of the enter key.

Update 2023: Intel N100, N200 and N300/N305 CPUs

The Intel N100, N200 and N300/N305 CPUs are some sort of successor to the J4105 CPU. N100 and N200 both have 4 cores; the N200 can clock higher and has a better GPU than the N100, and the CPUs whose model number ends with the digit 5 are allowed to get hotter (i.e. have a higher TDP), i.e. they can probably sustain using all cores at highest speed for longer. The N300/N305 has 8 cores and is also marketed as "Core i3". All support 2.5 Gb Ethernet. While they support only one DIMM (i.e. single channel, which is slower than two memory channels), they support DDR5 RAM, which is anyway 50% faster and has on-die ECC; however, this is not real ECC, as the bus between CPU and RAM has no ECC and ECC errors are not reported to the CPU, so it is not possible to identify failing RAM DIMMs.

But in fact, even In-Band ECC (IBECC) is supported by these CPUs, and e.g. the ODROID H4 supports this in its BIOS: part of the RAM is used for ECC and also part of the data bus bandwidth, i.e. data transfers are slowed down. The big advantage is that the ECC protection also applies to the transmission of data over the data bus and that the OS can report ECC errors, so that you get informed about rotting RAM. Linux supports IBECC via Intel's IGEN6 module starting from version 2.5.1, which has been integrated in kernel version 5.11. The overhead of IBECC is that for every 512 bits, 16 bits of the normal RAM are used for IBECC (compared to 64 + 8 for traditional ECC and 128 + 8 for the DDR5 on-die ECC), i.e. the available amount of RAM is reduced by 1/32, and the performance penalty is ca. 10-20 %, depending on the workload, with workloads centred on the on-chip GPU suffering most. (I guess that if IBECC is used together with DDR5 on-die ECC, which silently corrects single-bit errors, these never get detected, as single-bit errors are not really considered a sign of failing RAM but anticipated as normal due to the high density, and therefore never get reported by the IBECC -- but two-bit errors probably cannot be corrected by the on-die ECC and should get reported by the IBECC?) Even if the BIOS does not support enabling IBECC, there are claims that using the AMISCE tool from AMI you can set this from the command line (and then reboot -- just take care that you can clear the CMOS/NVRAM or have some rescue mode); on Linux, this is the SCELNX_64 / scelnx tool, but maybe the uefisettings tool works as well?
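
As a rough sketch (assuming IBECC has been enabled in the BIOS and a kernel >= 5.11 is running), you can check on Linux whether the IGEN6 EDAC driver is active and whether any ECC errors have been counted:

# the in-band ECC driver should be loaded (module igen6_edac)
lsmod | grep igen6
# corrected (ce) and uncorrected (ue) error counters via the EDAC sysfs interface
grep . /sys/devices/system/edac/mc/mc*/ce_count /sys/devices/system/edac/mc/mc*/ue_count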

The Intel i3-N305 fanless mini PCs also look good, but you never know what backdoors are in the BIOS. The Protectli systems have coreboot, but are more expensive and have outdated hardware, i.e. none of these new processors. Starlabs Byte has an N200 with coreboot, but DDR4 RAM only. The official maximum RAM is 16 GB according to Intel, but there are systems offered with 32 GB:

  • TerraMaster F4-424 Pro NAS with N300 and 32 GB RAM
  • CWWK Magic Computer with various CPU and RAM configurations; it even has a PCIe slot
  • iKoolCore R2: it has a fan but is super tiny. (Getting video passthrough of a VM guest on the video ports might be a challenge when using Proxmox VE as host, but it is do-able with some fiddling around.)
  • ODROID H4 from Hardkernel, which is South Korean, so no China BIOS -- hence the ODROID H4 sounds like a very good buy, and also the support provided by Hardkernel seems to be good. But it does not come with a decent case. If you sacrifice the NVMe SSD, you can use the NVMe port (which is in fact just PCIe) to add 4 further 2.5 Gb Ethernet ports, making it a great router; however, you then need to use either the slow eMMC or the SATA ports for storage.
  • You can also find many more devices at AliExpress...

Note that most of the above come with Intel i226 Ethernet chips that have good driver support in Linux and BSD; however, there are claims that these chips crash after a couple of hours and the only way to prevent this is to switch off PCIe power saving (ASPM) -- on the other hand, you find people reporting their N100 systems with i226 running rock-solid.
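
A sketch of the reported workaround (disabling PCIe ASPM globally via a kernel parameter -- whether this is really needed for a given board is unclear):

# in /etc/default/grub, append pcie_aspm=off to the kernel command line, e.g.:
GRUB_CMDLINE_LINUX_DEFAULT="quiet pcie_aspm=off"
# then apply the change and reboot:
update-grub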

In general, these new CPUs are slightly faster than, e.g., an 8-core C3758 Xeon-like Atom CPU that is three years older, but that supports more RAM and even ECC. But these new CPUs are more I/O limited (in terms of PCIe lanes) in comparison to that 8-core C3758 Xeon-like Atom CPU, which has a 25 W TDP but can still be passively cooled.

A test of an Intel N200-based fanless mini PC from Asus with DDR4 RAM mentions that the N200 with DDR5 RAM is faster. Idle power consumption is claimed to be 5-6 W, and 22 W under full load.

Others show an N305 system idling at 15-16 W, which is significantly more. (But the N305 has 15 W TDP vs. 7 W for the N300 -- otherwise, both CPUs are exactly the same; I guess the N300 will simply start to throttle when stressing all cores. But the N305 can also be restricted to a lower TDP via the BIOS.) There, you also find a performance comparison with a Raspberry Pi 4 and Pi 5: the N100 is twice as fast as a Pi 5 and four times faster than a Pi 4, and the N305 is almost twice as fast as an N100.

Wacom Tablet on Linux with dual/multi-screen setup

Helmut Neukirchen, 3. February 2021

Wacom tablets, including digitisers in screens, should be supported out-of-the-box with Linux.

I have a dual-screen setup and a Wacom tablet. As one screen is 4K UHD and the other FHD, this is too much screen real estate for my shaky hand on the tiny Wacom tablet that I have (the tablet has 2540 lpi resolution, so this is not a restriction of the tablet, but the blame goes to me). Therefore, I want to restrict the tablet use to the FHD screen only in order to get calmer pen usage.

On my Debian KDE system, this can be done by two means:

  • Install the KDE Wacom configuration tool via the Debian package kde-config-tablet. In that tool, either set the mapping of the Wacom tablet to a specific screen (do not forget: you need to click "OK" also on the initial overview screen, i.e. finish the whole settings process), or use the pre-defined keyboard shortcuts to set a tablet-screen mapping.
  • Use the command line tool xsetwacom: get the screen name via xrandr (e.g. eDP for my laptop screen) and get the name of the Wacom tablet via xsetwacom --list. Note that multiple entries are listed there for the same tablet: take care to use the one that ends with stylus. In my case, I use
    xsetwacom set "Wacom Intuos S Pen stylus" MapToOutput eDP

In online teaching, I enjoy drawing on slides. E.g. for drawing on PDFs, I use either KDE's Okular PDF viewer (in presentation mode, move to the upper screen edge to get a menu with pen colors) or xournal, or rather its fork xournal++/xournalpp. While I can run PowerPoint on Linux with Wine, exactly the presentation-mode drawing function does not work (I see the dot representing where the pen would draw, but drawing does not leave a trace).

For using the pen in Gimp, it is important to understand that the Gimp drawing tool that shall be mapped to the pen needs to be selected from the Toolbox window with the pen itself! (The pen and the mouse have different tools associated with them, and Gimp distinguishes this based on which input device is used to select the tool. Trying to select the tool with the mouse and then using the pen will only lead to the pen cropping (which is probably the default behaviour): this mode is also displayed at the bottom: "click-drag to draw a crop rectangle".)

Raspberry Pi 4: boot from USB with Ubuntu, ZFS

Helmut Neukirchen, 18. November 2020

First steps

When I was new to Raspberry Pi, I followed these German instructions.

Running Raspberry Pi on SD card and syslog wear

The default Raspberry Pi syslog logs to the normal file system, i.e. the SD card (if not using USB).
In order to log to RAM and write it to the file system only when needed, you can use Log2RAM, either by installing it manually or via apt. Alternatively (I did not try it myself), use Zram-config.

But you anyway might want to use USB storage instead of SD card.

Booting Raspberry Pi from USB

Raspberry Pi now supports booting from USB (installing the latest firmware does not hurt: I did this by booting Raspberry Pi OS from an SD card. Note that in contrast to Raspberry Pi <4, the Raspberry Pi 4 actually stores the firmware in an EEPROM, not just as a file that the GPU firmware of Raspberry Pi <4 loads from the FAT boot partition at every boot). I then dd'ed the image from SD to USB mass storage.
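
The dd step looked roughly like this (the device names are examples -- double-check with lsblk which device is the SD card and which is the USB drive before overwriting anything):

# assuming the SD card is /dev/mmcblk0 and the USB drive is /dev/sda
sudo dd if=/dev/mmcblk0 of=/dev/sda bs=4M status=progress conv=fsync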

Booting Ubuntu from USB

If you want to go for Ubuntu (in contrast to 64-bit Raspberry Pi OS, which was only available as a beta, Ubuntu is available as a stable 64-bit release and supports ZFS), the following is relevant:

Ubuntu for Raspberry Pi uses U-Boot as bootloader, however U-Boot does not support booting from USB on Raspberry Pi, only from SD card, i.e. while Ubuntu 20.04 LTS works out of the box when booting from SD card, when I dd'ed the SD card onto a USB drive, booting failed because U-Boot could not load the kernel via USB.

Luckily, the bootloader that is part of the Raspberry Pi firmware can boot Ubuntu without U-Boot. As always, a FAT-formatted boot partition containing a couple of files is needed in order to boot.

While U-Boot can load compressed (vmlinuz) kernel images and can load the kernel from an ext4 root filesystem, the Raspberry Pi bootloader firmware can only load uncompressed (vmlinux) kernel images and only from the FAT-based boot filesystem.

While the Ubuntu 20.04 LTS ARM64 image has a kernel on the FAT-based boot partition, it is unfortunately compressed (because the assumed U-Boot would be able to deal with it). Hence, you need to uncompress the kernel manually to allow the Raspberry Pi firmware bootloader to load and start the kernel.
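
Decompressing can be done, e.g., as follows (a sketch, assuming the FAT boot partition is mounted at /boot/firmware and the kernel is gzip-compressed):

cd /boot/firmware
# keep the compressed kernel and create an uncompressed copy next to it
zcat vmlinuz > vmlinux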

In addition, it seems that the .dat and .elf files that are part of the bootstrapping need to be the most recent ones.

Hence, I downloaded the whole Raspberry Pi firmware from GitHub (via the green Code button) and extracted the .dat and .elf files from the boot directory.

Finally, you need to change config.txt by adding
kernel=vmlinux
initramfs initrd.img followkernel
(in the [all] section) and by commenting out the entries in the [pi4] section.
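
The relevant fragment of config.txt would then look roughly like this (a sketch -- the exact content of the stock [pi4] section may differ):

[pi4]
# comment out the stock entry that loads U-Boot, e.g.:
#kernel=uboot_rpi_4.bin
[all]
kernel=vmlinux
initramfs initrd.img followkernel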

That should be enough to boot Ubuntu from USB. My above steps are essentially based on https://eugenegrechko.com/blog/USB-Boot-Ubuntu-Server-20.04-on-Raspberry-Pi-4 where you find step-by-step instructions.

Note that when you later do a kernel update inside the booted Ubuntu, it might only update the kernel image in the ext4 root partition -- not in the FAT boot partition. In this case, you need to copy the kernel over, and you also need to decompress it again.
(It should be possible to automate this; to get an idea, see: https://krdesigns.com/articles/Boot-raspbian-ubuntu-20.04-official-from-SSD-without-microsd or https://medium.com/@zsmahi/make-ubuntu-server-20-04-boot-from-an-ssd-on-raspberry-pi-4-33f15c66acd4.)
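
Done manually, this amounts to something like the following (a sketch, assuming the FAT boot partition is mounted at /boot/firmware and the updated kernel and initrd live in /boot on the ext4 root partition -- adjust the paths to your setup):

cp /boot/vmlinuz /boot/firmware/vmlinuz
cp /boot/initrd.img /boot/firmware/initrd.img
# the Raspberry Pi firmware bootloader needs an uncompressed kernel
zcat /boot/firmware/vmlinuz > /boot/firmware/vmlinux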

If you want to use the system headless (in fact, connecting a keyboard did not produce any input in my case), you can configure the network settings via the FAT-based boot partition: https://ubuntu.com/tutorials/how-to-install-ubuntu-on-your-raspberry-pi#3-wifi-or-ethernet

ZFS

Work in progress...

The ultimate goal is to have two drives as ZFS mirrors (RAID1) connected via USB.

(Be aware: 1. if the USB adapter claims that data has been written which in fact has not yet been written, ZFS may fail -- just like probably any journaling-based file system; 2. USB is not as stable as SATA, so an ODROID-HV4 or a Raspberry Pi 4 compute module with PCIe-based SATA might be better, or a Helios64, which might in the future even have ECC RAM. But at least Raspberry Pi has the better ecosystem, and ZFS has some memory debug flag that computes checksums for its RAM buffers.)

While https://www.nasbeery.de/ has some very easy script to use ZFS, it still assumes an SD card for the boot and root filesystem. It would of course be better to have everything on the USB drive (and even using RAID).

As the Raspberry Pi bootloader can only access a FAT-based boot partition, we still need a FAT-based boot partition on the USB drive. According to documentation, if the first probed USB drive does not have a boot partition, the next drive will be probed. So, it should be possible to have some sort of redundancy here (but we need to manually take care that both FAT-based boot partitions are synced after each kernel update to have some sort of RAID1).
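
Such syncing could be, e.g., a simple rsync after each kernel update (a sketch, assuming the primary boot partition is mounted at /boot/firmware and the second drive's boot partition at /mnt/boot2):

rsync -a --delete /boot/firmware/ /mnt/boot2/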

As Ubuntu should be able to have the root partition on ZFS (once the Raspberry Pi firmware bootloader has loaded the kernel from the FAT-based boot partition), it should be possible to use ZFS for the root partition (what size? 50 GB?). The remainder could then be a ZFS data pool.

Note that if one of the RAID1 drives fails and needs to be replaced, the new drive might have slightly fewer sectors, so it is wise not to use all available space for the ZFS data pool. If we use a swap partition in addition anyway, we could use it to take up the remaining space (and then have a slightly smaller swap partition on the replacement drive if that drive is smaller). The swap partition should not be on ZFS but a raw swap partition: Linux can either use multiple swap partitions, i.e. from all of the RAID drives -- or only use one and keep the others unused.

This means we still partition the mass storage instead of letting ZFS use it exclusively. The Raspberry Pi bootloader understands only the MBR partition format -- this might limit the drive size to 2 TB.

The following web pages cover ZFS as root:

Compiling ZFS for Raspberry Pi OS (first part), switching to ZFS root (second part), including initramfs and kernel cmdline.txt

https://github.com/jrcichra/rpi-zfs-root

https://www.reddit.com/r/zfs/comments/ekl4e1/ubuntu_with_zfs_on_raspberry_pi_4/

Update

I no longer use the Raspberry Pi for this, but am now trying PC hardware with Proxmox, which comes with ZFS.

But currently, I have the problem that while the BIOS and Linux detect the two SSDs, Proxmox complains that it can find only one.

  • When booting with a USB key containing the Proxmox installer and the Crucial SSD disconnected and only the SanDisk SSD connected via the SATA to mSATA adapter, the USB key becomes /dev/sdb and the SanDisk is detected by Linux in dmesg as ata1 (with Features: Dev-Sleep) and as /dev/sda, but Proxmox says that it cannot find a disk at all.
  • When using the Crucial SSD on that mSATA port with the USB key and the SanDisk SSD not connected, Linux shows the Crucial SSD in dmesg as ata1 (with Features: Trust Dev-Sleep) and as /dev/sda and Proxmox allows to select ZFS RAID0.
  • When using the Crucial SSD on that mSATA port (ata1) and the SanDisk SSD on the SATA port (ata2) and booting with the USB key, Linux shows the Crucial SSD in dmesg as ata1 (with Features: Trust Dev-Sleep) and as /dev/sda. Note that the Crucial SSD is shown by dmesg with 4096-byte physical blocks, whereas the SanDisk SSD does not have this line, but says Preferred minimum I/O size 512 bytes, whereas the Crucial SSD says Preferred minimum I/O size 4096 bytes and supports TCG Opal. Proxmox allows to select ZFS RAID0, but not RAID1 (= mirror). As in list item 1 above, Proxmox does not seem to like the SanDisk SSD. Notably, the SanDisk SSD (/dev/sdb) is shown at mountpoint /cdrom -- not sure why (but unmounting it did not really make Proxmox use it).
  • Attaching the SanDisk SSD via a SATA-to-USB adapter makes Linux show it as sdb (the Crucial SSD as usual as sda and the USB key as sdc). Still, Proxmox does not like it.
  • Note that the SMART info of the SanDisk SSD says Unrecoverable ECC count (Count of unrecoverable ECC errors): 15 sectors (my guess is that this is the reason why Proxmox does not like it!).
  • I have now added a Samsung SSD in addition to the Crucial, and now Proxmox is happy. The Samsung SSD is only 500 GB, though. Buy a second Crucial MX500 2TB (CT2000MX500SSD)...

But now, Proxmox complains that the disks do not have the same size (on the command line, probably some --force/-f would be sufficient): "Warning: mirrored disks must have same size Please fix ZFS setup first".
But the trick is to install it in ZFS RAID 0 mode on the smaller disk only. (What does not work, i.e. still gives the above complaint, is setting the "hdsize" parameter during the installation.)

Best practice: as drive references for ZFS, use /dev/disk/by-id paths; see the sketch below.
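
For example, a mirrored data pool could be created roughly like this (a sketch: the pool name, partition numbers and by-id names are placeholders for your actual drives):

zpool create -o ashift=12 datapool mirror \
  /dev/disk/by-id/ata-CT2000MX500SSD1_XXXXXXXXXXXX-part3 \
  /dev/disk/by-id/ata-Samsung_SSD_860_EVO_500GB_XXXXXXXXXXXX-part3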

USB adapters

I used USB-to-SATA adapters with the ASMedia ASM1153E chip, as these work with the Raspberry Pi and UASP.
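
Whether UASP is actually used can be checked, e.g., with lsusb:

# a bridge running in UASP mode shows "Driver=uas", otherwise "Driver=usb-storage"
lsusb -t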

TODO: when checking dmesg after connecting a USB drive, it spat out some warnings that could be ignored.

TODO: performance tuning

https://www.jeffgeerling.com/blog/2020/raspberry-pi-usb-boot-uasp-trim-and-performance

https://www.jeffgeerling.com/blog/2020/enabling-trim-on-external-ssd-on-raspberry-pi

However, that in fact made everything slower (TODO: speed testing via hdparm and dd, see the sketch below).
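
For the speed testing, something along these lines should do (the test file path is an example -- write the test file to a file system on the USB SSD):

# rough read benchmark (cache reads and buffered device reads)
sudo hdparm -tT /dev/sda
# rough write benchmark: 1 GiB, bypassing the page cache
dd if=/dev/zero of=/mnt/ssd/testfile bs=1M count=1024 oflag=direct status=progress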

Debian 10 Buster Linux on Thinkpad T14 AMD

Helmut Neukirchen, 12. November 2020

Update: for having the side button of my Logitech mice act as middle mouse button, I used some X.Org rules in the past, but now I simply use Solaar.

Update: the WiFi did not work while an Ethernet cable was connected. In fact, that is a power-saving feature: disabling Wireless Auto Disconnection in the BIOS settings solved the problem.

Update: the text below refers to Debian 10 "buster". Now that Debian 11 "bullseye" has been released, which has Linux kernel 5.10, things should work out of the box. I just did a dist-upgrade from buster to bullseye, which was the smoothest dist-upgrade that I ever had, i.e. no problems at all -- except that I had to do an apt-get install linux-image-amd64 to get the standard bullseye kernel (my kernel manually installed from buster confused VirtualBox, which complained about missing matching kernel header source files).

The Thinkpad T14 AMD is a very nice machine and everything works with Linux (I did not test the fingerprint reader and infrared camera, though). I opted for the T14 over the slimmer T14s, because the T14s has no full sized Ethernet port and it seems that (due to being slimmer) the cooling is not as good as with the T14.

Kernel 5.9 (or later) is a must to support all the hardware of Thinkpad T14 AMD, but the 4.x kernel used by the installer of Debian Buster is sufficient to do the installation, except that Wifi does not work, so you need an Ethernet cable connection during installation.

To get the 5.9 kernel in Debian Buster, it is, at the time of writing, available from Sid (there are various ways to use packages from Sid in Stable -- in the simplest case, download the .debs manually), and the following packages are needed:

  • linux-image-5.9*-amd64*
  • firmware-linux-free*
  • firmware-linux-nonfree*
  • firmware-misc-nonfree*
  • firmware-amd-graphics*
  • amd64-microcode* (Checking that the UEFI/BIOS is up to date before applying a microcode upgrade is recommended. But the update might anyway be blocked via /etc/modprobe.d/amd64-microcode-blacklist.conf)

In principle, the kernel header files are also nice to have, but it may involve updating to a completely new GCC from Sid:

  • linux-headers-5.9*-amd64*
  • linux-headers-5.9*-common*

Note that you will not get automatically security updates if you update the kernel manually. You may want to give APT pinning a try for having only the kernel from Sid.

Update: a buster backport of a more recent kernel is available now; install it via apt-get install -t buster-backports linux-image-amd64 (assuming you have backports configured). This also provides all the above packages, including the Linux headers.

Note that for the Intel AX200 WiFi to work, you also need the latest firmware-iwlwifi (the one from buster-backports is enough -- the one from Sid should not be necessary):
apt-get install --target-release=buster-backports firmware-iwlwifi (assuming that backports have been configured as an APT source). Initially, I had to download some files directly from Intel, but it seems that the buster-backports package now contains the missing files.

The graphics were straightforward with packages from Debian Stable:

  • xserver-xorg-video-amdgpu
  • firmware-amd-graphics*

I cannot remember whether I installed them explicitly or whether they are just installed because they are dependencies of the above AMDGPU package: I have a couple of mesa and DRM packages installed (see also Debian Howto on AMDGPU and the Debian Page on Video acceleration), e.g.:

  • libgl1-mesa-dri
  • libglx-mesa0
  • mesa-vulkan-drivers
  • mesa-va-drivers (for video decoding)
  • vdpau-driver-all (for video decoding)
  • libdrm-amdgpu

(The whole Linux graphics stack consists of more components; you may want to check.)

Check whether MESA is activated.
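
E.g., a quick way to check this is glxinfo from the mesa-utils package:

# should report the AMD GPU via the Mesa/AMDGPU stack rather than llvmpipe (software rendering)
glxinfo | grep -i "opengl renderer"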

Also, some BIOS/UEFI tweaking was needed, e.g. changing the sleep state from Windows to Linux mode (and if you use an unsigned kernel: disable Secure Boot).

There is one thing concerning suspend and resume, though: my Jabra USB headset stops working after resume -- manually unplugging and re-plugging the USB plug solves this problem, and remarkably, other USB devices (keyboard and mouse) work after resume. In the end, I wrote a script that removes and re-loads the USB module after resume and placed it as file /etc/pm/sleep.d/10-usbquirk.sh (do a chmod a+x). The content is as follows:

#!/bin/bash
# USB headset not working after suspend, so try reloading the USB module (other USB devices work, though)
case "${1}" in
hibernate)
;;

suspend)
;;

thaw)
rmmod xhci_hcd
sleep 0.5
modprobe xhci_hcd
;;

resume)
rmmod xhci_hcd
sleep 0.5
modprobe xhci_hcd
;;

*)
;;
esac

(Somehow the indentation got removed by WordPress -- but in fact indentation does not matter for the Bash script.)

I also have the UltraDock docking station: it uses the two USB-C ports of the T14 plus a third proprietary connector: AFAIK it is for the Ethernet, even though it seems that it does not simply mechanically extend the Ethernet port, but has an extra Ethernet chip built into the docking station. And I guess that one of the USB-C ports rather works in a mode where the video signals are transmitted directly instead of USB-C data, so that is different from a pure USB-C dock. And indeed, the dock seems to have its own MAC address (I did not figure out how to find out what the MAC address is -- I probably need to connect it to my smart switch). While the BIOS has a MAC passthrough setting (that is enabled), under Linux it does not get passed through.

Otherwise, the dock works without any extra configuration with Debian, including dock and undock (I use a 4k screen via one of the HDMI connectors of the UltraDock and a couple of USB A ports of the UltraDock).

I still dislike no longer having the bottom dock connector used by Lenovo in the past: you can no longer simply "throw" the laptop onto the dock, but now need significant force to insert the three connectors from the side, and I am not sure how many docking cycles the USB-C connectors will last from a mechanical point of view.

I also tried a 4K screen (Lenovo P32p) that has USB-C for power delivery and video and even serves as a USB hub as well (i.e. with one USB-C cable, it powers the laptop, transmits the video, and mouse and keyboard are attached to the screen; it even has Ethernet that then goes via the same, single USB-C cable). This works nicely and in fact replaces a dock. However, at the beginning I always needed to reboot the system in order to make the video work via USB-C. For some reason, this problem magically disappeared.

TODO: Check Thinkpad-specific packages (I currently do not have them installed and do not see any hardware support missing).

Update: There is now also Debian Wiki page on the Thinkpad T14.

Tahoma and Tahoma bold font in Wine/CrossOver

Helmut Neukirchen, 27. October 2016

Even if the free Microsoft Core fonts are installed, Tahoma is missing. A Microsoft knowledge base support entry offers it for download as Tahoma32.exe; however, this is a broken link. Hence, download the files contained therein (tahoma.ttf and tahomabd.ttf) from elsewhere (this seems to be legal, as Microsoft offered them to the public anyway), e.g. https://github.com/caarlos0/msfonts/tree/master/fonts

Copy the font files to the ~/.fonts directory and run fc-cache -fv.
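
In shell terms (assuming the downloaded .ttf files are in the current directory):

mkdir -p ~/.fonts
cp tahoma.ttf tahomabd.ttf ~/.fonts/
# rebuild the font cache
fc-cache -fv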

Some notes on using a Spark cluster

Helmut Neukirchen, 18. August 2016

The following notes are mainly for my personal use referring to the Spark 1.6/YARN cluster that I access, but maybe they are helpful for you as well...

Upload to HDFS

By default (i.e. used implicitly by all HDFS operations), HDFS paths are relative to your HDFS home directory: it needs to be created first by the administrator!

While piping through SSH should work (cat test.txt | ssh username@masternode "hdfs dfs -put - hadoopFoldername/"), it is reported to be slow -- I never checked this, but as I anyway used rather small data, I instead did an scp to the local file system of the master node and afterwards used an hdfs put:
scp localFile username@masternode:
hdfs dfs -put twitterSmall.csv Twitter

Concatenate HDFS files (all inside an HDFS directory) and store in local file system (without sorting)

hdfs dfs -getmerge HdfFolderContainingSplitResultFiles LocalFileToBeCreated

Note that Spark does not overwrite output files in HDFS by default. Either take care when you re-run jobs that the output files have been (re-)moved, or you have to allow overwriting in the Spark conf of your program: conf.set("spark.hadoop.validateOutputSpecs","false")

Debugging

  1. See http://spark.apache.org/docs/latest/running-on-yarn.html
  2. Use spark-submit --verbose
  3. If executor processes are killed, this is mainly due to insufficient RAM (garbage collection takes too long, thus timeouts occur, or simple out-of-memory (OOM) exceptions). While in this case you only see "exit code 143" in the log of the driver on the spark-submit console, the details need to be found in the logs of the nodes/executors. This may not be possible via the Web UI due to executor nodes being firewalled -- in this case use:
    yarn logs -applicationId application_1470137500465_0147
    (The application ID is to be taken from the ID column in the cluster Web UI. This works only for completed runs, not the current run.) In these logs, you can then find / search for java.lang.OutOfMemoryError: GC overhead limit exceeded or java.lang.OutOfMemoryError: Java heap space

Performance tuning

  1. Note that due to the HDFS block size of 128 MB, by default partitions of this size are created when reading data. To enforce a higher number of partitions/higher parallelism, use the optional numberOfPartitions parameter already at the file-read stage (that parameter is also supported by many other RDD-creating operations).
  2. Some introduction https://www.mapr.com/blog/resource-allocation-configuration-spark-yarn
    http://blog.cloudera.com/blog/2015/03/how-to-tune-your-apache-spark-jobs-part-2/
    (in particular: more than 5 cores per executor is said to lead to bad HDFS throughput. Note that “executor” is not identical to “node”, thus instead of running one executor with 24 cores on one node, rather run 4 executors with 5 cores on each node or 8 executors with 3 cores! Note that then, however, the overall memory of a node needs to be divided by the number of executors per node, e.g. 5 GB per executor with 8 executors per node on a 40 GB RAM node.)
  3. Config for RAM-intensive jobs (=1 core per executor only & 1 core per node only, using 40GB heap space and 2GB overhead for Spark/Yarn itself => on each of the 38 nodes only one core is used that thus can make use of all available RAM), in addition increase timeouts and message size:
    spark-submit --conf "spark.network.timeout=300s" --conf "spark.akka.frameSize=2000" --driver-memory 30g --num-executors 38 --executor-cores 1 --conf "yarn.nodemanager.resource.cpu-vcores=1" --executor-memory 40g --conf "spark.yarn.executor.memoryOverhead=2000" --conf "spark.driver.cores=4" --conf "spark.driver.maxResultSize=0"
    (Note: not sure about the driver memory and cores: this seems to have no influence -- is it too late to set it here?)

CORBA remote object IORs in a NAT environment

Helmut Neukirchen, 23. October 2015

When running CORBA remote objects in a NAT environment (assuming Internet protocols are used), the IIOP IOR remote object references that are created (and registered at some name service) will contain the private IP address (to convince yourself: dump the IOR as a string and paste that string into http://www2.parc.com/istl/projects/ILU/parseIOR/). As a result, when a client outside the NAT environment looks up the IOR, it will get one containing the private IP, and access to the remote object of course does not work. For the Oracle OpenJDK CORBA implementation, the following command line parameter needs to be provided to both the ORB and the JVM running at the remote object side:
-ORBServerHost PublicIPofServer

Concerning the ports:
By default, the Oracle OpenJDK is using TCP port 1049 for the activation service. You can change this port via the ORB command line parameter -port.

The port used for the CORBA Naming Service (which is automatically provided by the OpenJDK Java ORB) depends on whether orbd is started as root or as an ordinary user: when started as root, TCP port 900 is used, otherwise TCP port 1049 (because ports lower than 1024 can only be bound by root). Unfortunately, TCP port 1049 is also used by the activation service as described above. Hence, a port collision (= exceptions) will occur (what a stupid design)!
In this case, let the ORB start the Naming Service e.g. on TCP port 1050:
orbd -ORBInitialPort 1050

When changing the Naming Service port from the default 900, client and server JVMs that use that Naming Service also need to know about the changed Naming Service port number: Start the JVMs with additional parameter:
java -ORBInitialPort 1050

When running client and server on different hosts, take care that they use the same Naming Service. Assuming that the Naming Service running on the server's host is used: the server will anyway use this local Naming Service, but the client needs to know the hostname of the server's Naming Service: start the client JVM with additional parameter:
java -ORBInitialHost nameserverhost

Note that in addition to these standard services (Activation and Naming), CORBA uses by default dynamically assigned TCP ports (=expect difficulties with firewalls) for all further objects such as your own remote objects that are contained in the IORs. However, you can enforce a port to be used by a servant created within a JVM using the additional parameter:
java -ORBServerPort port
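
Putting the pieces together, a hypothetical end-to-end invocation could look like this (the class names, the public IP and the object port are placeholders; the -ORB... arguments are passed on to the ORB as in the examples above):

# on the server host behind NAT (public address 203.0.113.10 is a placeholder):
orbd -ORBInitialPort 1050 &
java -ORBInitialPort 1050 -ORBServerHost 203.0.113.10 -ORBServerPort 11000 MyServer
# on a client outside the NAT:
java -ORBInitialPort 1050 -ORBInitialHost nameserverhost MyClient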

Debian Linux on Thinkpad X250

Helmut Neukirchen, 4. March 2015

What I did to install Debian Linux (Jessie) on Thinkpad X250:

Booting from a USB device (to install Debian) was somewhat of a challenge: in particular, USB 3 needed to be disabled in the BIOS (and maybe some more BIOS tweaks that I cannot remember anymore).

To make the Trackpoint keys work:

In the BIOS, disable the Touchpad (anyway a good idea, to prevent accidental touches there).

Added file /etc/modprobe.d/x250.conf with content
options psmouse proto=imps

Added file /usr/share/X11/xorg.conf.d/20-thinkpad.conf with content (works only if Touchpad is disabled in BIOS)

Section "InputClass"
Identifier "Trackpoint Wheel Emulation"
MatchProduct "PPS/2 IBM TrackPoint|DualPoint Stick|Synaptics Inc. Composite TouchPad / TrackPoint|ThinkPad USB Keyboard with TrackPoint|USB Trackpoint pointing device|Composite TouchPad / TrackPoint|PS/2 Synaptics TouchPad"
MatchDevicePath "/dev/input/event*"
Option "EmulateWheel" "true"
Option "EmulateWheelButton" "2"
Option "Emulate3Buttons" "false"
Option "XAxisMapping" "6 7"
Option "YAxisMapping" "4 5"
EndSection

Also, to make the side button of my Logitech USB mouse act as middle button, I added the file 20-logitech-mouse-side-button.conf with content

Section "InputClass"
Identifier "Logitech mouse side button remap"
MatchProduct "Logitech USB Receiver"
MatchDevicePath "/dev/input/event*"
Option "ButtonMapping" "1 0 3 4 5 6 7 2 9 10"
EndSection

(Still, the Logitech mouse sometimes stops working completely; then, unplugging the USB receiver from the docking station helps -- I still need to investigate that. Update: it seems that plugging the USB receiver into another USB port (= other USB type) helps.)

I also sometimes experience that my external Dell monitor, connected via DP cable and my dock, blanks for half a second: a firmware update of the dock is needed, but it is only available as an MS Windows executable. Any hints on how to do this via Linux are welcome! (A BIOS update via Linux is possible and worked.)
I do not have that problem when using the DVI-D port and cable of the dock -- however, for 4K resolution, DP is better than DVI!

I also had an old 1440x900 display that did not report its native resolution when connected via VGA (which, btw., reports as DP2). While I could probably add some modeline to some X config file, as I last did probably 10 years ago, I did the following:

cvt 1440 900
Then pasted the modeline generated by cvt:
xrandr --output DP2 --newmode "1440x900_60.00" 106.50 1440 1528 1672 1904 900 903 909 934 -hsync +vsync
xrandr --addmode DP2 "1440x900_60.00"
xrandr --output DP2 --mode 1440x900_60.00

Also, my other display sometimes does not get recognised:

cvt 1920 1080
Then pasted the modeline generated by cvt:
xrandr --output DP2 --newmode "1920x1080_60.00" 173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsync
xrandr --addmode DP2 "1920x1080_60.00"
xrandr --output DP2 --mode 1920x1080_60.00

For getting cloned display output with the KDE "Display and Monitor" system settings pane, the two screens have to be dragged onto each other. However, I like
the old "Size & Orientation" pane more, which can be obtained by installing the kde-workspace-randr package.

Just as a reminder for me: to use Gutenprint for the photo printer: first create in CUPS (e.g. via the web interface) an entry for the photo printer so that the printer gets its own queue. Then, in Gimp, this queue can be used when setting up the photo printer there. In case the Print with Gutenprint menu entry does not show up in Gimp, an extra package needs to be installed: IIRC, for Debian it is the package gimp-gutenprint.

Update 27.5.2024: With Debian Bookworm, I could not detect the printer in the CUPS web interface. Installing the package printer-driver-gutenprint made the printer show up in the CUPS administration interface.

But then, I got an error message about an incorrect paper format. I then compiled the latest version of Gutenprint manually -- but this did not compile the Gimp plugin, so I first had to install libgimp2.0-dev.

Still, that did not work, so I had to downgrade the packages to the Gutenprint version prior to the regression:


The issue is resolved by removing these packages and manually installing the packages from Jammy:

libgutenprint-common/jammy,jammy,now 5.3.3-9 all
libgutenprint9/jammy,now 5.3.3-9 amd64
printer-driver-gutenprint/jammy,now 5.3.3-9

For version pinning, create a file in /etc/apt/preferences.d with contents:


Package: libgutenprint-common
Pin: version 5.3.3-5
Pin-Priority: 1000
Explanation: Newer versions in Debian have a regression https://sourceforge.net/p/gimp-print/discussion/4359/thread/8fca54c027/

Package: libgutenprint9
Pin: version 5.3.3-5
Pin-Priority: 1000
Explanation: Newer versions in Debian have a regression https://sourceforge.net/p/gimp-print/discussion/4359/thread/8fca54c027/

Package: printer-driver-gutenprint
Pin: version 5.3.3-5
Pin-Priority: 1000
Explanation: Newer versions in Debian have a regression https://sourceforge.net/p/gimp-print/discussion/4359/thread/8fca54c027/
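
To check which versions APT would now install, e.g.:

# shows the installation candidates that APT selects after pinning
apt-cache policy libgutenprint-common libgutenprint9 printer-driver-gutenprint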

Once Debian has versions as new as 5.3.4-2023-08-23 (e.g. in sid), these packages can be used.