Category: Tech

Zoom Panopto integration

Helmut Neukirchen, 9. November 2021

Panopto can tell Zoom to copy Zoom meeting cloud recordings to Panopto. You can configure this automatic import/export by clicking on your user name in the very upper right corner of https://rec.hi.is/ and then selecting "User Settings".

University of Iceland is running Panopto with at least two different storage spaces: the old storage space used when logged in to Panopto via UGLA (for Panopto videos accessible via UGLA) and the new storage space used when logged in to Panopto via Canvas (for Panopto videos accessible via Canvas).

On https://rec.hi.is/, you can log out and log in again in the upper right corner to switch between these two spaces. But you cannot copy videos between these two spaces yourself -- the UTS help desk can do so, though.

For the Zoom integration, the problem is that recordings may end up in the wrong space: whichever storage space you logged into last sets the integration, i.e. tells Zoom where to store the videos of all future Zoom session recordings. So take care that your last log-in was into the intended storage space before a Zoom cloud recording starts. (Or ask the UTS help desk to fix it afterwards.)

Ice tea vs. IoT: LoRa

Helmut Neukirchen, 20. October 2021

Ice tea or IoT -- what do you prefer?

When I ordered the TTGO T-BEAM, I liked that it combines LoRa and GPS and even supports an 18650 battery (18650 cells with an internal protection circuit are somewhat longer, but still fit -- although very tightly), including a good charging chip to charge the Li-Ion cell -- not LiFePO -- via USB (USB can also be used to power the device without using the battery holder). The ublox NEO-6M GPS chip has a dedicated backup (super)capacitor (which looks like a coin cell battery) to buffer the GPS chip's RTC and almanac, but probably only for a few minutes.

Just when the delivery arrived, I found video #182 from Andreas Spiess, reporting that older TTGO designs had some design flaws: the 868-915 MHz versions have passive RF components (coils and capacitors to tune the frequency) that are not specific enough for the 868 MHz that we use here in Europe (some people even fixed that themselves) and the LoRa antenna could be better (all of Andreas Spiess' videos can only be recommended, including the LoRa ones). I was then happy to see that in newer designs, including the T-BEAM, the WiFi antenna is placed better and in fact, the T-BEAM even has a connector for an external WiFi antenna (which would need some minor soldering, though); also, the LoRa and GPS part is now shielded by a metal cage. I was relieved to find the more recent video #224 measuring the T-BEAM and other newer boards, judging the newer designs to be OK.

I already expected that a better GPS antenna might be needed (and the tiny original one is only fixed with some adhesive tape that does not hold very well).

In summary, the T-BEAM seems not to be that bad (even the passive components that are too generic for 868 MHz turn out to be OK), but many reports indicate that the power consumption is rather high (that whole thread is worth reading anyway). 10 mA seems to be the minimum possible even during deep sleep. Concerning the power consumption, there seems to be an issue with deep sleep. There is also a video on what is possible with ESP32 and deep sleep. Update: Meanwhile, a student did power measurements with the TTGO Lora32 (i.e. not the T-BEAM) as part of his M.Sc. thesis, and the failure to enter deep sleep is confirmed there as well.

Some people complained that they got only 900 m of range instead of kilometers. The comments for video #224 mention that an older library had a flaw concerning the transmit power, which led to a low transmission power in that video; according to the comments, this has by now been fixed at least in the LoRa library by Sandeep Mistry that can be found in the Arduino Library Manager. Update: Again in our M.Sc. thesis, we achieved 15.5 km range.

A display can also be connected, but to reduce power consumption, it might be better to make it removable by using a female header.

Andreas Spiess recommends in his videos a WeMos D1 ESP8266 and a Hope RFM95W LoRa module, for which even a PCB is available (likewise recommending the WeMos D1 as ESP8266 board) -- it does, however, need SMD soldering. Nexus by Ideetron has elsewhere been mentioned as a low-power solution, but it has only a small user base and thus lacks information -- and GPS can anyway be expected to be the big power consumer.
Concerning the LoRaWAN libraries, the MCCI one seems to be the only one that is actively maintained, and communication with The Things Network needs to save some state information (for joining via OTAA), which MCCI stores in RAM that is not retained during the deep sleep of ESPs. So for using OTAA, MCUs that do retain their RAM (i.e. newer ATMEL MCUs like in newer Arduinos) would be preferable; e.g., an Atmega 1284p together with a watchdog for waking up periodically has extremely low power consumption (0.5 μA in deep sleep) but lacks GPS. Other low-power designs provide even triple GNSS and an acceleration-detection watchdog. In addition to the ublox GNSS chips, there are some approaches that claim to reach lower power consumption by off-loading the GNSS solver processing via LoRa to some external cloud server infrastructure or by doing extreme A-GPS data compression for LoRa transmission from a cloud.

The really cool thing is that even satellites serve as LoRa repeaters (if there is a clear line of sight, LoRa has a theoretical range of 1300 kilometers, thus easily reaching low earth orbit satellites). This way, sensors that have no LoRa connection to a station on the Earth can still reach a LoRa repeater in the sky, which forwards their messages back to Earth. (But you need an amateur radio license for the 70 cm frequency band used: 435 MHz / 436 MHz up- and downlink.)

I also got two TTGO Lora32 v1.6.1 that have LoRa, a card reader, and a tiny display on the back, but no GPS. On one of them, the WiFi antenna was already loose when unpacking (see the 3D sheet metal in the photo below). I need to check how easy it is to solder it back on (or whether a hot air rework station is needed instead) -- or should I use this as an opportunity to add an SMA/UFL connector? (There is also a UFL antenna connector, but since it is so close to the LoRa SMA antenna connector, I guess that UFL connector is for LoRa as well -- usable after desoldering some 0 Ohm SMD resistor and creating a solder bridge/reusing that 0 Ohm SMD resistor.)
Even though the TTGO Lora32 comes with a cable to connect a battery, TTGO Lora32 version v1.6 had a fire issue where the battery can explode. I checked the schematics: my v1.6.1 has this issue fixed, and the TTGO T-BEAM uses a different charging IC anyway, which is claimed to be pretty good.

Also, double check the pinout: some complain that the pinout provided by LilyGO can be wrong.

Depending on the application, I might use LoRa for device-to-device communication, or LoRaWAN via The Things Network, which has coverage in Reykjavík but fair-use limits, e.g. 10 messages to the device per day; these could be avoided by setting up my own private LoRaWAN using ChirpStack.
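
As a rough sketch of the latter (assuming Docker and Docker Compose are installed, and assuming the chirpstack-docker quick-start repository -- repository name and layout are an assumption, check before use):

git clone https://github.com/chirpstack/chirpstack-docker.git
cd chirpstack-docker
docker-compose up -d   # brings up ChirpStack and its dependencies with the default configuration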

Talking about lora (a popular name for parrots, as the Spanish word for parrot is loro): did I mention already that the Computer Science department has moved and already has a new visitor...?

DIY DVB-T/DVB-T2 indoor sleeve antenna made out of a coax antenna cable

Helmut Neukirchen, 13. October 2021

As the DVB-T transmitter has been moved within Reykjavík, I had to adjust my indoor antenna, which is simply built by turning a coax antenna cable into a half-wave dipole antenna (essentially, a variant of a sleeve antenna): the outer insulation of the coax cable was removed so that the part with the inner wire has a lambda/4 length, and the left-over shield was peeled and turned inside out over the insulation so that it also has lambda/4 length (in sum: lambda/2). The aluminum foil that was part of the shielding was removed and finally, the inner insulation was removed so that the inner wire remains totally uncovered. Take care that remainders of the shield do not touch the inner wire.

For the details, including the calculations, see: http://www.vdr-wiki.de/wiki/index.php/DVB-T_Antennen (in German, but the calculations work in any language -- note that they use a correction factor of 0.95 for the length of the shield and 0.97 * lambda/4 for the length of the inner wire -- but, well, the antenna needs to cover some frequency range, so these corrections probably do not matter that much).

More info on the transmitters in Iceland can be found at https://vodafone.is/sjonvarp/sjonvarpsthjonusta/thjonustusvaedi/ (see the map at the bottom). The transmitter operated by Vodafone on Úlfarsfell broadcasts on three UHF channels with 8 MHz bandwidth:

  • Channel 26 (514 MHz center frequency): RÚV HD (DVB-T2), RÚV 2 HD, BBC Brit, DR1, Food Network, Hringbraut, N4, National Geographic, Rás 1, Rás 2, Rondo (the latter are not TV, but radio)
  • Channel 27 (522 MHz center frequency): RÚV (DVB-T only), Stöð 2, Stöð 2 Bíó, Stöð 2 Fjölskylda, Stöð 2 Sport, Stöð 2 Sport 2, Rás 1, Rás 2, Bylgjan, Fm957, Léttbylgjan, Xið
  • Channel 28 (530 MHz center frequency): Stöð 2 Golf, Stöð 2 Sport 3, Stöð 2 Sport 4, Animal Planet, Discovery.

Using 522 MHz, lambda/4 is 14.36 cm, which I used for the above DIY antenna.
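
For reference, the underlying arithmetic (without the correction factors) can be reproduced with bc:

echo "scale=5; 299792458 / 522000000 / 4" | bc   # speed of light / frequency / 4: prints .14357 m, i.e. lambda/4 = 14.36 cm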

With the older transmitter location, where I had an unblocked line of sight, reception yielded almost 100% signal strength and signal quality, but with the new location of the transmitter on Mt. Úlfarsfell, my reception got really bad (there is a hill and high buildings in the line of sight) and the signal strength is even fluctuating, which might be explained by the weather, e.g. rain can be expected to weaken the signal strength.

In addition to the above programmes, my TV receives a far stronger DVB-T signal on channel 41 (634 MHz -- which means the calculated lambda/4 does not fit perfectly, still the received signal strength is close to 100%), which must be a transmitter other than the Vodafone one (it broadcasts only missionary programmes anyway).

Eclipse, modular projects and JUnit

Helmut Neukirchen, 21. September 2021

I (and many others) always had problems making JUnit (as added automatically by Eclipse when creating JUnit test cases) work with modular projects, i.e. projects that use module-info.java files to define dependencies. Finally, I found solutions:

  • Let the new project wizard not create the module-info.java file -- deleting it afterwards might not be enough, as Eclipse has already made some modifications to the module path settings (OK, trivial) or
  • Choose Java ≤8 in settings (i.e. module-info.java ignored -- again: trivial) or
  • Apply quick-fixes: in the class containing your JUnit test cases, hover over the org.junit.jupiter.api import and select the quick-fix: “Add ‘requires […]’ to module-info.java”. Then, in module-info.java, hover with the mouse over the squiggly line (the important point is: clicking on the light bulb does not offer any quick-fix, so you need to hover) and do: “Move classpath entry ‘JUnit5’ to modulepath”. This should fix it (see the sketch of a resulting module-info.java after this list)! or
  • Create an Eclipse project with an extra src folder (e.g. src-test, or use the Maven default structure) that has its own output folder (via “Allow output folders for source folders”; e.g. bin-test, or use the Maven default structure) and that has “Contains test sources” toggled to “Yes” (in project properties - Java Build Path - Source). The test src folder should then have a more grey-ish icon. Either do this with the New project wizard, or afterwards using the project properties. As a result, JUnit is then not part of the modular project anymore. (This also has the advantage that test code is better separated.)
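
For illustration, a resulting module-info.java (from the quick-fix route above) could look like the following sketch -- module and package names are made up; to my understanding, the opens directive may additionally be needed so that the JUnit platform can access the test classes via reflection:

module com.example.myproject {
    requires org.junit.jupiter.api;
    // Allow the JUnit platform to discover and run the tests via reflection:
    opens com.example.myproject to org.junit.platform.commons;
}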

Power consumption of Raspberry Pi 4 versus Intel J4105 system

Helmut Neukirchen, 7. June 2021

While I intended to use a Raspberry Pi 4 as a small server, I also ordered from China a small system (a BEBEPC, comparable to the Qotom mini PCs -- while Qotom mini PCs are slightly better documented, they typically have less powerful CPUs: even though they have Core i3 CPUs, these are so old that a more recent Celeron CPU is faster) based on an Intel J4105 CPU (= TDP of 10 W, 4 cores, 1.5 GHz base frequency, 2.5 GHz burst frequency), which has the advantage over the Raspi of native SATA ports (one standard SATA connector with 5 V power supply and one mSATA connector carrying 3.3 V power supply -- I ordered an mSATA to SATA adapter and a 5 V SATA power splitter cable to be able to have a RAID system of two SATA SSDs, as mSATA SSDs are only available in smaller sizes. But note that the SATA ports are only 3 Gbit/s, i.e. 300 MB/s, which means that a 500 MB/s SSD is already overkill). However, the biggest advantage is that it is (obviously) able to run Intel-only code, in particular Virtual Machine images or containers only available for Intel, e.g., using Proxmox VE.

Both systems come with 8 GB LPDDR4 RAM, but for the J4105, even 16 GB are possible (see below).

Both systems can be passively cooled: for the Raspi, I used a cooling case from https://www.coolingcases.com/ -- it cools well, but the metal affects the range of the onboard WiFi (not that relevant for a server). The J4105 came as well with a case that allows passive cooling -- while it is still tiny for a PC, it has approx. 4 times the volume of the Raspi.

As you can see in the above photos, squeezing in two SATA SSDs (for RAID) was a challenge, and I had to bend the connector cables very tightly (which might not be healthy for the connectors, cables, and PCBs). Therefore, I had considered removing the 2.5" case from the SATA SSDs and using only the bare PCBs. But as you can see in the photos below, there were pads glued to the SSD controller chip, and I am not sure whether these are thermal pads and the metal case is needed as a heat sink. Hence, I decided to keep using the case. (On the other hand, the second pad only marginally covers the two NAND chips, and the NAND chips on the second side of the PCB are not padded at all -- so maybe the pads are just mechanical, to prevent rattling.)

SSD case open (HDD for comparison)


SSD PCB with pads (are these thermal pads?)

The J4105 system certainly has more compute power than the Raspi's ARM CPU, so the remaining question is the power consumption. Hence, I did some tests and measurements using a cheap power meter that claims to have 2% precision. Both systems were connected via FullHD HDMI to a monitor.

Intel J4105 measurements

As I had not installed Linux yet at the beginning, it was running Windows 10, and idle refers to having only the built-in task manager running in the foreground (to display the clock frequency) and all the background services that Windows 10 has by default. CPU load was generated using a batch file containing an endless loop.
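
Such a batch file can be as simple as the following sketch (the file name is arbitrary; start one instance per core that shall be loaded):

rem busy.bat: endless loop that keeps one core busy
:loop
goto loop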

The J4105 clocks down to 0.78 GHz when idle and the power consumption of the whole system (with one mSATA and one SATA SSD) is then 3.8 W.

With 1 core being busy, it still clocks up to 2.4 GHz and consumes 7.2 W.

With 2 cores being busy, it still clocks up to 2.4 GHz and consumes 10.3 W.

With 3 cores being busy, it clocks up to 2.35 GHz and consumes between 11.8 W and 12.1 W.

With 4 cores being busy, it clocks up to 2.19 GHz and consumes between 11.4 W and 12.0 W. (So it seems the reduced clock saves power).

I ran it with 4 cores being busy for an hour, and the measurements did not change, i.e. no thermal throttling seems to have occurred (nor did the case get hot, so the passive cooling is really good -- or the contact between CPU and case is bad, but then thermal throttling would have been expected).

Raspberry Pi 4 measurements

I had OSMC with KODI running, but nothing else, i.e. the KODI UI being idle, but all the background services running. The latest firmware as of 4. June 2021 was used, storage was SDHC card only. CPU load was generated using the stress command.
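
For example, to keep all 4 cores busy for 10 minutes (assuming the stress package is installed):

stress --cpu 4 --timeout 600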

When idle, the Raspberry Pi 4 consumed 3.8 W to 4.0 W.

With 1 core being busy, it consumes 4.5 W.

With 2 cores being busy, it consumes 5.0 W.

With 3 cores being busy, it consumes between 5.4 W and 5.5 W.

With 4 cores being busy, it consumes 6.0 W.

The temperature with the cooling case from https://www.coolingcases.com/ was approx. 52° C (so it prevented the thermal throttling that would start at 80° C). Surprisingly, even in idle mode, the temperature was 40-42° C (the tiny case does feel much warmer than the bigger case of the Intel system -- so, it seems: size matters).
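
The temperature can be read on the Raspi itself using the firmware tool vcgencmd:

vcgencmd measure_temp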

Conclusions

In summary, the idle power consumption of both systems is comparable, and while the busy consumption is lower with the Raspberry Pi 4, it is of course less powerful than the J4105 system. For the J4105, I never observed the full 2.5 GHz burst clock rate (only 2.4 GHz). Even though the CPU TDP is 10 W, the whole system consumed up to 12.1 W (the RAM, the two SSDs, WiFi, HDMI output, external power supply, etc. probably also add their share -- during boot, I even saw 14.8 W).

Note that others suggest 2.7 W idle for the Raspi 4 (but that seems to require switching off a lot of I/O, e.g., HDMI etc. -- which I did not do, nor did I minimise background processes) or even as low as 2.1 W. On the other hand, many others report that they cannot (with either a fan or a heatsink) get the system cooler than 42° C in idle, so getting the Raspi warmer than the touch of your hand seems to be normal, but the J4105 system with the bigger case was considerably cooler.

It seems that the J4105 is a good 24/7 home server system, i.e. more powerful than the Raspi when needed, but still not consuming more power when idle. (A German c't article confirms this for a thin client that is also J4105-based.)

The ultimate passively cooled server with ECC RAM would be the ASRock Industrial iBOX-V2000M or iBOX-V2000V -- but these are not available for private users. But any ASRock motherboard in general, together with AMD Pro CPUs, should support ECC.

Some documentation on the BEBEPC system

RAM: 16 GB DIMMs supported

An even more powerful system based on the J4125 (= J4105 with higher clock) suggests that with dual-rank modules, even 16 GB per RAM module are possible, i.e. with two banks, even 32 GB of RAM. Power consumption of that J4125-based system has also been measured and is higher (best explained by the fact that it is faster, i.e. cannot clock down as much: 2000-2700 MHz vs. 1500-2500 MHz).
I therefore ordered a 16 GB DIMM: I can confirm that this works. However, my system has just 1 RAM socket, so 16 GB is the maximum.

Auto power on

It seems that to make the system automatically power-on after a power outage, a jumper needs to be set at PWRON1 at the pins marked PWR_SW1.

Independent of that, if the system does not start after having been powered off -- not even after the power button has been pressed -- the RTC/CMOS battery needs to be removed and inserted again.

BIOS settings

F11 or DEL to enter the AMI BIOS.

F2 to select boot drive.

MAC address can be found via Advanced.

Change OS to Linux via Chipset-South Bridge.

Change SATA Device Type to SSD via Chipset-South Cluster Configuration (not sure whether Mechanical Presence Switch setting matters and needs to be disabled).
Not sure about DITO (the time a given port must be idle before HW may enter DevSleep autonomously): it might help if the SSD gets hot/consumes too much energy.

Chipset-Miscellaneous Configuration: disable Power Button Debounce Mode to make the power button wake the system from standby mode.

Security-Secure Boot: Disable if booting Linux causes problems.

Security-Quiet Boot: Disable to see some BIOS messages at boot.

Boot: Change order of boot devices.

US keyboard

The BIOS assumes a US keyboard layout: on German/Icelandic keyboards, the key producing the pipe symbol is then the one directly left of the enter key.

Update 2023: Intel N100, N200 and N300/N305 Alder Lake N CPUs

The Intel N100, N200 and N300/N305 CPUs are some sort of successor to the J4105 CPU. The N100 and N200 both have 4 cores, and the N200 can clock higher and has a better GPU than the N100. The numbering is a mess (e.g., the N95 is faster than the N100); as a rule of thumb: those CPUs ending with the digit 5 are allowed to get hotter (i.e. higher TDP), i.e. they can probably sustain using all cores at highest speed for longer. But I do not know what the reason for the different TDPs is (are they the same die, with just the production results binned differently?). The N300/N305 has 8 cores and is also marketed as "Core i3". All support 2.5 Gb Ethernet. While they support only one DIMM (i.e. single-channel, being slower than two memory channels), they support DDR5 RAM, which is anyway 50% faster.

If the BIOS supports it, these systems even allow ECC, see my post on the iKOOLCORE R2 for more details on the ECC. Even if the BIOS does not support enabling IBECC, there are claims that using the AMISCE tool from AMI, you can set this from the command line (and then reboot -- just take care that you can clear the CMOS/NVRAM or have some rescue mode), e.g. on Linux, this is the SCELNX_64 / scelnx tool, but maybe the uefisettings tool works as well?

Those Chinese Intel i3-N305 fanless mini PCs also look good, but you never know what backdoors are in the BIOS. The Protectli systems have coreboot, but are more expensive and have outdated hardware, i.e. none of these new processors. The Starlabs Byte has an N200 with coreboot, but DDR4 RAM only. The official maximum RAM is 16 GB according to Intel, but there are systems offered with 32 GB:

  • TerraMaster F4-424 Pro NAS with N300 and 32 GB RAM. (While TerraMaster is Chinese, it might be interesting to see when the non-Chinese QNAP and Synology offer Alder Lake N-based systems; however, in contrast to TerraMaster, installing your own Linux, e.g. TrueNAS Scale, is not well supported there.)
  • CWWK Magic Computer with various CPUs and RAM configuration and it has even a PCIe socket
  • iKoolCore R2: it has a fan but is super tiny. For details, including getting ECC to run on Linux, see my blog post on the iKOOLCORE R2.
  • ODROID H4 from Hardkernel, which is South Korean, so no China BIOS -- hence the ODROID H4 sounds like a very good buy, and the support provided by Hardkernel also seems to be good. But it does not come with a decent case. If you sacrifice the NVMe SSD, you can use the NVMe port (which is in fact just PCIe) to add 4 further 2.5 Gb Ethernet ports, making it a great router; however, you then either need to use the slow eMMC or the SATA ports for storage.
  • You can also find many more devices at AliExpress...

Note that most of the above come with Intel i226 Ethernet chips that have good driver support in Linux and BSD; however, there are claims that these chips crash after a couple of hours and the only way to prevent this is to switch off PCIe power saving (ASPM) -- on the other hand, you find people reporting their N100 systems with i226 running rock-solid.

In general, these new CPUs are slightly faster than, e.g., an 8-core C3758 Xeon-like Atom CPU that is three years older but supports more RAM and even ECC. These new CPUs are, however, more I/O-limited (in terms of PCIe lanes) in comparison to that 8-core C3758 Xeon-like Atom CPU, which has a 25 W TDP but can still be passively cooled.

A test of an Intel N200-based fanless mini PC from Asus with DDR4 RAM mentions that the N200 with DDR5 RAM is faster. Idle power consumption is claimed to be 5-6 W, and 22 W under full load.

Others show an N305 system idling at 15-16 W, which is significantly more. (But the N305 has a 15 W TDP vs. 7 W for the N300 -- otherwise, both CPUs are exactly the same; I guess the N300 will simply start to throttle when stressing all cores. But the N305 can also be restricted via BIOS to a lower TDP.) There, you also find a performance comparison with a Raspberry Pi 4 and Pi 5: the N100 is twice as fast as a Pi 5 and four times faster than a Pi 4, and the N305 is almost twice as fast as an N100.

Wacom Tablet on Linux with dual/multi-screen setup

Helmut Neukirchen, 3. February 2021

Wacom tablets, including digitisers in screens, should be supported out-of-the-box with Linux.

I have a dual-screen setup and a Wacom tablet. As one screen is 4K UHD and the other FHD, this is too much screen real estate for my shaky hand on the tiny Wacom tablet that I have (the tablet has 2540 lpi resolution, so this is not a restriction of the tablet, but the blame goes to me). Therefore, I want to restrict the tablet use to the FHD screen only in order to get a calmer pen usage.

On my Debian KDE system, this can be done by two means:

  • Install the KDE Wacom configuration tool via the Debian package kde-config-tablet. In that tool, either set the mapping of the Wacom tablet to a specific screen (do not forget: you need to click "OK" also on the initial overview screen, i.e. finish the whole settings process), or use the pre-defined keyboard shortcuts to set a tablet-screen mapping.
  • Use the command line tool xsetwacom: get the screen name via xrandr (e.g. eDP for my laptop screen) and get the name of the Wacom tablet via xsetwacom --list. Note that multiple entries are listed there for the same tablet: take care to use the one that ends with stylus. In my case, I use
    xsetwacom set "Wacom Intuos S Pen stylus" MapToOutput eDP

In online teaching, I enjoy drawing on slides. E.g., for drawing on PDFs, I use either KDE's okular PDF viewer (in presentation mode, move to the upper screen edge to get a menu with pen colors) or xournal, or rather its fork xournal++/xournalpp. While I can run PowerPoint on Linux with Wine, precisely the presentation-mode drawing function does not work (I see the dot representing where the pen would draw, but drawing does not leave a trace).

For using the pen in Gimp, it is important to understand that the Gimp drawing tool that shall be mapped to the pen needs to be selected from the Toolbox window with the pen itself! (The pen and the mouse have different tools associated with them, and Gimp distinguishes this based on which input device is used to select the tool. Trying to select the tool with the mouse and then using the pen will only lead to the pen cropping (which is probably the default behaviour): this mode is also displayed at the bottom: "click-drag to draw a crop rectangle".)

Raspberry Pi 4: boot from USB with Ubuntu, ZFS

Helmut Neukirchen, 18. November 2020

First steps

When I was new to Raspberry Pi, I followed these German instructions.

Running Raspberry Pi on SD card and syslog wear

The default Raspberry Pi syslog logs to the normal file system, i.e. the SD card (if not using USB).
In order to log to RAM and write to the file system only when needed, you can use Log2RAM, either by installing it manually or via apt. Alternatively (I did not try it myself), use Zram-config.
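
A minimal sketch of the manual installation, assuming the azlux/log2ram GitHub repository (which provides an install script):

git clone https://github.com/azlux/log2ram.git
cd log2ram
sudo ./install.sh
sudo reboot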

But you anyway might want to use USB storage instead of SD card.

Booting Raspberry Pi from USB

The Raspberry Pi now supports booting from USB (having the latest firmware installed does not harm: I did this by booting Raspberry Pi OS from SD card. Note that in contrast to Raspberry Pi <4, the Raspberry Pi 4 actually stores the firmware in an EEPROM, not just as a file loaded at every boot from the FAT boot partition by the GPU firmware, as on Raspberry Pi <4). I then dd'ed the image from SD to USB mass storage.
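
On Raspberry Pi OS, the EEPROM firmware can be checked and updated as follows:

sudo rpi-eeprom-update      # check whether a firmware update is available
sudo rpi-eeprom-update -a   # stage the update (gets applied during the next reboot)
sudo reboot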

Booting Ubuntu from USB

If you want to go for Ubuntu (unlike Raspberry Pi OS, whose 64 bit version is just beta, Ubuntu is available as stable 64 bit and supports ZFS), the following is relevant:

Ubuntu for Raspberry Pi uses U-Boot as bootloader; however, U-Boot does not support booting from USB on the Raspberry Pi, only from SD card. I.e., while Ubuntu 20.04 LTS works out of the box when booting from SD card, when I dd'ed the SD card onto a USB drive, booting failed because U-Boot could not load the kernel via USB.

Luckily, the bootloader that is part of the Raspberry Pi firmware can boot Ubuntu without U-Boot. As always, a FAT format boot partition is needed that contains a couple of files in order to boot.

While U-Boot can load compressed (vmlinuz) kernel images and can load the kernel from an ext4 root filesystem, the Raspberry Pi bootloader firmware can only load uncompressed (vmlinux) kernel images and only from the FAT-based boot filesystem.

While the Ubuntu 20.04 LTS ARM64 image has a kernel on the FAT-based boot partition, it is unfortunately compressed (because the assumed U-Boot would be able to deal with it). Hence, you need to uncompress the kernel manually to allow the Raspberry Pi firmware bootloader to load and start the kernel.
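
A sketch of the decompression step, assuming the FAT-based boot partition is mounted at /boot/firmware (as Ubuntu does):

cd /boot/firmware
zcat vmlinuz > vmlinux   # the Raspberry Pi firmware bootloader can only load the uncompressed kernel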

In addition, it seems that the .dat and .elf files that are part of the bootstrapping need to be the most recent ones.

Hence, I downloaded the whole Raspberry Pi firmware from GitHub (via the green Code button) and extracted the .dat and .elf files from the boot directory.

Finally, you need to change config.txt by adding
kernel=vmlinux
initramfs initrd.img followkernel
in the [all] section, and comment out the [pi4] section.
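
The relevant part of config.txt would then look like this sketch (the exact content of the [pi4] section may differ in your image):

#[pi4]
#kernel=uboot_rpi_4.bin
#max_framebuffers=2

[all]
kernel=vmlinux
initramfs initrd.img followkernel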

That should be enough to boot Ubuntu from USB. My above steps are essentially based on https://eugenegrechko.com/blog/USB-Boot-Ubuntu-Server-20.04-on-Raspberry-Pi-4 where you find step-by-step instructions.

Note that when you later do a kernel update inside the booted Ubuntu, it might only update the kernel image on the ext4 root partition -- not on the FAT boot partition. In this case, you need to copy the kernel over and decompress it again.
(It should be possible to automate this; to get an idea, see: https://krdesigns.com/articles/Boot-raspbian-ubuntu-20.04-official-from-SSD-without-microsd or https://medium.com/@zsmahi/make-ubuntu-server-20-04-boot-from-an-ssd-on-raspberry-pi-4-33f15c66acd4.)

If you want to use the system headless (in fact, connecting a keyboard did not produce any input in my case), you can configure the network settings via the FAT-based boot partition: https://ubuntu.com/tutorials/how-to-install-ubuntu-on-your-raspberry-pi#3-wifi-or-ethernet

ZFS

Work in progress...

The ultimate goal is to have two drives as ZFS mirrors (RAID1) connected via USB.

(Be aware: 1. if the USB adapter claims data to have been written that has in fact not yet been written, ZFS may fail -- just like probably any journaling-based file system; 2. USB is not as stable as SATA, so an ODROID-HC4 or a Raspberry Pi 4 compute module with PCIe-based SATA might be better, or a Helios64, which might in the future even have ECC RAM. But at least the Raspberry Pi has the better ecosystem, and ZFS has some memory debug flag that does checksums for its RAM buffers.)

While https://www.nasbeery.de/ has some very easy script to use ZFS, it still assumes an SD card for the boot and root filesystem. It would of course be better to have everything on the USB drive (and even using RAID).

As the Raspberry Pi bootloader can only access a FAT-based boot partition, we still need a FAT-based boot partition on the USB drive. According to documentation, if the first probed USB drive does not have a boot partition, the next drive will be probed. So, it should be possible to have some sort of redundancy here (but we need to manually take care that both FAT-based boot partitions are synced after each kernel update to have some sort of RAID1).

As Ubuntu should be able to have the root partition on ZFS (once the Raspberry Pi firmware bootloader has loaded the kernel from the FAT-based boot partition), it should be possible to use ZFS as the root partition (what size? 50 GB?). The remainder could then be a ZFS data pool.

Note that if one of the RAID1 drives fails and needs to be replaced, the new drive might have slightly fewer sectors, so it is wise not to use all available space for the ZFS data pool. If we use a swap partition anyway, we could use it to utilise the remaining space (and then have a slightly smaller swap partition on the replacement drive if the replacement drive is smaller). The swap partition should not be on ZFS but a raw swap partition: Linux can either use multiple swap partitions, i.e. from all of the RAID drives -- or only use one and keep the others unused.

This means we still partition the mass storage instead of letting ZFS use it exclusively. The Raspberry Pi bootloader understands only the MBR partition format -- this might limit the drive size to 2 TB.
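
A hypothetical MBR layout along these lines, sketched with parted for a 2 TB drive /dev/sda (device name and sizes are made-up examples):

sudo parted -s /dev/sda mklabel msdos
sudo parted -s /dev/sda mkpart primary fat32 1MiB 513MiB         # FAT-based boot partition
sudo parted -s /dev/sda mkpart primary 513MiB 50GiB              # ZFS root pool
sudo parted -s /dev/sda mkpart primary 50GiB 1950GiB             # ZFS data pool (leaving headroom for a smaller replacement drive)
sudo parted -s /dev/sda mkpart primary linux-swap 1950GiB 100%   # raw swap fills the rest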

The following web pages cover ZFS as root:

Compiling ZFS for Raspberry Pi OS (first part), switching to ZFS root (second part), including initramfs and kernel cmdline.txt

https://github.com/jrcichra/rpi-zfs-root

https://www.reddit.com/r/zfs/comments/ekl4e1/ubuntu_with_zfs_on_raspberry_pi_4/

Update

I do not use any longer the Raspberry Pi for this, but trying now on PC hardware with Proxmox that comes with ZFS.

But currently, I have the problem that while the BIOS and Linux detect the two SSDs, Proxmox complains that it can find only one.

  • When booting with a USB key containing the Proxmox installer and the Crucial SSD disconnected and only the SanDisk SSD connected via the SATA to mSATA adapter, the USB key becomes /dev/sdb and the SanDisk is detected by Linux in dmesg as ata1 (with Features: Dev-Sleep) and as /dev/sda, but Proxmox says that it cannot find a disk at all.
  • When using the Crucial SSD on that mSATA port with the USB key and the SanDisk SSD not connected, Linux shows the Crucial SSD in dmesg as ata1 (with Features: Trust Dev-Sleep) and as /dev/sda and Proxmox allows to select ZFS RAID0.
  • When using the Crucial SSD on that mSATA port (ata1) and the SanDisk SSD on the SATA port (ata2) and booting with the USB key, Linux shows the Crucial SSD in dmesg as ata1 (with Features: Trust Dev-Sleep) and as /dev/sda. Note that the Crucial SSD is shown by dmesg with 4096-byte physical blocks, whereas the SanDisk SSD does not have this line, but says Preferred minimum I/O size 512 bytes, whereas the Crucial SSD says Preferred minimum I/O size 4096 bytes and supports TCG Opal. Proxmox allows to select ZFS RAID0, but not RAID1 (= mirror). As in list item 1 above, Proxmox does not seem to like the SanDisk SSD. Notably, the SanDisk SSD (/dev/sdb) is shown at mountpoint /cdrom -- not sure why (but unmounting it did not really make Proxmox use it).
  • Attaching the SanDisk SSD via a SATA USB adapter makes Linux show it as sdb (the Crucial SSD as usual as sda and the USB key as sdc). Still, Proxmox does not like it.
  • Note that the SMART info of the SanDisk SSD says Unrecoverable ECC count (Count of unrecoverable ECC errors) 15 sectors; my guess is that this is the reason why Proxmox does not like it! (See the smartctl check after this list.)
  • I have now added a Samsung SSD in addition to the Crucial one, and now Proxmox is happy. The Samsung SSD is only 500 GB, though. Buy a second Crucial MX500 2TB (CT2000MX500SSD)...
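
The SMART check mentioned above can be done with smartctl from the smartmontools package, e.g.:

sudo smartctl -a /dev/sdb | grep -i ecc   # print all SMART info and filter for the ECC error counters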

But now, Proxmox complains that the disks do not have the same size (on the command line, probably some --force/-f would be sufficient): "Warning: mirrored disks must have same size Please fix ZFS setup first".
The trick is to install it in ZFS RAID 0 mode on the smaller disk only. (What does not work, i.e. still gives the above complaint, is setting the "hdsize" parameter during the installation.)

Best practice: reference drives for ZFS via their /dev/disk/by-id path.
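
E.g., to later attach a second disk to the single-disk pool to form a mirror (rpool is the pool name that Proxmox creates; the device ids below are made-up placeholders):

ls -l /dev/disk/by-id/   # find the stable ids of the drives
zpool attach rpool ata-CT2000MX500SSD1_SERIALA ata-CT2000MX500SSD1_SERIALB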

USB adapters

I used USB to SATA adapters with the ASMedia ASM1153E chip, as these work with the Raspberry Pi and UASP.
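
Whether UASP is actually in use can be checked via the driver shown in the USB device tree (uas instead of usb-storage):

lsusb -t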

TODO: When checking dmesg after connecting a USB drive, it spat out some warnings that could be ignored.

TODO: performance tuning

https://www.jeffgeerling.com/blog/2020/raspberry-pi-usb-boot-uasp-trim-and-performance

https://www.jeffgeerling.com/blog/2020/enabling-trim-on-external-ssd-on-raspberry-pi

However, that in fact made everything slower (TODO: speed testing via hdparm, dd).
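
Something along these lines should do for the speed testing (hdparm for raw read throughput, dd with direct I/O for write throughput; the test file path is an arbitrary location on the mounted drive):

sudo hdparm -tT /dev/sda
dd if=/dev/zero of=/mnt/usb/testfile bs=1M count=1024 oflag=direct
rm /mnt/usb/testfile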

Debian 10 Buster Linux on Thinkpad T14 AMD

Helmut Neukirchen, 12. November 2020

Update: For having the side button of my Logitech mice act as middle mouse button, I used some Xorg rules in the past, but now I simply use Solaar.

Update: The WiFi did not work while an Ethernet cable was connected. In fact, that is a power-saving feature: disabling Wireless Auto Disconnection in the BIOS settings solved the problem.

Update: the text below refers to Debian 10 "buster". Now that Debian 11 "bullseye" has been released, which has Linux kernel 5.10, things should work out of the box. I just did a dist-upgrade from buster to bullseye, which was the smoothest dist-upgrade that I ever had, i.e. no problems at all -- except that I had to do an apt-get install linux-image-amd64 to get the standard bullseye kernel (my kernel installed manually from buster confused VirtualBox, which complained about missing matching kernel header source files).

The Thinkpad T14 AMD is a very nice machine and everything works with Linux (I did not test the fingerprint reader and infrared camera, though). I opted for the T14 over the slimmer T14s, because the T14s has no full sized Ethernet port and it seems that (due to being slimmer) the cooling is not as good as with the T14.

Kernel 5.9 (or later) is a must to support all the hardware of the Thinkpad T14 AMD, but the 4.x kernel used by the installer of Debian buster is sufficient to do the installation -- except that WiFi does not work, so you need an Ethernet cable connection during installation.

To get the 5.9 kernel in Debian Buster, it is at time of writing available from Sid (there are various ways to use packages from Sid in Stable -- in the simplest case, download the debs manually), and the following packages are needed:

  • linux-image-5.9*-amd64*
  • firmware-linux-free*
  • firmware-linux-nonfree*
  • firmware-misc-nonfree*
  • firmware-amd-graphics*
  • amd64-microcode* (Checking that the UEFI/BIOS is the most recent before doing a microcode upgrade is recommended. But the update might anyway be blocked via /etc/modprobe.d/amd64-microcode-blacklist.conf)

In principle, the kernel header files are also nice to have, but it may involve updating to a completely new GCC from Sid:

  • linux-headers-5.9*-amd64*
  • linux-headers-5.9*-common*

Note that you will not automatically get security updates if you update the kernel manually. You may want to give APT pinning a try for having only the kernel from Sid.
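
A sketch of such a pinning configuration, e.g. as /etc/apt/preferences.d/sid-kernel (keeps everything from Sid at low priority except the kernel packages):

Package: *
Pin: release a=unstable
Pin-Priority: 100

Package: linux-image-* linux-headers-*
Pin: release a=unstable
Pin-Priority: 500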

Update: a buster backport of a more recent kernel is available now; install via apt-get install -t buster-backports linux-image-amd64 (assuming you have backports configured). This also covers all the above packages, including the Linux headers.

Note that for the Intel AX200 WiFi to work, you also need the latest firmware-iwlwifi (the one from buster-backports is enough -- the one from Sid should not be necessary):
apt-get install --target-release=buster-backports firmware-iwlwifi (assuming that backports have been configured as APT source). Initially, I had to download some files directly from Intel, but it seems that the buster-backports package now contains the missing files.
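
Configuring backports means having a line like the following in /etc/apt/sources.list (or in a file below /etc/apt/sources.list.d/); non-free is needed for the firmware packages:

deb http://deb.debian.org/debian buster-backports main contrib non-free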

Graphics support was straightforward from Debian Stable:

  • xserver-xorg-video-amdgpu
  • firmware-amd-graphics*

I cannot remember whether I installed them explicitly or whether they are just installed because they are dependencies of the above AMDGPU package: I have a couple of Mesa and DRM packages installed (see also the Debian Howto on AMDGPU and the Debian page on video acceleration), e.g.:

  • libgl1-mesa-dri
  • libglx-mesa0
  • mesa-vulkan-drivers
  • mesa-va-drivers (for video decoding)
  • vdpau-driver-all (for video decoding)
  • libdrm-amdgpu

(The whole Linux graphics stack consists of more components; you may want to check...)

Check whether MESA is activated.

Also, some BIOS/UEFI tweaking was needed, e.g. changing the sleep state from Windows to Linux mode (and if you use an unsigned kernel: disabling secure boot).

There is one thing concerning suspend and resume, though: my Jabra USB headset stops working after resume -- manually unplugging and replugging the USB plug solves this problem, and remarkably, other USB devices (keyboard and mouse) work after resume. In the end, I wrote a script that removes and re-installs the USB module after resume and placed it as file /etc/pm/sleep.d/10-usbquirk.sh (do a chmod a+x). The content is as follows:

#!/bin/bash
# USB headset not working after suspend, so try reloading the USB module (other USB devices work, though).
# pm-utils calls this script with the power state transition as the first argument.
case "${1}" in
hibernate)
# Nothing to do when going to hibernation.
;;

suspend)
# Nothing to do when going to suspend.
;;

thaw)
# Waking up from hibernation: reload the xHCI USB host controller module.
rmmod xhci_hcd
sleep 0.5
modprobe xhci_hcd
;;

resume)
# Waking up from suspend: reload the xHCI USB host controller module.
rmmod xhci_hcd
sleep 0.5
modprobe xhci_hcd
;;

*)
;;
esac

(Somehow the indentation got removed by WordPress -- but in fact indentation does not matter for the Bash script.)

I also have the UltraDock docking station: it uses the two USB-C ports of the T14 plus a third, proprietary connector. AFAIK, the latter is for the Ethernet, even though it seems that it does not simply mechanically extend the Ethernet port, but that an extra Ethernet chip is built into the docking station. And I guess that one of the USB-C ports works in a mode where not USB is used, but the video signals are directly transmitted, so this is different from a pure USB-C dock. And indeed, the dock seems to have its own MAC address (I did not figure out how to find out what that MAC address is -- I probably need to connect it to my smart switch). While the BIOS has a MAC passthrough setting (which is enabled), under Linux, it does not get passed through.

Otherwise, the dock works without any extra configuration with Debian, including dock and undock (I use a 4k screen via one of the HDMI connectors of the UltraDock and a couple of USB A ports of the UltraDock).

I still dislike no longer having the bottom dock connector used by Lenovo in the past: you cannot simply "throw" the laptop onto the dock anymore, but need significant force to insert the three connectors from the side, and I am not sure how many docking cycles the USB-C connectors will last from a mechanical point of view.

I also tried a 4K screen (Lenovo P32p) that has USB-C for power delivery and video and even serves as a USB hub as well (i.e. with one USB-C cable, it powers the laptop, transmits the video, and mouse and keyboard are attached to the screen; it even has Ethernet that will then go via the same, single USB-C cable). This works nicely and in fact replaces a dock. -- However, in the beginning, I always needed to reboot the system in order to make the video work via USB-C. For some reason, this problem magically disappeared.

TODO: Check Thinkpad-specific packages (I currently do not have them installed and do not see any hardware support missing).

Update: There is now also Debian Wiki page on the Thinkpad T14.

Tahoma and Tahoma bold font in Wine/CrossOver

Helmut Neukirchen, 27. October 2016

Even if the free Microsoft Core fonts are installed, Tahoma is missing. A Microsoft knowledge base support entry offers it for download as Tahoma32.exe; however, this is a broken link. Hence, download the files contained therein (tahoma.ttf and tahomabd.ttf) from elsewhere (this seems to be legal, as Microsoft offered them to the public anyway), e.g. https://github.com/caarlos0/msfonts/tree/master/fonts

Copy the font files to the ~/.fonts directory and run fc-cache -fv.
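
I.e., assuming the two downloaded font files are in the current directory:

mkdir -p ~/.fonts
cp tahoma.ttf tahomabd.ttf ~/.fonts/
fc-cache -fv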

Some notes on using a Spark cluster

Helmut Neukirchen, 18. August 2016

The following notes are mainly for my personal use referring to the Spark 1.6/YARN cluster that I access, but maybe they are helpful for you as well...

Upload to HDFS

By default (= used implicitly by all HDFS operations), HDFS paths are relative to your HDFS home directory: it needs to be created first by the administrator!

While piping through SSH should work (cat test.txt | ssh username@masternode "hdfs dfs -put - hadoopFoldername/"), it is reported to be slow -- I never checked this, but as I anyway used rather small data, I instead did an scp to the local file system of the master node and used hdfs put afterwards:
scp localFile username@masternode:
hdfs dfs -put twitterSmall.csv Twitter

Concatenate HDFS files (all inside an HDFS directory) and store in local file system (without sorting)

hdfs dfs -getmerge HdfFolderContainingSplitResultFiles LocalFileToBeCreated

Note that Spark does not overwrite output files in HDFS by default. Either take care when you re-run jobs that the output files have been (re-)moved, or you have to allow overwriting in the Spark conf of your program: conf.set("spark.hadoop.validateOutputSpecs","false")

Debugging

  1. See http://spark.apache.org/docs/latest/running-on-yarn.html
  2. Use spark-submit --verbose
  3. If executor processes are killed, this is mainly due to insufficient RAM (garbage collection takes too long, thus timeouts occur, or simply out-of-memory/OOM exceptions). While in this case you only see "exit code 143" in the log of the driver on the spark-submit console, the details need to be found in the logs of the nodes/executors. This may not be possible via the Web UI due to executor nodes being firewalled -- in this case use:
    yarn logs -applicationId application_1470137500465_0147
    (App ID to be taken from the ID column in the Cluster Web UI. Works only for completed runs, not the current run.) In these logs, you can then find / search for java.lang.OutOfMemoryError: GC overhead limit exceeded or java.lang.OutOfMemoryError: Java heap space

Performance tuning

  1. Note that due to the HDFS block size of 128 MB, partitions of this size are created by default when reading data. To enforce a higher number of partitions/higher parallelism, use the optional numberOfPartitions parameter already at the file-read stage (it is also supported by many other RDD-creating operations).
  2. Some introduction https://www.mapr.com/blog/resource-allocation-configuration-spark-yarn
    http://blog.cloudera.com/blog/2015/03/how-to-tune-your-apache-spark-jobs-part-2/
    (in particular: more than 5 cores per executor is said to lead to bad HDFS throughput. Note that “executor” is not identical to “node”; thus, instead of running one executor with 24 cores on one node, rather run 4 executors with 5 cores each, or 8 executors with 3 cores! Note that then, however, the overall memory of a node needs to be divided by the number of executors per node, e.g. 5 GB per executor with 8 executors per node on a 40 GB RAM node.)
  3. Config for RAM-intensive jobs (=1 core per executor only & 1 core per node only, using 40GB heap space and 2GB overhead for Spark/Yarn itself => on each of the 38 nodes only one core is used that thus can make use of all available RAM), in addition increase timeouts and message size:
    spark-submit --conf "spark.network.timeout=300s" --conf "spark.akka.frameSize=2000" --driver-memory 30g --num-executors 38 --executor-cores 1 --conf "yarn.nodemanager.resource.cpu-vcores=1" --executor-memory 40g --conf "spark.yarn.executor.memoryOverhead=2000" --conf "spark.driver.cores=4" --conf "spark.driver.maxResultSize=0"
    (Note: not sure about the driver memory and cores: this seems to have no influence -- is it too late to set it here?)