Horizon 2020 Future and Emerging Technologies programme: Dynamical Exascale Entry Platform - Extreme Scale Technologies (DEEP-EST) about to finish

Helmut Neukirchen, 26. April 2021

The Horizon 2020 Future and Emerging Technologies programme: Dynamical Exascale Entry Platform - Extreme Scale Technologies (DEEP-EST) project has finished its work and was praised for its outcomes in the project's final review.

We still need to harvest our results by writing publications, but you can already find a video here:

All our travel emissions have been offset. As it is not clear whether funding regulations allow offsetting emissions caused by supercomputer energy consumption, those were not compensated. However, one of the research topics of the DEEP-EST project was energy efficiency, and we achieved a lot by using specialised (i.e. more efficient) accelerator hardware.

CoE RAISE Seminar: HPC Systems Engineering in the Interaction Room

Helmut Neukirchen, 14. April 2021

The European Centre of Excellence RAISE (Research on AI- and Simulation-Based Engineering at Exascale) is holding an online seminar on using the Interaction Room Software Engineering approach for HPC Systems Engineering.

This approach has been described in this publication:
Matthias Book, Morris Riedel, Helmut Neukirchen, Markus Götz.
Facilitating Collaboration in High Performance Computing Projects with an Interaction Room.
The 4th ACM SIGPLAN International Workshop on Software Engineering for Parallel Systems (SEPS 2017). Co-located with SPLASH 2017 as an ACM SIGPLAN-approved workshop.
October 23, 2017, Vancouver, Canada. DOI: 10.1145/3141865.3142467, ACM Digital Library 2017.
Download

The recording of the online seminar can now be found on the CoE RAISE YouTube channel:

HÍ eða HR, tölvunarfræði eða hugbúnaðarverkfræði / University of Iceland vs. Reykjavik University, Computer Science vs. Software Engineering

Helmut Neukirchen, 5. March 2021

HÍ eða HR / University of Iceland vs. Reykjavik University

Often, the question arises whether the University of Iceland (Háskóli Íslands (HÍ)) or Reykjavik University (Háskólinn í Reykjavík (HR)) is better for studying Computer Science (tölvunarfræði) or Software Engineering (hugbúnaðarverkfræði).

In my experience, the two universities do not differ that much -- on the surface things might look different, but when you look closer, they are not. As an example: HR advertises 3-week intensive courses to apply the theoretical foundations learned in earlier courses, whereas at HÍ, the application of the learned theory is built into the courses themselves: either as a project at the end of each course or as a project running throughout the whole semester.

However, there is one difference (in addition to paying high tuition fees at HR): the choice of courses from other disciplines. At HÍ, you can take non-CS or non-SE courses as part of your studies -- and these can not only be other STEM (Science, Technology, Engineering, and Mathematics) courses, but also, e.g., foreign languages. As HR is quite limited in the number of courses due to its limited number of study programmes, HÍ has a big advantage there.

Tölvunarfræði eða Hugbúnaðarverkfræði / Computer Science (CS) vs. Software Engineering (SE)

Another question is about the difference between Computer Science (Tölvunarfræði) and Software Engineering (Hugbúnaðarverkfræði): while both are in essence about programming, Software Engineering goes beyond that, as it has the "big picture" in mind -- not only, e.g., the big picture of a software architecture, but also management aspects, e.g. project management and quality management. For example, SE students take courses from Industrial Engineering on project management and quality management (in addition to software quality management offered by me). When it comes to stakeholder relations (one of the biggest problems in software projects is unclear requirements, where the developed software does not meet the needs of users) and to user experience, SE requires many soft skills -- including psychology (e.g. work psychology and human-computer interaction and usability).

One might be tempted to say that CS is maybe for the nerds and SE for those who can talk to people and lead projects. But in fact, SE is not solely about soft skills: you need both soft and hard skills. "Engineer" is an officially licensed professional title, and as such, the regulations that apply to the contents of any Engineering programme in Iceland apply to Software Engineering as well, e.g. taking a certain amount of Maths and Science courses -- the exact opposite of soft skills. So, to be a good Software Engineer, you need both talents: people and tech.

Note that even if you enroll in our Computer Science programme, it allows so much freedom in the selection of courses that you could take the same courses that a Software Engineering student has to take. (However, in this case, you will not be entitled to apply for a license as a professional Engineer, as you did not study an Engineering discipline, but a Science, namely Computer Science.)

Further information

If you want more information on our programmes:

Bachelor (B.Sc.)

Computer Science (Tölvunarfræði) -- we recently added a specialisation in Data Science

Software Engineering (Hugbúnaðarverkfræði)

Master (M.Sc.)

Computer Science (Tölvunarfræði)

Software Engineering (Hugbúnaðarverkfræði)

Computational Engineering (Reikniverkfræði)

Ph.D.

And of course, you can also do a PhD in any of these areas. Before you apply, contact a professor: either by a personal visit or -- if you are located abroad -- by writing an old-school paper letter (professors get hundreds of emails with PhD applications where it is obvious that the same email was sent to many professors, and thus these emails are treated as spam -- but a paper letter makes an impression)!

Stafræni Háskóladagurinn 2021: Object detection using neural networks in your smartphone trained by a supercomputer

Helmut Neukirchen, 25. February 2021

The University of Iceland's Computer Science department is researching machine learning using the next-generation supercomputer DEEP-EST -- by the way, we also offer a Data Science specialisation in our Computer Science programme, where, e.g., machine learning including deep neural networks is covered. To showcase what is possible if you have a supercomputer to train neural networks, we offer a web page that allows you to use the camera of your smartphone (or laptop) to detect objects in real-time.


Just open the following web page and allow your browser to use the camera: https://nvndr.csb.app/
(Allow up to approx. 1 minute for loading the trained neural network and for initialisation. Web page works best in landscape orientation.)

While neural networks are still best trained on a supercomputer, such as DEEP-EST with its Data Analysis Module, the trained neural network even runs in the browser of a smartphone (purely running locally as Javascript in your browser without any connection to a supercomputer, i.e. completely offline after having downloaded the Javascript code and the trained neural network).

The approach used is the Single Shot Detector (SSD) (the percentage shows how confident the neural network is about the classification) with the MobileNet neural network architecture. The dataset used for training is COCO (Common Objects in Context), i.e. only objects of the labeled object classes contained in COCO will get detected. The Javascript code running in your browser uses Tensorflow Lite and its Object Detection API.

Example object detection via a neural network

If you want to learn more about the DEEP-EST project, where the next-generation supercomputer is developed, have a look at the poster below (click on the picture for a PDF version):

PDF of DEEP-EST poster

European Centre of Excellence RAISE (Research on AI- and Simulation-Based Engineering at Exascale)

Helmut Neukirchen, 11. February 2021

The University of Iceland is part of the European Centre of Excellence RAISE (Research on AI- and Simulation-Based Engineering at Exascale) that started in 1/2021 and will end 12/2023. It is funded by the European Commission's Horizon 2020 programme with an overall budget of € 4 969 347. The University of Iceland's team is led by Morris Riedel together with Matthias Book and Helmut Neukirchen (all professors at the Faculty of Industrial Engineering, Mechanical Engineering and Computer Science), and several PhD students are funded by this project.

Compute- and data-driven research encompasses a broad spectrum of disciplines and is the key to Europe’s global success in various scientific and economic fields. The massive amount of data produced by such technologies demands novel methods to post-process, analyze, and to reveal valuable mechanisms. The development of artificial intelligence (AI) methods is rapidly proceeding and they are progressively applied to many stages of workflows to solve complex problems. Analyzing and processing big data require high computational power and scalable AI solutions. Therefore, it becomes mandatory to develop entirely new workflows from current applications that efficiently run on future high-performance computing architectures at Exascale. The RAISE Center of Excellence for Research on AI- and Simulation-Based Engineering at Exascale will be the excellent enabler for the advancement of such technologies in Europe on industrial and academic levels, and a driver for novel intertwined AI and HPC methods. These technologies will be advanced along representative use-cases, covering a wide spectrum of academic and industrial applications, e.g., coming from wind energy harvesting, wetting hydrodynamics, manufacturing, physics, turbomachinery, and aerospace. It aims at closing the gap in full loops using forward simulation models and AI-based inverse inference models, in conjunction with statistical methods to learn from current and historical data. In this context, novel hardware technologies, i.e., Modular Supercomputing Architectures, Quantum Annealing, and prototypes from the DEEP project series will be used for exploring unseen performance in data processing. Best practices, support, and education for industry, SMEs, academia, and HPC centers on Tier-2 level and below will be developed and provided in RAISE's European network attracting new user communities. This goes along with the development of a business providing new services to various user communities.

Erasmus+ Exchange Computer Science University of Iceland / skiptinám tölvunarfræði Háskóli Íslands

Helmut Neukirchen, 10. February 2021

The Computer Science department of the University of Iceland is part of Erasmus+ and as such it is possible to have exchange of students (and also teachers) with other universities abroad (incoming and outgoing).

For an exchange, a bilateral contract between the two universities needs to be set up. Currently, we have the following contracts, but new contracts can be set up on demand:

Johannes Kepler University Linz
University of Antwerp
ETH Zürich
Universität Duisburg Essen
Georg August Universität Göttingen
Technical University of Munich
Universidad Complutense de Madrid
Université du Luxembourg
University of Groningen
Lodz University of Technology
Glasgow Caledonian University

In particular for German speaking universities, I can serve as a contact point.

Wacom Tablet on Linux with dual/multi-screen setup

Helmut Neukirchen, 3. February 2021

I have a dual-screen setup and a Wacom tablet. As one screen is 4K UHD and the other FHD, this is too much screen real estate for my shaky hand on the tiny Wacom tablet that I have (the tablet has 2540 lpi resolution, so this is not a restriction of the tablet; the blame goes to me). Therefore, I want to restrict the tablet to the FHD screen only in order to get a calmer pen usage.

On my Debian KDE system, this can be done by two means:

  • Install the KDE Wacom configuration tool via the Debian package kde-config-tablet. In that tool, either set the mapping of the Wacom tablet to a specific screen (do not forget: you also need to click "OK" on the initial overview screen, i.e. complete the whole settings process), or use the pre-defined keyboard shortcuts to set a tablet-screen mapping.
  • Use the command line tool xsetwacom: get the screen name via xrandr (e.g. eDP for the laptop screen) and get the name of the Wacom tablet via xsetwacom --list. Note that multiple entries are listed there for the same tablet: take care to use the one that ends with stylus. In my case, I use
    xsetwacom set "Wacom Intuos S Pen stylus" MapToOutput eDP
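Since the stylus device name is tedious to retype, the lookup can be scripted. The following is only a sketch: the output name eDP and the parsing of xsetwacom's list format are assumptions matching my setup:

```shell
#!/bin/sh
# Extract the first "... stylus" device name from xsetwacom's device list.
# Reading from stdin keeps the parsing testable without a tablet attached.
stylus_device() {
  grep -o '^.*stylus' | head -n 1
}

# Usage with a tablet attached (eDP is my laptop screen, see xrandr):
#   dev=$(xsetwacom --list | stylus_device)
#   xsetwacom set "$dev" MapToOutput eDP
```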

In online teaching, I enjoy drawing on slides. E.g. for drawing on PDFs, I use either KDE's Okular PDF viewer (in presentation mode, move to the upper screen edge to get a menu with pen colours) or xournal, or rather its fork xournal++/xournalpp. While I run PowerPoint on Linux with Wine, precisely the presentation-mode drawing function does not work (I see the dot representing where the pen would draw, but drawing does not leave a trace).

For Gimp, it is important to understand that the tool to be mapped to the pen needs to be selected from the Toolbox window with the pen itself! (The pen and the mouse have different tools associated with them, and Gimp distinguishes them based on which input device is used to select the tool. Selecting the tool with the mouse and then using the pen will only lead to the pen cropping (which is probably the default behaviour); this mode is also displayed at the bottom: "click-drag to draw a crop rectangle".)

@students: looking for a Master's thesis project?

Helmut Neukirchen, 9. December 2020

Are you looking for a Master's thesis project? If you want to do it in the field of software engineering and/or distributed systems, contact Helmut. See my research areas for topics where I can make suggestions concerning projects. If you are interested in topics other than my research areas, we can discuss this as well.

Currently, I have funding for a Master's thesis project in aspect-orientation for software testing.

Some other possible topics are:

  • LoRa is a long-range, low-power (but also low-bandwidth) wireless communication technology suitable for IoT, such as transmitting sensor data. Various research in the field of distributed systems/wireless networking/IoT is possible using LoRa. This could include connecting to LoRa satellites, distributing GPS correction data, e.g., for precise navigation of autonomous vessels, or the next topic shown below.
    In 2022, I am looking for students working in the project Communication with buoys using LoRa.
  • Delay-tolerant opportunistic networking in an Icelandic context: in areas without mobile phone coverage, mobile devices can still communicate via WiFi or Bluetooth, and since they are mobile, they can carry and buffer received data until another mobile device is met, i.e. hop-by-hop communication involving mobility of nodes and buffering of data. Possible implementation targets would be ESP8266 and Android or iOS. Combining this with LoRaWAN might also be considered, e.g. using The Things Network or adding own stations. The long-term goal would be to have this integrated into the 112 Iceland app (together with Esa Hyytiä and Ingólfur Hjörleifsson)
  • Refactoring for LaTeX. While LaTeX is for text processing, it is at the same time almost like a programming language and therefore, refactorings are desirable (e.g. renaming an internal label and all references to it, pushing down sections into subsections / pulling up subsections into sections, adding columns to a table, replacing double quotes by typographic quotes, replacing spaces by non-breaking spaces in front of references, adjusting space following abbreviations where LaTeX thinks the dot is a fullstop, changing BibTeX entry types from one into another). These refactorings shall be implemented and added to some open-source LaTeX editing environment.
  • Machine learning and Software Engineering: either work on how to, e.g., test machine learning software or how to use machine learning in Software Engineering, e.g. use machine learning to support software testing
  • Scalable video conferencing: the COVID pandemic has shown that there is a need for a FOSS video conferencing solution that is able to handle more than 50 students in an online lecture. BigBlueButton and Jitsi Meet are FOSS solutions; however, their scalability is limited. The idea is to identify/profile scalability bottlenecks and develop improvements.
  • Developing concepts for testing High-Performance Computing (HPC) code (e.g. how to use the Google Test framework to test HPC C++ code)
  • Developing and implementing refactoring concepts for High-Performance Computing (HPC) C++ code and the Eclipse CDT C/C++ IDE
  • Enabling big data processing of scientific file formats: the HDF5 binary file format used in sciences cannot be processed out of the box by the text-based big data approaches: An Apache Spark RDD/Hadoop MapReduce input format for HDF5 files that takes locality of the HDFS distributed file-system into account
  • Performance modelling (together with Esa Hyytiä) of big data systems (=local storage) in comparison to high-performance computing systems (=central storage)
  • Migrating Eclipse IDE functionality to an Xtext-based model-based IDE.
  • Model-based (Eclipse modeling tools) generation of TTCN-3 test case skeletons from TPlan test purpose descriptions.
  • Model-based (Eclipse Xtext + possibly a suitable model transformation) refactoring of TTCN-3 source code.
  • Traceability between requirements and UML models (connecting Eclipse ProR and Eclipse Papyrus tools using OSLC technology)
  • Generation of Jata-based Java test cases from TTCN-3 source code (=TTCN-3 to Jata compiler).
  • Selection of a set of JUnit regression tests for a source code change based on the source code coverage recorded for each test in earlier test runs: regression testing is about finding out which parts of the code have changed and then running only those tests that relate to that change. The PIT mutation testing tool applies this approach internally, but what is missing is a tool that tells a developer explicitly which tests to re-run based on a coverage analysis.
  • Model-based (Eclipse modeling tools) refactoring of UML models and diagrams.
  • Aspect-oriented testing: develop an aspect-oriented language extension for the test language TTCN-3

Raspberry Pi 4: boot from USB with Ubuntu, ZFS

Helmut Neukirchen, 18. November 2020

First steps

When I was new to Raspberry Pi, I followed these German instructions.

Running Raspberry Pi on SD card and syslog wear

The default Raspberry Pi syslog logs to the normal file system, i.e. the SD card (if not using USB).
In order to log to RAM and write to the file system only when needed, you can use Log2RAM, either by installing it manually or via apt. Alternatively (I did not try it myself), use: zram-config.

But you anyway might want to use USB storage instead of SD card.

Booting Raspberry Pi from USB

The Raspberry Pi now supports booting from USB (installing the latest firmware does no harm: I did this by booting Raspberry Pi OS from SD card. Note that in contrast to Raspberry Pi <4, the Raspberry Pi 4 actually stores the firmware in an EEPROM, not just as a file loaded at every boot from the FAT boot partition by the GPU firmware, as on Raspberry Pi <4). I then dd'ed the image from SD to USB mass storage.
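The dd step can be sketched as follows (the device names in the usage comment are hypothetical examples; verify with lsblk before writing to any device):

```shell
#!/bin/sh
# Sketch: clone a block device (or image file) byte-for-byte, as when
# copying the SD card contents to USB mass storage. Works on plain files, too.
clone_disk() {
  dd if="$1" of="$2" bs=4M conv=fsync status=progress
  sync
}

# Usage (hypothetical device names -- double-check with lsblk first!):
#   clone_disk /dev/mmcblk0 /dev/sda
```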

Booting Ubuntu from USB

If you want to go for Ubuntu (which, unlike the 64-bit Raspberry Pi OS that is still in beta, is available as a stable 64-bit release and supports ZFS), the following is relevant:

Ubuntu for Raspberry Pi uses U-Boot as bootloader, however U-Boot does not support booting from USB on Raspberry Pi, only from SD card, i.e. while Ubuntu 20.04 LTS works out of the box when booting from SD card, when I dd'ed the SD card onto a USB drive, booting failed because U-Boot could not load the kernel via USB.

Luckily, the bootloader that is part of the Raspberry Pi firmware can boot Ubuntu without U-Boot. As always, a FAT format boot partition is needed that contains a couple of files in order to boot.

While U-Boot can load compressed (vmlinuz) kernel images and can load the kernel from an ext4 root filesystem, the Raspberry Pi bootloader firmware can only load uncompressed (vmlinux) kernel images and only from the FAT-based boot filesystem.

While the Ubuntu 20.04 LTS ARM64 image has a kernel on the FAT-based boot partition, it is unfortunately compressed (because the assumed U-Boot would be able to deal with it). Hence, you need to uncompress the kernel manually to allow the Raspberry Pi firmware bootloader to load and start the kernel.

In addition, it seems that the .dat and .elf files that are part of the bootstrapping need to be the most recent ones.

Hence, I downloaded the whole Raspberry Pi firmware from GitHub (via the green Code button) and extracted the .dat and .elf files from the boot directory.

Finally, you need to change config.txt by adding
kernel=vmlinux
initramfs initrd.img followkernel
in the [all] section, and comment out the [pi4] section.
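These two steps can be sketched as follows (the mount point of the FAT boot partition is an assumption to adapt):

```shell
#!/bin/sh
# Sketch: prepare the FAT boot partition so the Raspberry Pi firmware
# can boot Ubuntu directly, without U-Boot.

# 1. Uncompress the kernel (the firmware cannot load a compressed vmlinuz):
decompress_kernel() {
  zcat "$1/vmlinuz" > "$1/vmlinux"
}

# Usage, assuming the FAT boot partition is mounted at /media/boot:
#   decompress_kernel /media/boot
#
# 2. Then add to config.txt (in the [all] section; comment out [pi4]):
#   kernel=vmlinux
#   initramfs initrd.img followkernel
```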

That should be enough to boot Ubuntu from USB. My above steps are essentially based on https://eugenegrechko.com/blog/USB-Boot-Ubuntu-Server-20.04-on-Raspberry-Pi-4 where you find step-by-step instructions.

Note that when you later do a kernel update inside the booted Ubuntu, it might only update the kernel image in the ext4 root partition -- not in the FAT boot partition. In this case, you need to copy the kernel over and decompress it again.
(It should be possible to automate this; to get an idea, see: https://krdesigns.com/articles/Boot-raspbian-ubuntu-20.04-official-from-SSD-without-microsd or https://medium.com/@zsmahi/make-ubuntu-server-20-04-boot-from-an-ssd-on-raspberry-pi-4-33f15c66acd4.)
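A minimal sketch of such an automation, assuming hypothetical mount points for the ext4 and FAT boot locations:

```shell
#!/bin/sh
# Sketch: after a kernel update, copy the new kernel (and initrd) from the
# ext4 root partition to the FAT boot partition and decompress it again,
# so that the Raspberry Pi firmware bootloader can load it.
sync_kernel() {
  ext4boot="$1"   # e.g. /boot (on the ext4 root partition) -- assumption
  fatboot="$2"    # e.g. /boot/firmware (the FAT partition) -- assumption
  cp "$ext4boot/vmlinuz" "$fatboot/vmlinuz"
  zcat "$fatboot/vmlinuz" > "$fatboot/vmlinux"
  cp "$ext4boot/initrd.img" "$fatboot/initrd.img"
}

# Usage (hypothetical mount points):
#   sync_kernel /boot /boot/firmware
```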

If you want to use the system headless (in fact, connecting a keyboard did not produce any input in my case), you can configure the network settings via the FAT-based boot partition: https://ubuntu.com/tutorials/how-to-install-ubuntu-on-your-raspberry-pi#3-wifi-or-ethernet

ZFS

Work in progress...

The ultimate goal is to have two drives as ZFS mirrors (RAID1) connected via USB.

(Be aware: 1. if the USB adapter claims data to have been written that in fact has not yet been written, ZFS may fail -- just like probably any journaling-based file system; 2. USB is not as stable as SATA, so an ODROID-HC4 or a Raspberry Pi 4 compute module with PCI-based SATA might be better, or a Helios64, which might in future even have ECC RAM. But at least Raspberry Pi has the better ecosystem, and ZFS has a memory debug flag that computes checksums for its RAM buffers.)

While https://www.nasbeery.de/ has some very easy script to use ZFS, it still assumes an SD card for the boot and root filesystem. It would of course be better to have everything on the USB drive (and even using RAID).

As the Raspberry Pi bootloader can only access a FAT-based boot partition, we still need a FAT-based boot partition on the USB drive. According to the documentation, if the first probed USB drive does not have a boot partition, the next drive will be probed. So, it should be possible to have some sort of redundancy here (but we need to manually take care that both FAT-based boot partitions are synced after each kernel update to have some sort of RAID1).

As Ubuntu should be able to have the root partition on ZFS (once the Raspberry Pi firmware bootloader has loaded the kernel from the FAT-based boot partition), it should be possible to use ZFS for the root partition (what size? 50 GB?). The remainder could then be a ZFS data pool.

Note that if one of the RAID1 drives fails and needs to be replaced, the new drive might have slightly fewer sectors, so it is wise not to use all available space for the ZFS data pool. If we use a swap partition anyway, we could place it last to utilise the remaining space (and then have a slightly smaller swap partition on the replacement drive if the replacement drive is smaller). The swap partition should not be on ZFS but a raw swap partition: Linux can either use multiple swap partitions, i.e. one from each of the RAID drives -- or use only one and keep the others unused.

This means we still partition the mass storage instead of letting ZFS use it exclusively. The Raspberry Pi bootloader understands only the MBR partition format -- this might limit the drive size to 2 TB.
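The resulting plan could be written down as sfdisk input for each of the two mirror drives. This is only a sketch: all sizes are hypothetical (assuming roughly a 1 TB drive) and would need to be adapted:

```
label: dos
# FAT boot partition, read by the Raspberry Pi firmware:
size=512MiB, type=c
# ZFS root pool:
size=50GiB, type=bf
# ZFS data pool (leave headroom for a slightly smaller replacement drive):
size=870GiB, type=bf
# swap: whatever space remains:
type=82
```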

The following web pages cover ZFS as root:

Compiling ZFS for Raspberry Pi OS (first part), switching to ZFS root (second part), including initramfs and kernel cmdline.txt

https://github.com/jrcichra/rpi-zfs-root

https://www.reddit.com/r/zfs/comments/ekl4e1/ubuntu_with_zfs_on_raspberry_pi_4/

USB adapters

I used USB-to-SATA adapters with the ASMedia ASM1153E chip, as these work with the Raspberry Pi and UASP.

TODO: When checking dmesg after connecting a USB drive, it spat out some warnings that could apparently be ignored.

TODO: performance tuning

https://www.jeffgeerling.com/blog/2020/raspberry-pi-usb-boot-uasp-trim-and-performance


https://www.jeffgeerling.com/blog/2020/enabling-trim-on-external-ssd-on-raspberry-pi

However, that in fact made everything slower (TODO: speed testing via hdparm, dd).

Debian 10 Buster Linux on Thinkpad T14 AMD

Helmut Neukirchen, 12. November 2020

Update: the text below refers to Debian 10 "Buster". Now that Debian 11 "Bullseye" has been released, which has Linux kernel 5.10, things should work out of the box. I just did a dist-upgrade from Buster to Bullseye, which was the smoothest dist-upgrade I ever had, i.e. no problems at all -- except that I had to do apt-get install linux-image-amd64 to get the standard Bullseye kernel (my kernel manually installed from Buster confused VirtualBox, which complained about missing matching kernel header source files).

The Thinkpad T14 AMD is a very nice machine and everything works with Linux (I did not test the fingerprint reader and infrared camera, though). I opted for the T14 over the slimmer T14s because the T14s has no full-sized Ethernet port and it seems that (due to being slimmer) its cooling is not as good as the T14's.

Kernel 5.9 (or later) is a must to support all the hardware of the Thinkpad T14 AMD, but the 4.x kernel used by the Debian Buster installer is sufficient to do the installation -- except that Wifi does not work, so you need an Ethernet cable connection during installation.

To get the 5.9 kernel in Debian Buster, at the time of writing it is available from Sid (there are various ways to use packages from Sid in Stable -- in the simplest case, download the debs manually), and the following packages are needed:

  • linux-image-5.9*-amd64*
  • firmware-linux-free*
  • firmware-linux-nonfree*
  • firmware-misc-nonfree*
  • firmware-amd-graphics*
  • amd64-microcode* (Checking that the UEFI/BIOS is up to date before applying a microcode update is recommended. But the update might anyway be blocked via /etc/modprobe.d/amd64-microcode-blacklist.conf.)

In principle, the kernel header files are also nice to have, but installing them may involve pulling a completely new GCC from Sid:

  • linux-headers-5.9*-amd64*
  • linux-headers-5.9*-common*

Note that you will not automatically get security updates if you update the kernel manually. You may want to give APT pinning a try to get only the kernel from Sid.
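Such a pin could be sketched as a file under /etc/apt/preferences.d/ (the file name, glob patterns, and priorities are assumptions to adapt, not a tested configuration):

```
# /etc/apt/preferences.d/kernel-from-sid (hypothetical sketch)
# Keep everything from Sid at low priority, so nothing else auto-upgrades:
Package: *
Pin: release a=unstable
Pin-Priority: 100

# ... except the kernel image packages, which may come from Sid:
Package: linux-image-*
Pin: release a=unstable
Pin-Priority: 500

# ... and the matching kernel headers:
Package: linux-headers-*
Pin: release a=unstable
Pin-Priority: 500
```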

Update: a Buster backport of a more recent kernel is available now; install it via apt-get install -t buster-backports linux-image-amd64 (assuming you have backports configured). This also provides all the above packages, including the Linux headers.

Note that for the Intel AX200 Wifi to work, you also need the latest firmware-iwlwifi (the one from buster-backports is enough -- the one from Sid should not be necessary):
apt-get install --target-release=buster-backports firmware-iwlwifi (assuming that backports have been configured as an APT source). Initially, I had to download some files directly from Intel, but it seems that the buster-backports package now contains the missing files.

Graphics support was straightforward from Debian Stable:

  • xserver-xorg-video-amdgpu
  • firmware-amd-graphics*

I cannot remember whether I installed them explicitly or whether they were just pulled in as dependencies of the above AMDGPU package: I have a couple of Mesa and DRM packages installed (see also the Debian Howto on AMDGPU and the Debian page on video acceleration), e.g.:

  • libgl1-mesa-dri
  • libglx-mesa0
  • mesa-vulkan-drivers
  • mesa-va-drivers (for video decoding)
  • vdpau-driver-all (for video decoding)
  • libdrm-amdgpu

(The whole Linux graphics stack consists of more components; you may want to check.)

Check whether MESA is activated.

Also, some BIOS/UEFI tweaking was needed, e.g. changing the sleep state from Windows to Linux mode (and if you use an unsigned kernel: disabling Secure Boot).

There is one issue concerning suspend and resume, though: my Jabra USB headset stops working after resume -- manually unplugging and replugging the USB plug solves the problem, and remarkably, other USB devices (keyboard and mouse) work after resume. In the end, I wrote a script that removes and reloads the USB module after resume and placed it as /etc/pm/sleep.d/10-usbquirk.sh (do a chmod a+x). The content is as follows:

#!/bin/bash
# USB headset not working after suspend, so try reloading the USB module
# (other USB devices keep working, though).
case "${1}" in
    hibernate)
        ;;

    suspend)
        ;;

    thaw)
        rmmod xhci_hcd
        sleep 0.5
        modprobe xhci_hcd
        ;;

    resume)
        rmmod xhci_hcd
        sleep 0.5
        modprobe xhci_hcd
        ;;

    *)
        ;;
esac

I also have the UltraDock docking station: it uses the two USB-C ports of the T14 plus a third proprietary connector: AFAIK that one is for Ethernet, even though it seems that it does not simply extend the Ethernet port mechanically, but has an extra Ethernet chip built into the docking station. And I guess that one of the USB-C ports works in a mode where not USB but the video signals are transmitted directly, so it is different from a pure USB-C dock. And indeed, the dock seems to have its own MAC address (I did not figure out how to find out what that MAC address is -- I probably need to connect it to my smart switch). While the BIOS has a MAC passthrough setting (which is enabled), under Linux it does not get passed through.

Otherwise, the dock works without any extra configuration with Debian, including dock and undock (I use a 4k screen via one of the HDMI connectors of the UltraDock and a couple of USB A ports of the UltraDock).

I still miss the bottom dock connector that Lenovo used in the past: you cannot simply "throw" the laptop onto the dock anymore, but need significant force to insert the three connectors from the side, and I am not sure how many docking cycles the USB-C connectors will last from a mechanical point of view.

I also tried a 4K screen (Lenovo P32p) that has USB-C for power delivery and video and even serves as a USB hub as well (i.e. with one USB-C cable, it powers the laptop, transmits the video, and mouse and keyboard are attached to the screen; it even has Ethernet that then goes via the same, single USB-C cable). This works nicely and in fact replaces a dock. However, in the beginning I always needed to reboot the system in order to make the video work via USB-C. For some reason, this problem magically disappeared.

TODO: Check Thinkpad specific packages (I have them currently not installed, and do not see any hardware support missing).

Update: There is now also Debian Wiki page on the Thinkpad T14.