@students: looking for a Master's thesis project?

Helmut Neukirchen, 9. December 2020

Note: while this post has been created in 2020, it is regularly updated.

Are you looking for a Master's thesis project? If you want to do it in the field of software engineering and/or distributed systems, contact Helmut. See my research areas for topics where I can make suggestions concerning projects. If you are interested in other topics than my research area, we can discuss this as well.

Currently, I have funding for a Master's thesis project in aspect-orientation for software testing.

Some other possible topics are:

  • Running large language models (LLMs) locally: for privacy reasons, you do not want to send your code to some LLM in the cloud, or send email that AI shall help answer to the cloud. However, there are scaled-down LLMs that run locally, even without a GPU, just on a CPU, e.g. Gemma. A thesis topic could be to create a Gemma plugin for a mail client, such as Thunderbird, or for an IDE, such as Eclipse.
  • AI for creating test cases: evaluate how good AI tools are at creating test cases; e.g. https://codecept.io/ai shows an evaluation.
  • Standard methods for designing test cases surprisingly lack tool support. A thesis could create support for black-box test design techniques, such as equivalence class partitioning and boundary value analysis, e.g. generating all combinations of test input data, e.g. for JUnit test cases (either let a tool generate many test cases or develop a JUnit annotation in which the equivalence classes are described and which then internally runs all possible combinations as test cases). Also cause-effect graphs and decision tables can easily be automated by tool support.
  • LoRa is a long-range, low-power (but also low-bandwidth) wireless communication suitable for IoT, such as transmitting sensor data. Various research in the field of distributed systems/wireless networking/IoT is possible using LoRa. This could include connecting to LoRa satellites, distributing GPS corrections data, e.g., for precise navigation of autonomous vessels, or the next topic shown below.
    In 2022, I am looking for students to work on the project Communication with buoys using LoRa.
  • Delay-tolerant opportunistic networking in an Icelandic context: in areas without mobile phone coverage, mobile devices can still communicate via WiFi or Bluetooth, and since they are mobile, they can carry and buffer received data until another mobile device is met, i.e. hop-by-hop communication involving mobility of nodes and buffering of data. Possible implementation targets would be ESP8266 and Android or iOS. Combining this with LoRaWAN might also be considered, e.g. using The Things Network or adding own stations. The long-term goal would be to have this integrated into the 112 Iceland app (together with Esa Hyytiä and Ingólfur Hjörleifsson).
  • Refactoring for LaTeX. While LaTeX is for text processing, it is at the same time almost like a programming language, and therefore refactorings are desirable (e.g. renaming an internal label and all references to it, pushing down sections into subsections / pulling up subsections into sections, adding columns to a table, replacing double quotes by typographic quotes, replacing spaces by non-breaking spaces in front of references, adjusting space following abbreviations where LaTeX thinks the dot is a full stop, changing BibTeX entry types from one into another). These refactorings shall be implemented and added to some open-source LaTeX editing environment.
  • Machine learning and Software Engineering: either work on how to, e.g., test machine learning software, or on how to use machine learning in Software Engineering, e.g. to support software testing
  • Developing concepts for testing High-Performance Computing (HPC) code (e.g. how to use the Google Test framework to test HPC C++ code)
  • Developing and implementing refactoring concepts for High-Performance Computing (HPC) C++ code and the Eclipse CDT C/C++ IDE
  • Enabling big data processing of scientific file formats: the HDF5 binary file format used in the sciences cannot be processed out of the box by the text-based big data approaches. A thesis could create an Apache Spark RDD/Hadoop MapReduce input format for HDF5 files that takes locality of the HDFS distributed file system into account
  • Performance modelling (together with Esa Hyytiä) of big data systems (=local storage) in comparison to high-performance computing systems (=central storage)
  • Migrating Eclipse IDE functionality to an Xtext-based model-based IDE.
  • Model-based (Eclipse modeling tools) generation of TTCN-3 test case skeletons from TPlan test purpose descriptions.
  • Model-based (Eclipse Xtext + possibly a suitable model transformation) refactoring of TTCN-3 source code.
  • Traceability between requirements and UML models (connecting Eclipse ProR and Eclipse Papyrus tools using OSLC technology)
  • Generation of Jata-based Java test cases from TTCN-3 source code (=TTCN-3 to Jata compiler).
  • Model-based (Eclipse modeling tools) refactoring of UML models and diagrams.
  • Aspect-oriented testing: develop an aspect-oriented language extension for the test language TTCN-3
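To illustrate the tool-support idea from the test-design topic above: generating all combinations of test input data from equivalence classes and boundary values is essentially a Cartesian product. A minimal sketch in shell (the inputs -- an age with an assumed valid range of 18..120 and a country code -- are made-up examples, not from any concrete project):

```shell
#!/bin/sh
# Sketch: enumerate all combinations of boundary values and
# equivalence-class representatives for two hypothetical inputs.
for age in 17 18 65 120 121; do    # boundaries around 18..120 plus one nominal value
  for country in IS XX; do         # one valid and one invalid country code
    echo "test case: age=$age country=$country"
  done
done
```

A real tool would emit JUnit test methods (or feed a parameterised test) instead of plain text, but the combination logic is the same.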

Raspberry Pi 4: boot from USB with Ubuntu, ZFS

Helmut Neukirchen, 18. November 2020

First steps

When I was new to Raspberry Pi, I followed these German instructions.

Running Raspberry Pi on SD card and syslog wear

The default Raspberry Pi syslog logs to the normal file system, i.e. the SD card (if not using USB).
In order to log to RAM and write to the file system only when needed, you can use Log2RAM, either by adding it manually or via apt. Alternatively (I did not try it myself), use Zram-config.
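If you do not want an extra tool at all, a rough alternative is to mount /var/log as tmpfs via a line in /etc/fstab (a sketch; the size is an assumption, and be aware that all logs are lost on every reboot or crash):

```
tmpfs  /var/log  tmpfs  defaults,noatime,nosuid,mode=0755,size=64m  0 0
```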

But you might anyway want to use USB storage instead of an SD card.

Booting Raspberry Pi from USB

Raspberry Pi now supports booting from USB (installing the latest firmware does not harm: I did this by booting Raspberry Pi OS from SD card. Note that in contrast to Raspberry Pi models before the 4, the Raspberry Pi 4 stores the firmware in an EEPROM, not just as a file that is loaded at every boot from the FAT boot partition by the GPU firmware, as on Raspberry Pi <4). I then dd'ed the image from the SD card to USB mass storage.
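The clone step is a plain dd. A sketch, demonstrated here on scratch files; on the real system, the input would be the SD card and the output the USB drive (e.g. /dev/mmcblk0 and /dev/sda -- these device names are assumptions, so always double-check with lsblk before overwriting anything):

```shell
#!/bin/sh
# Stand-in for the SD card contents (real system: if=/dev/mmcblk0)
dd if=/dev/urandom of=sd.img bs=1k count=64 2>/dev/null
# The actual clone command (real system: of=/dev/sda)
dd if=sd.img of=usb.img bs=4M conv=fsync 2>/dev/null
cmp -s sd.img usb.img && echo "images identical"
```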

Booting Ubuntu from USB

If you want to go for Ubuntu (not just in beta like the 64-bit Raspberry Pi OS, but available as a stable 64-bit release with support for ZFS), the following is relevant:

Ubuntu for Raspberry Pi uses U-Boot as bootloader; however, U-Boot does not support booting from USB on the Raspberry Pi, only from SD card. So while Ubuntu 20.04 LTS works out of the box when booting from SD card, when I dd'ed the SD card onto a USB drive, booting failed because U-Boot could not load the kernel via USB.

Luckily, the bootloader that is part of the Raspberry Pi firmware can boot Ubuntu without U-Boot. As always, a FAT-formatted boot partition is needed that contains a couple of files in order to boot.

While U-Boot can load compressed (vmlinuz) kernel images and can load the kernel from an ext4 root filesystem, the Raspberry Pi bootloader firmware can only load uncompressed (vmlinux) kernel images and only from the FAT-based boot filesystem.

While the Ubuntu 20.04 LTS ARM64 image has a kernel on the FAT-based boot partition, it is unfortunately compressed (because U-Boot, which is assumed to load it, would be able to deal with that). Hence, you need to uncompress the kernel manually to allow the Raspberry Pi firmware bootloader to load and start the kernel.
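The decompression itself is just gunzip, since vmlinuz is a gzip-compressed image. Assuming the FAT boot partition is mounted at /boot/firmware (the mount point is an assumption), the real step would be `cd /boot/firmware && zcat vmlinuz > vmlinux`; demonstrated here on a scratch file:

```shell
#!/bin/sh
# Real system: cd /boot/firmware && zcat vmlinuz > vmlinux
echo "fake kernel image" | gzip > vmlinuz.demo   # stand-in for the compressed kernel
zcat vmlinuz.demo > vmlinux.demo                 # the actual decompression step
cat vmlinux.demo
```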

In addition, it seems that the .dat and .elf files that are part of the bootstrapping need to be the most recent ones.

Hence, I downloaded the whole Raspberry Pi firmware from GitHub (via the green Code button) and extracted the .dat and .elf files from the boot directory.

Finally, you need to change config.txt: add kernel=vmlinux and initramfs initrd.img followkernel in the [all] section, and comment out the [pi4] section.
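The resulting config.txt section would then look roughly as follows (a sketch based on the steps above; the exact commented-out [pi4] lines depend on what your image originally shipped):

```
[all]
kernel=vmlinux
initramfs initrd.img followkernel

#[pi4]
#kernel=uboot_rpi_4.bin
```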

That should be enough to boot Ubuntu from USB. My above steps are essentially based on https://eugenegrechko.com/blog/USB-Boot-Ubuntu-Server-20.04-on-Raspberry-Pi-4 where you find step-by-step instructions.

Note that when you later do a kernel update inside the booted Ubuntu, it might only update the kernel image on the ext4 root partition -- not on the FAT boot partition. In this case, you need to copy the kernel over. You also need to decompress it again.
(It should be possible to automate this; to get an idea, see: https://krdesigns.com/articles/Boot-raspbian-ubuntu-20.04-official-from-SSD-without-microsd or https://medium.com/@zsmahi/make-ubuntu-server-20-04-boot-from-an-ssd-on-raspberry-pi-4-33f15c66acd4.)

If you want to use the system headless (in fact, connecting a keyboard did not produce any input in my case), you can configure the network settings via the FAT-based boot partition: https://ubuntu.com/tutorials/how-to-install-ubuntu-on-your-raspberry-pi#3-wifi-or-ethernet

ZFS

Work in progress...

The ultimate goal is to have two drives as ZFS mirrors (RAID1) connected via USB.

(Be aware: 1. if the USB adapter claims that data has been written which in fact has not yet been written, ZFS may fail -- just like probably any journaling-based file system; 2. USB is not as stable as SATA, so an ODROID-HC4 or a Raspberry Pi 4 compute module with PCI-based SATA might be better, or a Helios64, which might in future even have ECC RAM. But at least the Raspberry Pi has the better ecosystem, and ZFS has some memory debug flag that does checksums for its RAM buffers.)

While https://www.nasbeery.de/ provides a very easy script to use ZFS, it still assumes an SD card for the boot and root filesystem. It would of course be better to have everything on the USB drive (and even use RAID).

As the Raspberry Pi bootloader can only access a FAT-based boot partition, we still need a FAT-based boot partition on the USB drive. According to the documentation, if the first probed USB drive does not have a boot partition, the next drive will be probed. So it should be possible to have some sort of redundancy here (but we need to manually take care that both FAT-based boot partitions are synced after each kernel update to have some sort of RAID1).

As Ubuntu should be able to have the root partition on ZFS (once the Raspberry Pi firmware bootloader has loaded the kernel from the FAT-based boot partition), it should be possible to use ZFS for the root partition (what size? 50GB?). The remainder could then be a ZFS data pool.
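The layout sketched above could then be realised roughly like this (an untested sketch -- device names, pool names, and sizes are all assumptions):

```shell
# Per-drive MBR layout: p1 = FAT boot, p2 = ZFS root, p3 = ZFS data, p4 = raw swap.
zpool create -o ashift=12 rpool mirror /dev/sda2 /dev/sdb2   # ~50 GB root pool
zpool create -o ashift=12 dpool mirror /dev/sda3 /dev/sdb3   # data pool from the remainder
```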

Note that if one of the RAID1 drives fails and needs to be replaced, the new drive might have slightly fewer sectors, so it is wise not to use all available space for the ZFS data pool. If we use a swap partition in addition anyway, we could use it to utilise the remaining space (and then have a slightly smaller swap partition on the replacement drive if it is smaller). The swap partition should not be on ZFS but a raw swap partition: Linux can either use multiple swap partitions, i.e. from all of the RAID drives -- or use only one and keep the others unused.
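Giving the raw swap partitions on both drives equal priority lets the kernel stripe swap across them; an /etc/fstab sketch (the device names are assumptions):

```
/dev/sda4  none  swap  sw,pri=1  0 0
/dev/sdb4  none  swap  sw,pri=1  0 0
```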

This means we still partition the mass storage instead of letting ZFS use it exclusively. The Raspberry Pi bootloader understands only the MBR partition format -- this might limit drive size to 2 TB.

The following web pages cover ZFS as root:

Compiling ZFS for Raspberry Pi OS (first part), switching to ZFS root (second part), including initramfs and kernel cmdline.txt

https://github.com/jrcichra/rpi-zfs-root

https://www.reddit.com/r/zfs/comments/ekl4e1/ubuntu_with_zfs_on_raspberry_pi_4/

USB adapters

I used USB to SATA adapters with the ASMedia ASM1153E as these work with Raspberry Pi and UASP.

TODO:
When checking dmesg after connecting a USB drive, it spat out some warnings that could be ignored.

TODO: performance tuning

https://www.jeffgeerling.com/blog/2020/raspberry-pi-usb-boot-uasp-trim-and-performance

https://www.jeffgeerling.com/blog/2020/enabling-trim-on-external-ssd-on-raspberry-pi

However, that in fact made everything slower (TODO: speed testing via hdparm, dd).
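For the speed testing, something along these lines should do (hdparm needs root and a block device, /dev/sda being an assumption; the dd part is safe as it only writes an ordinary file):

```shell
#!/bin/sh
# Buffered read speed (run as root on the real device):
#   hdparm -t /dev/sda
# Rough write-throughput check: dd reports the achieved rate at the end.
dd if=/dev/zero of=ddtest.bin bs=1M count=64 conv=fsync 2>&1 | tail -n 1
rm -f ddtest.bin
```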

Debian 10 Buster Linux on Thinkpad T14 AMD

Helmut Neukirchen, 12. November 2020

Update: the text below refers to Debian 10 "Buster". Now that Debian 11 "Bullseye" has been released, which has Linux kernel 5.10, things should work out of the box. I just did a dist-upgrade from Buster to Bullseye, which was the smoothest dist-upgrade that I ever had, i.e. no problems at all. Except that I had to do an apt-get install linux-image-amd64 to get the standard Bullseye kernel (my kernel manually installed from Buster confused VirtualBox, which complained about missing matching kernel header source files).

The Thinkpad T14 AMD is a very nice machine and everything works with Linux (I did not test the fingerprint reader and infrared camera, though). I opted for the T14 over the slimmer T14s, because the T14s has no full sized Ethernet port and it seems that (due to being slimmer) the cooling is not as good as with the T14.

Kernel 5.9 (or later) is a must to support all the hardware of the Thinkpad T14 AMD, but the 4.x kernel used by the installer of Debian Buster is sufficient to do the installation, except that WiFi does not work, so you need a wired Ethernet connection during installation.

To get the 5.9 kernel in Debian Buster, it is at the time of writing available from Sid (there are various ways to use packages from Sid in stable -- in the simplest case, download the .debs manually), and the following packages are needed:

  • linux-image-5.9*-amd64*
  • firmware-linux-free*
  • firmware-linux-nonfree*
  • firmware-misc-nonfree*
  • firmware-amd-graphics*
  • amd64-microcode* (Checking that the UEFI/BIOS is up to date before applying a microcode upgrade is recommended. But the update might anyway be blocked via /etc/modprobe.d/amd64-microcode-blacklist.conf)

In principle, the kernel header files are also nice to have, but installing them may involve pulling in a completely new GCC from Sid:

  • linux-headers-5.9*-amd64*
  • linux-headers-5.9*-common*

Note that you will not automatically get security updates if you update the kernel manually. You may want to give APT pinning a try to pull only the kernel from Sid.
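An APT pinning sketch for that (the file name and priorities are assumptions; see apt_preferences(5) for details) -- put something like this into /etc/apt/preferences.d/sid-kernel after adding Sid as an additional APT source:

```
Package: linux-image-* linux-headers-*
Pin: release a=unstable
Pin-Priority: 500

Package: *
Pin: release a=unstable
Pin-Priority: 100
```

With priority 100, other Sid packages stay installable on request but never override stable; the kernel packages at 500 track Sid.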

Update: a Buster backport of a more recent kernel is available now; install via apt-get install -t buster-backports linux-image-amd64 (assuming you have backports configured). The backports also provide all the above packages, including the Linux headers.

Note that for the Intel AX200 WiFi to work, you also need the latest firmware-iwlwifi (the one from buster-backports is enough -- the one from Sid should not be necessary):
apt-get install --target-release=buster-backports firmware-iwlwifi (assuming that backports have been configured as an APT source). Initially, I had to download some files directly from Intel, but it seems that the buster-backports package now contains the missing files.

Graphics support was straightforward from Debian stable:

  • xserver-xorg-video-amdgpu
  • firmware-amd-graphics*

I cannot remember whether I installed them explicitly or whether they just got installed as dependencies of the above AMDGPU package: I have a couple of Mesa and DRM packages installed (see also the Debian Howto on AMDGPU and the Debian page on video acceleration), e.g.:

  • libgl1-mesa-dri
  • libglx-mesa0
  • mesa-vulkan-drivers
  • mesa-va-drivers (for video decoding)
  • vdpau-driver-all (for video decoding)
  • libdrm-amdgpu

(The whole Linux graphics stack consists of more components that you may want to check.)

Check whether MESA is activated.

Also, some BIOS/UEFI tweaking was needed, e.g. changing the sleep state from Windows to Linux mode (and if you use an unsigned kernel: disable Secure Boot).

There is one thing concerning suspend and resume, though: my Jabra USB headset stops working after resume -- manually unplugging and replugging the USB plug solves this problem, and remarkably, other USB devices (keyboard and mouse) work after resume. In the end, I wrote a script that removes and re-installs the USB module after resume and placed it as file /etc/pm/sleep.d/10-usbquirk.sh (do a chmod a+x). The content is as follows:

#!/bin/bash
# USB headset not working after suspend, so try reloading the USB module (other USB devices work, though)
case "${1}" in
hibernate)
;;

suspend)
;;

thaw)
rmmod xhci_hcd
sleep 0.5
modprobe xhci_hcd
;;

resume)
rmmod xhci_hcd
sleep 0.5
modprobe xhci_hcd
;;

*)
;;
esac

(Somehow the indentation got removed by WordPress -- but in fact indentation does not matter for the Bash script.)

I also have the UltraDock docking station: it uses the two USB-C ports of the T14 plus a third, proprietary connector. AFAIK the latter is for Ethernet, even though it seems that it does not simply mechanically extend the Ethernet port, but the docking station has an extra Ethernet chip built in. And I guess that one of the USB-C ports works in a mode where not USB-C itself is used but the video signals are transmitted directly, so it is different from a pure USB-C dock. And indeed, the dock seems to have its own MAC address (I did not figure out how to find out what that MAC address is -- I probably need to connect it to my smart switch). While the BIOS has a MAC passthrough setting (which is enabled), under Linux it does not get passed through.

Otherwise, the dock works without any extra configuration with Debian, including dock and undock (I use a 4k screen via one of the HDMI connectors of the UltraDock and a couple of USB A ports of the UltraDock).

I still dislike no longer having the bottom dock connector used by Lenovo in the past: you can no longer simply "throw" the laptop onto the dock, but need significant force to insert the three connectors from the side, and I am not sure how many docking cycles the USB-C connectors last from a mechanical point of view.

I also tried a 4K screen (Lenovo P32p) that has USB-C for power delivery and video and even serves as a USB hub as well (i.e. with one USB-C cable, it powers the laptop, transmits the video, and mouse and keyboard are attached to the screen; it even has Ethernet that then goes via the same, single USB-C cable). This works nicely and in fact replaces a dock. However, in the beginning I always needed to reboot the system in order to make the video work via USB-C. For some reason this problem magically disappeared.

TODO: Check Thinkpad-specific packages (I currently do not have them installed, and do not see any hardware support missing).

Update: there is now also a Debian Wiki page on the Thinkpad T14.

Master in Computer Science, Software Engineering, or Computational Engineering

Helmut Neukirchen, 24. April 2020

We have some overview videos for students who are interested in enrolling in our Master (M.Sc.) programmes in Computer Science (Tölvunarfræði), Software Engineering (Hugbúnaðarverkfræði), or Computational Engineering (Reikniverkfræði):

Master in Computer Science (Meistaranám Tölvunarfræði)

Master in Software Engineering (Meistaranám Hugbúnaðarverkfræði)

Master in Computational Engineering (Meistaranám Reikniverkfræði)

From Exascale Supercomputing to FAIR data

Helmut Neukirchen, 17. April 2020

I am giving a talk on the H2020 projects DEEP-EST and EOSC-Nordic (including EUDAT's B2FIND and B2SHARE) at the University of Iceland's Engineering Research Institute seminar: From Exascale Supercomputing to FAIR data -- or: Why (almost) everyone uses GPUs and how to get a DOI for your dataset.

For zoom link, have a look at the official announcement.

The slides are available via DOI:10.23728/b2share.a6a4682fe1f74b32b8b67948f7ce6965
and a video recording of the presentation is available as well.

Protected: Connection from home to Heilsugæslan

Helmut Neukirchen, 23. March 2020

This content is password protected. To view it please enter your password below:

Other means of remote communication with students

Helmut Neukirchen, 15. March 2020

Discussion forum: Piazza

Most Computer Science teachers have for years been using the discussion forum Piazza.com, which is tailored for university courses. Students can ask questions there (e.g. anonymously, to lower the bar) concerning lectures, assignments, organisational issues, etc.

Instant messenger: Riot

For your PhD students, you may want to have a means of remote communication between email and phone: a chat/instant messenger.

  • If you are fine with awkward-to-use software and have money (or your institution uses tax money to pay), you can use Microsoft Teams.
    • Caveat: Microsoft, one of the biggest cloud computing providers on Earth, cannot deal with increased load due to COVID: check status
    • If you just need the chat and all your team members have access anyway, it is fastest to use Microsoft Teams for this.
  • If you sold your soul (and data) to, e.g., Facebook, you can of course use their messengers.
  • If you like easy-to-use software and have money, you can use Slack.
  • If you like easy-to-use software and are convinced that free and open-source software is better, you use Riot: https://riot.im. Riot runs in the browser and has mobile apps.

As with all instant messengers: disable instant notifications -- they distract too much. Instead, look at messages only when you take a break/switch between tasks. If your PhD students have something really urgent (the lab is on fire), they can still call you.

Using video conferencing tools for remote teaching

Helmut Neukirchen, 15. March 2020

Note: re-visit this page from time to time as experience from teaching in the Computer Science department is added.

Students seem to be satisfied with our Zoom approach (feedback, paraphrased for legal reasons):

The teachers in the department of Computer Science handled the shift
to remote teaching very well. Every teacher did her/his best and the
Zoom lectures are going well. Teachers have a positive attitude concerning
the new teaching style which is very encouraging that we will be able to tackle the coming weeks.

I attended different classes: both Teams and Zoom were tried and Zoom is
better. In some classes it is even better than showing up.

Note concerning Microsoft Teams

It is irresponsible that university teachers are told (by people who never delivered remote lectures for university courses in practice) to use Microsoft Teams for remote lecturing: Teams was never intended for remote lecturing (Microsoft Word is called Word because it is a word processor -- does Microsoft Teams sound like a synonym for remote teaching?) and is therefore simply the wrong tool (and in addition awkward to use: you need a 3-hour course to learn it) -- rather use Zoom! Teams is also the wrong tool for ad-hoc meetings with people who are not part of your team! There are better tools for small, but easy-to-use ad-hoc video meetings.

Whatever tool you choose, remember to make it accessible to everyone. If it does not use standards such as HTML (=runs in the browser on whatever system, which typically even makes it usable by the visually impaired), take care that your tool can be used via Linux, Mac OS X, Microsoft Windows and also mobile platforms, i.e. Android and iOS. (The advantage of the mobile platforms is that audio always works there, whereas Linux, Mac OS X, or Microsoft Windows sometimes cause trouble with audio devices, e.g. students do not have a microphone for their desktop computer to ask questions, but their mobile phone has a microphone for sure.)

Zoom

Our premier tool for giving whole interactive remote lectures is Zoom. Request a full license from our UTS IT department to get rid of the 40-minute limit of the free version (and, e.g., be able to record and automatically upload to Panopto).

  • Highly recommended reading: detailed slides on using Zoom for lecturing (check for updates from time to time) by my colleague Matthias Book.
    • The setup is as follows: you do screen sharing to share your computer screen that shows, e.g., slides, and you use your webcam to record yourself, e.g. in front of a whiteboard (use a good, thick black pen). If you have a separate webcam, you can also point it at a sheet of paper on which you write. Instead of a separate webcam, there is also software to connect your mobile phone to your computer and use it as a webcam.
      If you record to the Zoom cloud (non-free license only), you get these as two separate videos that you can then combine via Panopto (Create, then Build a session).
    • If you record only one stream (e.g. recording not to the cloud but via the Zoom client itself), then take care to disable screen sharing while you want to show something with the camera, e.g. the whiteboard. (Otherwise, the single created video will contain only the screen sharing.)
    • An example where only one stream is recorded can be found here. Note that this has been recorded as a single stream only -- while watching live via Zoom, students can choose whether to see the webcam or the screen share in full size. But when recording locally, the webcam view will only be a small thumbnail in the recorded video while screen sharing is active. So deactivate screen sharing for those parts where you want to have the webcam view recorded in full size.
    • Matthias is currently working on a tutorial video (check this page later again).
  • Some practices of using Zoom for lecturing (time will tell whether these are best practices)

Jitsi Meet and Whereby running completely in the browser for easy-to-use smaller meetings

While Zoom is great for many attendants, you need to download, install, and start a client (Linux, Mac OS X, Microsoft Windows) or app (Android, iOS). While you can enable running it in the browser (e.g. set a checkmark when scheduling), it crashed in my browser after a few minutes.

For smaller meetings, e.g. meeting ad-hoc with a single student, consultation hours, or a teaching assistant meeting with only a handful of students, you may want to avoid the hassle of Zoom's creating/scheduling a videoconference session and downloading a client or app.

In this case, you can use video conferencing tools that simply run in a browser and do not need any pre-scheduling of videoconference sessions. There are mainly two alternatives that each have their strengths and weaknesses: Whereby and Jitsi Meet.

Please recommend these tools also to students in order to still talk to each other daily in learning groups or just to have fun.

Jitsi Meet for easiest and somewhat bigger meetings that run in Chrome browser or special mobile apps

  • Pro: no account needed, no artificial limit of 4 users, open-source (=possible to install your own server), has some extra features (recording, blurring the background, etc.)
  • Con: browsers other than Chrome are not well supported (they work, but sometimes video freezes); mobile browsers are not supported at all, but apps are available; open-source (=if you use the free server provided by the open-source project, it may not handle a lot of load and might be limited to 1.5 h sessions)

Jitsi Meet runs in the desktop browser (apps on mobile platforms) and you need no account, i.e. you can immediately start by just agreeing on a URL and typing it in!

To give it a try, the base URL is https://meet.jit.si/
just agree on a room name, append the room name to the base URL, and enter it into your browser, e.g. https://meet.jit.si/MySuperDuperMeetingRoom

There is no easier way to do a videoconference than with Jitsi Meet! (At least if all use Chrome as browser/have the mobile app installed.)

ensemble.scaleway.com: French Jitsi Meet installation

Should the above Jitsi Meet server be overloaded (and you are OK with French as the default of the changeable user-interface language), you can use https://ensemble.scaleway.com/.

There, click on Lancer une réunion (start a meeting) to get a random server and room name. If you like, you can change that room name and even always use the same randomly assigned server name to end up with a fixed URL, e.g.: https://v-4110.ensemble.scaleway.com/MySuperDuperMeetingRoom

Once running, you can switch the language: click on the three vertical dots (bottom right corner), then on the gear wheel for the settings, then on the Plus (more) tab, and at Langue (language), choose Anglais (English).

Whereby for easy small meetings that run in any browser

  • Pro: works with almost all browsers; possible to lock a room and require students to knock so that you let them enter the room, or to put them on hold if you are still busy with other students (e.g. consultation-time queue)
  • Con: account needed for the host; free version artificially limited to 4 users.

If you do not have more than 4 participants (including you), whereby.com runs completely in the browser (all common desktop browsers; on mobile platforms, the standard browsers).

The one who hosts the conference room needs an account; everyone who joins using the URL does not need an account.

Panopto / Recording for Panopto with a (Linux) screen recorder

  • HÍ has info on Panopto
  • While Panopto allows a live webcast, it has a delay which prevents interaction with students. So we do not use this feature, but use Zoom for live interaction, record that with Zoom, and upload the Zoom video to Panopto.
  • Linux users can use a screen recorder and upload the created file to Panopto (note: Panopto even does optical character recognition of every pixel that is recorded, so your video becomes searchable). If you use Zoom anyway (the client works nicely on Linux), you can also let Zoom record to a file and upload that to Panopto.
  • Research has shown that screen recordings where you still see a video of the presenter are viewed more than those without the presenter. One way to achieve this is to record via Zoom, which then embeds the presenter video into the recorded shared screen (or enable "Record active speaker, gallery view and shared screen separately" in the settings on the Zoom web page to get separate files). Another way would be to use https://studio.opencast.org/, a purely web-based recording software that allows recording camera and screen into separate videos which you can then upload to Panopto by creating a single session that combines the two videos.
  • If you recorded already in the past and want to re-use recordings now:
    • be aware that Panopto orders by default by recording date. But you can re-order them and ask students not to order by recording date, but by "order". (Unfortunately, UGLA shows videos always ordered by recording date -- so make students aware of that.)
    • Note that Panopto sometimes forgets the encoding of old videos (black screen, only audio), but by re-processing it you can get it back: see this video for problem and solution description (audio is distorted: audio recording level was set too high).

Some Best Practices

We have a daily short videoconference to exchange experience in remote teaching (these videoconferences are via Zoom, so at the same time they are practice in using Zoom; furthermore, instead of being home alone with family, it is good to see colleagues), and after one week of remote teaching we had a separate videoconference with students to get feedback from them and to let them know that our department is with them.

Using Gradescope to easily transform your paper-based homework into online assignments

Helmut Neukirchen, 15. March 2020

Update: now that everyone has Canvas, you can easily sync Canvas and Gradescope with respect to student roster and assignments:
when creating an assignment in Canvas, select as assignment type/tegund skila the type external/ytra, then type in "Gradescope", let Canvas search for Gradescope, and select "Gradescope" from the results list.

See how to sync Canvas and Gradescope.
(And ignore those parts below that only apply when you do not sync with Canvas...)

The Computer Science department of the Iðnaðarverkfræði-, vélaverkfræði- og tölvunarfræðideild (IVT) of Verkfræði- og náttúruvísindasvið (VON) has already been using Gradescope since 2015 for easy online assignment submission by students and for extremely convenient and time-saving online grading and feedback by teachers.

Both students and teachers love it (e.g. in the 2019 IVT self-evaluation, the students explicitly requested more usage of Gradescope and, independently, the teachers requested that IVT deild spend money on a full license). IVT deild has paid for a full-feature institutional license that everyone who registers with an hi.is email can use for free in 2020. (Update: due to COVID-19, Gradescope has made all features available for free anyway.)

The approach of Gradescope is that you can keep your traditional approach (so changing to Gradescope is really easy):

  • For assignments/homework during the semester, students upload their solutions on their own: they either take a photo of their paper solution or (as most students typeset their solutions electronically anyway) upload a PDF of their solution, and mark on their own where on the uploaded pages each answer can be found.
  • Final exams are done as usual on paper, but the teacher defines boxes where the answers are to be filled in so that Gradescope knows where to look for them; the teacher later scans in and uploads the exam solutions to be able to use the convenient online grading features.
  • In addition, pure online assignments/exams are supported, i.e. instead of uploading a solution, students answer some web form.

Image copied from https://www.gradescope.com/

If you have any questions, you are welcome to contact Helmut Neukirchen. But first have a look at the info below:

Demo videos (each 2-3 minutes):

Teacher creating assignment

Student submitting assignment

Teacher grading submission

Artificial Intelligence (AI)-based image recognition to group solutions that look similar and should thus all get the same grading

E.g., in a course with 40 students, grade the 20 completely correct assignments with one click, the 10 assignments that all make the same mistake with one click, and the 5 empty assignments with one click, so that only the remaining 5 assignments need your attention.

Gradescope is easy

As you see, this is all very easy and a natural (but faster) extension of your paper-based assignment workflow.

(Note that my experience is based on Computer Science assignments and exams, where answers typically fit on one page, but grading is fun even for programming assignments where source code submissions are long. If you scroll one page down on https://www.gradescope.com, you will also see Gradescope being applied to disciplines other than Computer Science.)

As long as not every course uses Canvas, students need to be manually added to their Gradescope course.

  • Either let students enroll themselves by giving them an entry code (Gradescope generates it: everyone can register for your course with this code),
  • or you as teacher manually upload the list of students in comma-separated values (CSV) format: just export your student list in UGLA in Microsoft Excel format, open it in a spreadsheet, export it there as CSV and upload it to Gradescope (double-check that names containing special Icelandic characters are correct, i.e. try different CSV exports, such as a Unicode character set).
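If your spreadsheet insists on a legacy encoding when exporting CSV, you can also re-encode the roster file to UTF-8 yourself before uploading. The following sketch is only an illustration: the file names, column headers, and the assumption that the spreadsheet exported Windows-1252 (cp1252) are hypothetical, not the actual UGLA export format.

```python
# Hypothetical sketch: re-encode a student roster CSV to UTF-8 so that
# Icelandic characters (þ, ð, æ, ö, ...) survive the Gradescope upload.
import csv


def reencode_roster(src_path: str, dst_path: str,
                    src_encoding: str = "cp1252") -> int:
    """Read a roster CSV in a legacy encoding and rewrite it as UTF-8.

    Returns the number of student rows written (excluding the header).
    """
    with open(src_path, newline="", encoding=src_encoding) as src:
        rows = list(csv.reader(src))
    with open(dst_path, "w", newline="", encoding="utf-8") as dst:
        csv.writer(dst).writerows(rows)
    return max(len(rows) - 1, 0)


if __name__ == "__main__":
    # Example: a roster with an Icelandic name, saved in cp1252 encoding.
    with open("roster_cp1252.csv", "w", newline="", encoding="cp1252") as f:
        f.write("Name,Email\nÞórður Ævarsson,thordur@hi.is\n")
    written = reencode_roster("roster_cp1252.csv", "roster_utf8.csv")
    print(written)  # number of student rows re-encoded
```

After this, opening roster_utf8.csv in a text editor should show the Icelandic names intact; if they are already garbled in the source file, try exporting again with a different character set as suggested above.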

Teacher adds students to course roster

Entry code for students to self-enroll (no video, screenshot only)

You can also add dæmakennarar/teaching assistants (TAs) to a course to let them grade using Gradescope: you just need to clarify who grades what, or create separate courses for each TA. (While Gradescope supports the notion of sections (= dæmatími groups), sections can currently only be set when populating the roster via CSV, not when students register web-based via entry code or when a teacher later adds single students.)

Getting started

If you want to give Gradescope a try, just go to https://www.gradescope.com, sign up (select University of Iceland and use your hi.is email address), and create a dummy course and assignment. If you like, you can also add a dummy student using your private email address and play around.

The above are only the most basic features of Gradescope; for more, see https://www.gradescope.com.

For your info: IVT deild has paid for the Institution license, i.e. you have all features. (Except for the integration with Canvas, which we can only do next semester when all courses use Canvas.)

While we paid for 1500 students only, we are allowed to have as many students as we need in 2020 (in 2021, we might then have to pay for the number of students of 2020, so either HÍ as a whole adopts Gradescope or IVT deild convinces Gradescope that 2020 was exceptional -- they started giving out free licenses because of COVID-19 anyway).

Computer Science department and DEEP-EST project at UTmessan 2020, Iceland's biggest IT fair

Helmut Neukirchen, 10. February 2020

Our new colleague Morris Riedel gave a presentation on Quantum Computing (slides / video) on 7. February 2020 at UTmessan 2020, Iceland's biggest IT fair. In addition, the Computer Science department ran a booth on the public visitor day (8. February 2020): besides student projects, we showcased research projects, e.g. DEEP-EST.

The DEEP-EST project

For showcasing the machine learning that we do in the DEEP-EST project, we offer a web page that allows you to use the camera of your smartphone (or laptop) to detect objects in real-time. While neural networks are still best trained on a supercomputer, such as DEEP-EST with its Data Analysis Module, the trained neural network even runs in the browser of a smartphone.


Just open the following web page and allow your browser to use the camera: https://nvndr.csb.app/.

(Allow a few seconds for loading the trained model and initialisation.)

The approach used is the Single Shot Detector (SSD) (the percentage shows how sure the neural network is about the classification) with the MobileNet neural network architecture. The dataset used for training is COCO (Common Objects in Context), i.e. only objects of the labeled object classes contained in COCO will be detected. The Javascript code running in your browser uses Tensorflow Lite and its Object Detection API and model zoo.
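To illustrate what an SSD-style detector hands back and how the displayed percentage is used, here is a hedged Python sketch: each detection is a bounding box, a COCO class label, and a confidence score, and detections below a threshold are filtered out before display. The detection tuples are mock data for illustration; they are not the actual output of the demo page.

```python
# Sketch: filtering SSD-style detections by confidence score.
# Each detection is (box, label, score): box is (x_min, y_min, x_max, y_max)
# in relative image coordinates, label is a COCO class name, and score is
# the network's confidence in [0, 1] (shown as a percentage in the demo).

def filter_detections(detections, min_score=0.5):
    """Keep only detections whose confidence score reaches min_score."""
    return [d for d in detections if d[2] >= min_score]


if __name__ == "__main__":
    # Mock detections, invented for this example.
    mock = [
        ((0.1, 0.2, 0.4, 0.8), "person", 0.93),  # confident: keep
        ((0.5, 0.1, 0.7, 0.3), "cat",    0.41),  # below threshold: drop
        ((0.2, 0.5, 0.9, 0.9), "chair",  0.67),  # confident: keep
    ]
    for box, label, score in filter_detections(mock):
        print(f"{label}: {score:.0%}")
```

Raising min_score trades missed objects for fewer false positives; the demo's on-screen percentages correspond to the score field here.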

If you want to learn more about DEEP-EST, have a look at the poster below (click on the picture for the PDF version):

PDF of DEEP-EST poster