Category: Research

Computer Science department and DEEP-EST project at UTmessan 2020, Iceland's biggest IT fair

Helmut Neukirchen, 10. February 2020

Our new colleague Morris Riedel gave a presentation on Quantum Computing (slides / video) at UTmessan 2020, Iceland's biggest IT fair, on 7. February 2020. In addition, the Computer Science Department ran a booth on the public visitor day (8. February 2020): besides student projects, we showcased research projects, e.g. DEEP-EST.

The DEEP-EST project

To showcase the machine learning that we do in the DEEP-EST project, we offer a web page that allows you to use the camera of your smartphone (or laptop) to detect objects in real time. While neural networks are still best trained on a supercomputer, such as DEEP-EST with its Data Analysis Module, the trained neural network even runs in the browser of a smartphone.

https://nvndr.csb.app/

Just open the following web page and allow your browser to use the camera: https://nvndr.csb.app/.

(Allow a few seconds for loading the trained model and initialisation.)

The approach used is a Single Shot Detector (SSD) based on the MobileNet neural network architecture; the percentage shows how confident the neural network is about the classification. The dataset used for training is COCO (Common Objects in Context), i.e. only objects of the object classes labelled in COCO will be detected. The JavaScript code that runs in your browser uses TensorFlow.js together with a model from the TensorFlow Object Detection API and its model zoo.
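If you want to play with such a model yourself, the following minimal Python sketch does the same kind of detection on a workstation. It is not the demo's actual code; it assumes TensorFlow 2, the tensorflow_hub package, and the pre-trained SSD MobileNet v2 COCO model from TensorFlow Hub, and the file name photo.jpg is a placeholder:

    import tensorflow as tf
    import tensorflow_hub as hub

    # Load a pre-trained SSD MobileNet v2 detector (trained on COCO) from TensorFlow Hub.
    detector = hub.load("https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2")

    # Read an example image as a uint8 tensor of shape [1, height, width, 3],
    # which is the input format this detector expects.
    image = tf.io.decode_jpeg(tf.io.read_file("photo.jpg"), channels=3)
    result = detector(tf.expand_dims(image, axis=0))

    # The detector returns COCO class ids, confidence scores, and bounding boxes.
    scores = result["detection_scores"][0].numpy()
    classes = result["detection_classes"][0].numpy()
    boxes = result["detection_boxes"][0].numpy()  # [ymin, xmin, ymax, xmax], relative

    for score, cls, box in zip(scores, classes, boxes):
        if score > 0.5:  # report only reasonably confident detections
            print(f"COCO class {int(cls)} with confidence {score:.0%} at {box}")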

If you want to learn more about DEEP-EST, have a look at the poster below (click on the picture for the PDF version):

PDF of DEEP-EST poster

Ranking Journals and Conferences in Supercomputing and Data Science

Helmut Neukirchen, 22. November 2019

Many academics insist that journals are better than conferences; e.g. some PhD programmes have unwritten rules that a PhD thesis needs to involve at least one journal publication. This can be a real problem: some journals take 1.5 years from submission to publication; add to this another 1.5 years for a PhD student to produce the first results worth publishing in a journal or top conference, and this becomes almost impossible within 3 years of PhD study.

In Computer Science, some conferences are as hard to get into as journals, or even harder, e.g. in terms of acceptance rates (which, however, also depend on other factors: a lot of poor submissions automatically leads to a low acceptance rate). Also, Computer Science is a very fast-developing field, so results would often be outdated after 1.5 years; the far shorter publication cycles therefore make conferences far more attractive.

As an example, below are two rankings (based on impact, i.e. citation metrics such as the h-index) that show that Computer Science conferences are of as high quality as journals, or even higher. Of course, you can always find conferences (but also journals) that have a low impact: therefore, instead of claiming that journals are in general better than conferences, you always need to look at each particular conference, but also at each particular journal (acceptance rates are missing in these lists -- they would be nice to compare, but this data is tedious to collect):

PhD Defense Standards-based Models and Architectures to Automate Scalable and Distributed Data Processing and Analysis

Helmut Neukirchen, 7. October 2019

Shahbaz Memon successfully defended his PhD thesis in Computer Science on Standards-based Models and Architectures to Automate Scalable and Distributed Data Processing and Analysis. The thesis covers Scientific Workflows and middlewares for High-Performance Computing and High-Throughput Computing.

PhD defense announcement

This PhD is an example of the collaboration between the Faculty of Industrial Engineering, Mechanical Engineering and Computer Science and Jülich Supercomputing Centre (JSC).

PhD candidate, opponents, dean, and PhD committee

Members of the PhD committee were Morris Riedel, Helmut Neukirchen, and Matthias Book; opponents were Ramin Yahyapour and Robert Lovas. The head of faculty, Rúnar Unnþórsson, steered the defense. While some cultural diversity was involved, we need to improve on gender diversity! More photos can be found on flickr.

European Researchers' Night: From the next-generation supercomputer DEEP-EST to your smartphone -- real-time object detection using neural networks

Helmut Neukirchen, 28. September 2019

The DEEP-EST research project is at Vísindavaka, part of the European Researchers' Night, in Reykjavik on 28. September 2019.

DEEP-EST Booth at European Researchers' Night

Use the camera of your smartphone to detect objects in real time. While neural networks are still best trained on a supercomputer, such as DEEP-EST with its Data Analysis Module, the trained neural network even runs in the browser of a smartphone. Bring your smartphone and objects such as apples, bananas, or teddy bears and let your smartphone detect these objects.

https://nvndr.csb.app/

Just open the following web page and allow your browser to use the camera: https://nvndr.csb.app/.

(Allow a few seconds for loading the trained model and initialisation.)

The approach used is a Single Shot Detector (SSD) based on the MobileNet neural network architecture; the percentage shows how confident the neural network is about the classification. The dataset used for training is COCO (Common Objects in Context), i.e. only objects of the object classes labelled in COCO will be detected. The JavaScript code that runs in your browser uses TensorFlow.js together with a model from the TensorFlow Object Detection API and its model zoo.

If you want to learn more about DEEP-EST, have a look at the poster below (click for the PDF version):

PDF of DEEP-EST poster

Research project European Open Science Cloud (EOSC)-Nordic starting

Helmut Neukirchen, 1. September 2019

The University of Iceland was part of a consortium that successfully applied for funding from the European Horizon 2020 research programme with the European Open Science Cloud (EOSC)-centric proposal EOSC-Nordic.

EOSC-Nordic aims to foster and advance the take-up of the European Open Science Cloud (EOSC) at the Nordic level by coordinating the EOSC-relevant initiatives taking place in Finland, Sweden, Norway, Denmark, Iceland, Estonia, Latvia, Lithuania, the Netherlands, and Germany. It aims to facilitate the coordination of EOSC-relevant initiatives within the Nordic and Baltic countries and to exploit synergies to achieve greater harmonisation of policy and service provisioning across these countries, in compliance with agreed EOSC standards and practices. By doing so, the project will seek to establish the Nordic and Baltic countries as frontrunners in the take-up of the EOSC concept, principles, and approach.

EOSC-Nordic brings together a strong consortium including e-Infrastructure providers, research-performing organisations, and expert networks, with national mandates with regard to the provision of research services and open science policy, wide experience of engaging with the research community and mobilising national governments, funding agencies, international bodies, and global initiatives, and high-level experts on EOSC strategic matters.

A successful EOSC-Nordic will reinforce the Nordic research area's capability and competitiveness, create the profile of a leading knowledge-based region, increase the ability of the region to attract talent and investments, enhance its appeal as a partner in cooperation, and strengthen the Nordic region and its efforts in the overall EOSC through the creation of a cross-border cooperation model for Europe.

The project is coordinated by the Nordic e-Infrastructure Collaboration (NeIC), and the University of Iceland is one of the project participants. The University of Iceland's diverse team is led by Ebba Þóra Hvannberg. Helmut Neukirchen and Morris Riedel contribute their knowledge with respect to e-Science, such as scalable, parallel machine learning, scientific workflows, and data federation. In addition to these researchers from the University's Computer Science department, experts from other departments of the University of Iceland contribute to EOSC-Nordic.

The project runs from 1. September 2019 to 31. August 2022. More information can be found on the EOSC-Nordic web page and also on my local page covering this research project.

EOSC Partners Group Photo

12th Nordic Workshop on Multi-Core Computing (MCC2019)

Helmut Neukirchen, 30. August 2019

The objective of MCC is to bring together Nordic researchers and practitioners from academia and industry to present and discuss recent work in the area of multi-core computing. This year's edition is hosted by Blekinge Institute of Technology in Karlskrona, Sweden.

The scope of the workshop is both hardware and software aspects of multi-core computing, including design and development as well as practical usage of systems. The topics of interest include, but are not limited to, the following:

Architecture of multi-core processors, GPUs, accelerators, heterogeneous systems, memory systems, interconnects and on-chip networks
Parallel programming models, languages, environments
Parallel algorithms and applications
Compiler optimizations and techniques for multi-core systems
Hardware/software design trade-offs in multi-core systems
Operating system, middleware, and run-time system support for multi-core systems
Correctness and performance analysis of parallel hardware and software
Tools and methods for development and evaluation of multi-core systems

There are two types of papers eligible for submission. The first type is original research work and the second type is work already published in 2018 or later.

Participants submitting original work are asked to send an electronic version of the paper that does not exceed four pages using the ACM proceedings format, http://www.acm.org/publications/proceedings-template, to https://easychair.org/conferences/?conf=mcc20190.

The same URL is to be used should you want to present an already published paper as described above. In that case, you need to clearly specify that the paper is already published and where the paper has been published.

No proceedings will be distributed. Presenting a contribution does not preclude subsequent publication in conferences or journals.

The conference web page is https://sites.google.com/view/mcc2019.

Important dates

Sep. 29 2019: Submission deadline
Oct. 27 2019: Author notification
Nov. 18 2019: Registration deadline
Nov. 27-28 2019: MCC Workshop

PhD Defense GraphTyper: A pangenome method for identifying sequence variants at a population-scale

Helmut Neukirchen, 26. June 2019

Hannes Pétur Eggertsson successfully defended his PhD thesis in Computer Science on GraphTyper: A pangenome method for identifying sequence variants at a population-scale. I had the honor to steer this defense in my role as vice head of faculty.

As you may notice, only men appear here. We need to improve on this! More pictures can be found on flickr.

Datasets for DBSCAN evaluation

Helmut Neukirchen, 20. June 2019

For evaluating implementations of the popular DBSCAN clustering algorithm, various publications use several datasets. Pointers to these datasets and information on parameters (e.g. normalisation, epsilon, and minpts) are collected here. You are welcome to contact me if you have further (big) datasets that are good benchmarks for DBSCAN.
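As a concrete illustration of how these parameters are used, below is a minimal sketch based on scikit-learn's DBSCAN (the publications below use their own, more scalable implementations); the file name and the normalisation/parameter values shown are placeholders -- take the actual ones from the respective publication:

    import numpy as np
    from sklearn.cluster import DBSCAN
    from sklearn.preprocessing import MinMaxScaler

    # Load a benchmark dataset, one point per row ("dataset.csv" is a placeholder).
    points = np.loadtxt("dataset.csv", delimiter=",")

    # Some publications normalise every dimension first,
    # e.g. to [0, 10^5] as Gan, Tao do below.
    points = MinMaxScaler(feature_range=(0, 1e5)).fit_transform(points)

    # Run DBSCAN with the epsilon and minpts values from the publication.
    labels = DBSCAN(eps=5000, min_samples=100).fit_predict(points)

    # Label -1 marks noise points; all other labels are cluster ids.
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
    print(f"{n_clusters} clusters, {np.sum(labels == -1)} noise points")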

Sarma et al.: μDBSCAN: An Exact Scalable DBSCAN Algorithm for Big Data Exploiting Spatial Locality

TODO: check in detail the datasets used; some are the datasets used in some of the other publications below, but: "In addition, we have also used a few other real datasets: 3D Road Network (3DSRN) [32] contains vechicular GPS data; Household Power (HHP*) and KDDBIO145K (KDDB*) datasets are borrowed from UCI Repository [33]."

Gan, Tao: DBSCAN Revisited: Mis-Claim, Un-Fixability, and Approximation

Data normalised to [0, 10^5] for every dimension.

MinPts = 100, Epsilon = 5000 and higher. (Note: this epsilon value is far too high, turning almost the entire dataset into a single cluster -- the mis-claim is on their side!)

Their preprocessed datasets

  • PAMAP2 (3,850,505 4D points),
  • Farm (3,627,086 5D points),
  • Household (2,049,280 7D points)

can be obtained from their webpage.

Mai, Assent, Jacobsen, Storgaard Dieu: Anytime parallel density-based clustering

  • The same Household dataset as used by Gan, Tao.
  • PAMAP2 is also used, but claimed to comprise 974,479 39D points, whereas Gan and Tao reduced it to 4 dimensions using PCA and claim 3,850,505 points.
  • In addition, the UCI Gas Sensor dataset by Fonollosa et al. is used: 4,208,261 16D points (DETAILS NOT PROVIDED IN PAPER).

Kriegel, Schubert, Zimek: The (black) art of runtime evaluation: Are we comparing algorithms or implementations?

  • The same PAMAP2, Farm, and Household datasets as used by Gan, Tao (including also smaller epsilon values, as these make more sense).
  • In addition, for higher-dimensional data, the Amsterdam Library of Object Images (ALOI) dataset from Geusebroek et al. is used, namely the 110250 HSV/HSB color histograms provided on the ELKI Multi-View Data Sets webpage. Specifically, the eight-dimensional dataset (two divisions per HSV color component; I assume this is the 2x2x2 dataset) with epsilon=0.01 and minPts=20.

Patwary, Satish, Sundaram, Manne, Habib, Dubey: Pardicle: parallel approximate density-based clustering

PDSDBSCAN

A subsampled version of the above Millennium Run dataset has also been used in the paper "A new scalable parallel DBSCAN algorithm using the disjoint-set data structure" by the same main author as Pardicle; that paper describes and evaluates PDSDBSCAN, and its authors also published a 50,000 10D point dataset used in that paper.

Götz, Bodenstein, Riedel: HPDBSCAN: highly parallel DBSCAN

The Bremen 3D point cloud and Twitter 2D GPS locations are available as full and subsampled (small) datasets: DOI: 10.23728/b2share.7f0c22ba9a5a44ca83cdf4fb304ce44e (Note: the original publication refers to the dataset via a handle.net handle which does not work anymore).

  • Twitter (dataset t): 16,602,137 2D points (eps=0.01, minPts=40). Note that this dataset contains some artefacts (most likely Twitter spam with bogus GPS coordinates).
  • Twitter small (dataset ts): 3,704,351 2D points (eps=0.01, minPts=40)
  • Bremen (dataset b): 81,398,810 3D points (eps=100, minPts=10000)
  • Bremen small (dataset bs): 2,543,712 2D points (eps=100, minPts=312)

Neukirchen: Elephant against Goliath: Performance of Big Data versus High-Performance Computing DBSCAN Clustering Implementations

The same Twitter small dataset as provided by Götz et al. has been used with the same parameters.

Towards Exascale Computing: European DEEP-EST research project

Helmut Neukirchen, 17. May 2019

The DEEP-EST ("Dynamical Exascale Entry Platform - Extreme Scale Technologies") project is funded as part of the European Commission's Horizon 2020 ambitious Future and Emerging Technologies (FET) programme in order to create the blueprints of the next generation ("pre-exascale") supercomputer hardware and software.
The current goal in supercomputing is to reach exascale performance: a quintillion (in the American short scale) or a trillion (in the European long scale), i.e. 10 to the power of 18, floating point arithmetic operations per second (FLOPS). This performance is needed to drive large-scale scientific simulations and big data analytics forward. Current supercomputers are able to achieve 0.2 exaFLOPS (or 200 petaFLOPS or 200 thousand teraFLOPS); for comparison: if you have a very high-end personal computer, its CPU can maybe compute half a teraFLOPS.
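To put these prefixes into perspective, here is a tiny back-of-the-envelope sketch in Python (the half-teraFLOPS PC figure is the rough estimate from above):

    # Metric prefixes for FLOPS.
    TERA = 10**12
    PETA = 10**15
    EXA = 10**18

    exascale_goal = 1 * EXA   # the exascale goal: 10^18 FLOPS
    current_top = 0.2 * EXA   # roughly what current top supercomputers achieve
    high_end_pc = 0.5 * TERA  # rough high-end PC CPU estimate from the text

    print(current_top / PETA)           # 200.0     -> 200 petaFLOPS
    print(current_top / TERA)           # 200000.0  -> 200 thousand teraFLOPS
    print(exascale_goal / high_end_pc)  # 2000000.0 -> ~2 million such PCs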

Exascale computing is some sort of "wall", i.e. it is hard to reach and, in particular, hard to go beyond anytime soon. While according to Moore's law the number of transistors in a CPU doubles every two years, the performance of a CPU does not double that fast anymore (the transistors go into more cores and more caches). Currently, the only way to boost performance is to use not generic CPUs but specialised "accelerators", e.g. graphics processors (GPUs), but also accelerators in other parts of a supercomputer, e.g. the network fabric that inter-connects the many CPU nodes of a supercomputer, or the storage. DEEP-EST therefore suggests a Modular Supercomputing Architecture (MSA) where the supercomputer is composed of multiple modules, each specialised for a particular domain: e.g. a GPU-heavy booster module for computations that scale well and are suitable for GPUs, a "normal" CPU cluster module for applications that do not scale that well, and a data analysis module with hardware specialised for machine learning.

Talking about accelerators: one of our project partners is CERN, and the project meeting took place there. We were lucky that the Large Hadron Collider (LHC) particle accelerator is currently in its maintenance/upgrade phase, so we were able to see one of the detectors (when it is running, the collisions create lots of radiation). -- Find the human in the picture below:

LHC detector

DEEP-EST has reached the middle of the project duration, and the first module, the CPU cluster module, has been installed. Since an additional barrier in exascale computing is energy, which also means heat created by the computers that needs to be cooled away, DEEP-EST is also working on novel cooling solutions, e.g. water cooling. While typical data centres use air cooling, i.e. extra energy is needed to cool down air that is then blown into the racks, the DEEP-EST water cooling makes it possible to use water at normal temperatures and pipe it through those components that create most of the heat. This warms up the water, and the energy contained in this warm water can then even be used for something else. I.e. instead of needing extra energy for cooling, the DEEP-EST warm-water cooling even allows energy to be recovered (of course, this is energy inserted into the system by the electrical power that the supercomputing components consume). You can see the water pipes of the newly installed CPU cluster module in the middle rack below:

Rack with water cooling

Talking about energy efficiency: another trend is field-programmable gate arrays (FPGAs), which are more energy efficient than CPUs or GPUs. These are used in one of the specialised DEEP-EST modules as well.

The downside of using accelerators is that they need special programming. As a DEEP-EST member, the University of Iceland is developing machine learning software that exploits the DEEP-EST Modular Supercomputing Architecture (MSA) as well as possible. This includes clustering (DBSCAN) as well as classification via Support Vector Machines (SVMs) and Deep Learning/Deep Neural Networks.

You can follow the progress of this project on the DEEP-EST web site and Twitter channel.

Scientists for Future / Fridays for Future / Protests for more climate protection

Helmut Neukirchen, 16. March 2019

Climate change is real and will affect us all. So it is good that the Fridays for Future protests have reached Iceland. Scientists in German-speaking countries have issued a statement that these concerns are justified and supported by the best available science: the current measures for climate, biodiversity, forest, marine, and soil protection are far from sufficient.

I am participating in the eSTICC (eScience Tools for Investigating Climate Change at High Northern Latitudes) NordForsk-funded research project. As part of the project, an impressive (or depressing) simulation of the Greenland ice sheet and climate change has been created (the simulations ran on a supercomputer located in Iceland). It shows the surface air temperature in the Arctic and the Greenland glacier ice thickness, e.g. when the Arctic sea ice will be gone during summer (something we are already getting used to) and during winter (i.e. no ice at the North Pole in winter -- imagine that), according to the simulations:

We all should act: