Is Supercomputing dead in the age of Big Data processing?

Helmut Neukirchen, 9 November 2016

In the age of Big Data and big data frameworks such as Apache Spark, one might be tempted to think that supercomputing/high-performance computing (HPC) is obsolete. But in fact, Big Data processing and HPC are different, and one platform cannot replace the other. I outlined this in a presentation at the Science Day of the University of Iceland's School of Engineering and Natural Sciences on Saturday, 29 October 2016. (Note that there is nowadays some convergence, and a graph-processing benchmark top 500 list exists to cover the less CPU-intensive workloads in HPC.)

Furthermore, the available open-source HPC implementations of algorithms (e.g. clustering using DBSCAN) are currently much faster than their Big Data counterparts, and the available Big Data implementations in fact do not even scale beyond a handful of nodes. Results of a case study performed during my guest research stay with the High Productivity Data Processing research group of the Federated Systems and Data division at the Jülich Supercomputing Centre (JSC) are published in this Technical Report.
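To give an impression of what such a clustering does, here is a minimal sketch of DBSCAN using the single-node scikit-learn implementation (neither the C++ HPDBSCAN nor any of the Spark implementations from the case study; the dataset and the parameter values eps and min_samples are made up purely for illustration):

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs

# Made-up toy data: three Gaussian blobs (values chosen for illustration only).
X, _ = make_blobs(n_samples=1000, centers=3, cluster_std=0.5, random_state=0)

# eps is the neighbourhood radius, min_samples the density threshold;
# unlike k-means, DBSCAN needs no cluster count up front and marks noise as -1.
labels = DBSCAN(eps=0.3, min_samples=10).fit_predict(X)

n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print(f"Found {n_clusters} clusters and {np.sum(labels == -1)} noise points")
```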

One of the reviewers of the 1st IEEE International Workshop on Big Spatial Data (BSD 2016) seems not to have liked the message that Big Data needs to do its homework to match HPC, hence my paper was rejected. While I assume that an HPC conference (such as ISC) might accept it, it would be nicer to get the message across to the Big Data community: I might submit it to the 6th International Workshop on Parallel and Distributed Computing for Large Scale Machine Learning and Big Data Analytics, or later to BDCloud 2017: The 7th IEEE International Conference on Big Data and Cloud Computing. Implementations whose source code is not public may also be worth considering, such as "A novel scalable DBSCAN algorithm with Spark" or "A Parallel DBSCAN Algorithm Based on Spark" (provided we get access to the implementations; the lacking possibility of reproducing/verifying scientific results is another story, covered in my Technical Report). Also, I might add a discussion of threats to validity (such as construct, internal, and external validity [Carver, J., VanVoorhis, J., Basili, V., August 2004. Understanding the impact of assumptions on experimental validity.]).

Update from 9 November 2016: Erich Schubert (thanks!) pointed me to the related article "The (black) art of runtime evaluation: Are we comparing algorithms or implementations?" (DOI: 10.1007/s10115-016-1004-2), which supports my findings. A statement from that article on k-means: "Judging from the measured runtime and even assuming zero network overhead, we must assume that a C++ implementation using all cores of a single modern PC will outperform a 100-node cluster easily." For DBSCAN, they show that on the dataset they used, a C++ implementation is one order of magnitude faster than the Java-based ELKI (which confirms my measurements comparing the C++ HPDBSCAN with the Java-based ELKI). They also support my claim that the implementation matters: "Good implementations with index accelerations would process this data set in less than 2 seconds, whereas the fastest linear scan implementation took over 90 seconds, and naïve implementations would frequently require over 100 times the best runtime. But not every index we evaluated was implemented correctly, and sometimes an index was even slower than the linear scan. Between different implementations using linear scan, we can observe runtime differences of more than two orders of magnitude. Comparing the fastest implementation measured (optimized C++ code for this task) to the slowest implementation (outdated versions of Weka), we observe four orders of magnitude: less than two seconds instead of over four hours."
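Their point about index acceleration versus linear scan can be reproduced in miniature even within a single library. Below is a sketch (again assuming scikit-learn; the dataset size, eps, and min_samples are made up) that runs the same DBSCAN algorithm twice, once with a k-d tree index and once with a brute-force linear scan of all pairwise distances. The exact speed-up depends on the data and parameters, but the index-accelerated run is typically much faster.

```python
import time
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
X = rng.random((20000, 3))  # made-up low-dimensional toy data

# Same algorithm, two neighbourhood-search implementations:
# a k-d tree index versus a brute-force linear scan.
for algorithm in ("kd_tree", "brute"):
    start = time.perf_counter()
    DBSCAN(eps=0.05, min_samples=5, algorithm=algorithm).fit(X)
    print(f"{algorithm:8s}: {time.perf_counter() - start:.2f} s")
```

On low-dimensional data like this, the k-d tree turns each neighbourhood query from a linear scan into roughly logarithmic work, which is exactly the kind of implementation detail that dominates such runtime comparisons.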