Past, Present, and Future in and of Software Visualization
Stephan Diehl, University of Trier, Germany
Visual Content Fingerprinting and Search - An MPEG Perspective
Miroslaw Z. Bober, University of Surrey, United Kingdom
On Interaction in Data Mining
Andreas Holzinger, Medical University Graz, Austria
Ubiquitous, Dynamic Inclusion and Fusion of Tracking Data from Various Sources for Mobile AR Applications in "AR-ready Environments"
Gudrun Klinker, Independent Researcher, Germany
Past, Present, and Future in and of Software Visualization
Stephan Diehl
University of Trier
Germany
Brief Bio
Stephan Diehl is a full professor at the University of Trier, Germany. His main research areas are software engineering and information visualization, and in particular the intersection of both areas, namely software visualization. A considerable part of his visualization research can be briefly characterized as applying (visual) data mining techniques to analyze software evolution and developing new ways to visualize the change of structured information over time. Stephan Diehl is the author of the book "Software Visualization -- Visualizing the Structure, Behavior and Evolution of Software" (Springer). He is a member of ACM, IEEE, and GI, and of the steering committees of the IEEE working conference series on Mining Software Repositories (MSR) and on Software Visualization (VISSOFT). Since 2007 he has been a member of the scientific directorate of Schloss Dagstuhl -- Leibniz-Center for Informatics.
Abstract
Starting with a selective retrospective of the history of software visualization, I will identify various trends and paradigms of previous and current research. In particular, I will discuss examples of applying visualization techniques to analyze the past and present state of software as well as to predict its future development. I will argue that prediction is an important task, but that software visualization research has only scratched the surface of it. As a consequence, speculative visualization will be one of the major future challenges identified in this talk.
Visual Content Fingerprinting and Search - An MPEG Perspective
Miroslaw Z. Bober
University of Surrey
United Kingdom
Brief Bio
Miroslaw Bober is Professor of Video Processing in the Centre for Vision, Speech and Signal Processing (CVSSP) at the University of Surrey, UK. From 1997 to 2011, he was the General Manager of Mitsubishi Electric R&D Center Europe (MERCE-UK) and the Head of Research for its Visual and Sensing Division. Miroslaw received his M.Sc. degree in Electrical Engineering from the AGH University of Science and Technology, Poland in 1990. Subsequently he received an M.Sc. with distinction in Signal Processing and Artificial Intelligence (1991) and a Ph.D. in 1995, both from the University of Surrey. Miroslaw has been actively involved in the development of visual analysis tools in MPEG, chairing the work of the MPEG-7 Visual group and, more recently, the work on Compact Descriptors for Visual Search. He developed shape description and image and video signature technologies which are now part of the ISO standards. Miroslaw is an inventor of over 70 US patents, and several of his inventions are deployed in consumer and professional products. His publication record includes over 60 refereed publications as well as three books and book chapters. His research interests include image and video processing and analysis, computer vision and machine learning.
Abstract
Our society is creating, storing and using ever-increasing volumes of digital multimedia content. We have access to hundreds of billions of images and videos on the Internet, in the archives of professional content creators and owners, and in the personal libraries of home users. In this context, content-based identification of images and videos and visual search capabilities are essential enablers for a range of applications that require fast, robust and efficient algorithms. The interoperability of the technologies and systems is also an important consideration.
During the lecture Miroslaw Bober will review the latest advances in content fingerprinting and visual search and how they impact related standardisation efforts within the MPEG group. In particular, he will present the MPEG Visual Signatures, which include the Image Signature and Video Signature content description tools. The Visual Signatures are designed specifically to enable fast and robust identification of near-duplicate or derived (modified) visual content in large-scale databases. He will also look at the latest addition to the MPEG family of standards: Compact Descriptors for Visual Search (CDVS). Achieving state-of-the-art identification and recognition performance, these technologies have numerous applications, including rights management and monetization, distribution management, usage monitoring, and professional or personal database management.
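As a minimal illustration of the fingerprinting idea behind such tools, the sketch below computes a toy binary image signature using the well-known average-hash technique: a block-averaged thumbnail is thresholded against its mean, and signatures are compared by Hamming distance. This is an assumption-laden simplification for intuition only; the MPEG Image Signature standardizes a far more robust descriptor.

```python
import numpy as np

# Toy "image signature": threshold a block-averaged thumbnail against its
# mean to get a 64-bit binary code; near-duplicates then have small Hamming
# distance. This is the classic average-hash idea, shown for illustration,
# NOT the actual MPEG Image Signature descriptor.
def average_hash(img, size=8):
    h, w = img.shape
    img = img[:h - h % size, :w - w % size]           # crop to a multiple of size
    blocks = img.reshape(size, img.shape[0] // size,
                         size, img.shape[1] // size)
    thumb = blocks.mean(axis=(1, 3))                  # size x size thumbnail
    return (thumb > thumb.mean()).flatten()           # binary fingerprint

def hamming(sig_a, sig_b):
    # number of differing bits between two fingerprints
    return int(np.count_nonzero(sig_a != sig_b))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
bright = img + 0.1   # a "derived" copy: global brightness shift
# the shift moves the thumbnail and its mean equally, so the fingerprint
# is unchanged and the Hamming distance to the original is 0
```

A brightness-shifted copy hashes identically here, while an unrelated image differs in roughly half its bits, which is the property that makes such codes usable for near-duplicate detection at scale.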
On Interaction in Data Mining
Andreas Holzinger
Medical University Graz
Austria
https://www.aholzinger.at/
Brief Bio
Andreas Holzinger is head of the Holzinger Group (Human-Centered AI) at the Medical University Graz and Visiting Professor for Explainable AI at the Alberta Machine Intelligence Institute in Edmonton, Canada. Since 2016 he has been Visiting Professor for Machine Learning in Health Informatics at Vienna University of Technology. Andreas was Visiting Professor for Machine Learning and Knowledge Extraction in Verona, at RWTH Aachen, University College London and Middlesex University London. He serves as a consultant for the Canadian, US, UK, Swiss, French, Italian and Dutch governments, for the German Excellence Initiative, and as a national expert for the European Commission. Andreas obtained a Ph.D. in Cognitive Science from Graz University in 1998 and a second Ph.D. (Habilitation) in Computer Science from TU Graz in 2003. Andreas Holzinger works on Human-Centered AI (HCAI), motivated by efforts to improve human health. Andreas pioneered interactive machine learning with the human-in-the-loop. For his achievements, he was elected a member of Academia Europaea in 2019. Andreas is paving the way towards multimodal causability, promoting robust interpretable machine learning, and advocating a synergistic approach that puts the human in control of AI and aligns AI with human values, privacy, security, and safety.
Abstract
One of the grand challenges in our networked world is the abundance of large, weakly structured and unstructured data sets. This is most evident in Biomedicine (Medical Informatics + Bioinformatics): the trend towards personalized medicine results in increasingly large amounts of (-omics) data. In the life sciences domain, most data models are characterized by complexity, which makes manual analysis very time-consuming and often practically impossible. To deal with such data, solutions from the machine learning community are indispensable, and it is marvelous what sophisticated algorithms can do within high-dimensional spaces. We want to enable a domain-expert end user to interactively deal with these algorithms and data, so as to enable novel discoveries and previously unknown insights. Our quest is to make such approaches interactive, hence to enable a computational non-expert to gain insight into the data and to find a starting point: "What is interesting?". When mapping results back from arbitrarily high-dimensional spaces R^n into R^2, there is always the danger of modeling artifacts, which may be interpreted wrongly. A synergistic combination of the methodologies and approaches of two areas offers ideal conditions for working on solutions to such problems: Human-Computer Interaction (HCI) and Knowledge Discovery/Data Mining (KDD), with the goal of supporting human intelligence with machine intelligence. Both fields have many unexplored, complementary intersections, and the aim is to combine the strengths of automatic, computer-based methods, both in time and space, with the strengths of human perception and cognition, e.g. in discovering patterns, trends, similarities, anomalies etc. in data.
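The danger of artifacts when projecting from R^n into R^2 can be shown with a deliberately simple toy example (not from the talk): two points that are far apart in R^10 can land on top of each other in a naive 2D projection, exactly the kind of spurious "similarity" a viewer of a 2D plot could misread.

```python
import numpy as np

# Toy illustration of a projection artifact: a and b differ only in
# dimensions 3..10, so a projection onto the first two coordinates
# collapses them onto the same point.
a = np.zeros(10)
b = np.concatenate([np.zeros(2), np.full(8, 5.0)])

d_high = np.linalg.norm(a - b)          # true distance in R^10: sqrt(200)
d_low = np.linalg.norm(a[:2] - b[:2])   # distance in the 2D projection: 0.0
```

Real embeddings such as PCA or t-SNE are of course smarter than coordinate dropping, but the underlying point stands: any map from R^n to R^2 must distort some distances, so apparent clusters in the plane need verification in the original space.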
Ubiquitous, Dynamic Inclusion and Fusion of Tracking Data from Various Sources for Mobile AR Applications in "AR-ready Environments"
Gudrun Klinker
Independent Researcher
Germany
Brief Bio
Prof. Gudrun Klinker, Ph.D. studied computer science (informatics) at the Friedrich-Alexander Universität Erlangen, Universität Hamburg (Diplom) and Carnegie-Mellon University (Ph.D.) in Pittsburgh, PA, USA, focusing on a physical approach to color computer vision. In 1989, she joined the Cambridge Research Laboratory of Digital Equipment Corporation in Boston, MA, working in the visualization group on the development of a reusable tele-collaborative data exploration environment to analyze and visualize 3D and higher-dimensional data in medical and industrial applications. Since 1995, she has been researching various aspects of the newly emerging concept of Augmented Reality, first at the European Computer-industry Research Center, then at the Fraunhofer Institute for Computer Graphics, and since 2000 at the Technical University of Munich. Here, her research focus lies on developing approaches to ubiquitous augmented reality that lend themselves to realistic industrial applications. Prof. Klinker is one of the co-founders of the International Symposium on Mixed and Augmented Reality (ISMAR). She has served on numerous program committees such as VR, VRST, 3DUI, and UIST. She is author or co-author of more than 100 reviewed scientific publications.
Abstract
With augmented reality becoming increasingly mobile, the need for ubiquitously available and flexibly configurable tracking services is becoming ever more urgent. At TU Munich we envision AR-ready environments that offer stationary tracking resources, to be fused on mobile devices with their own built-in tracking facilities, depending on availability and application requirements. The talk will describe the underlying principles and capabilities of the Ubitrack system being developed in Munich.
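To give a flavor of what fusing stationary and mobile tracking data can mean, the sketch below combines two noisy position estimates by inverse-variance weighting, a standard static sensor-fusion rule. This is a hypothetical simplification for intuition; Ubitrack's actual data-flow framework handles full 6-DOF poses, time, and dynamic sensor configurations.

```python
import numpy as np

# Hypothetical single fusion step: combine a stationary tracker's position
# estimate with a mobile device's own estimate, weighting each by the
# inverse of its noise variance. Lower variance => higher weight.
def fuse(pos_a, var_a, pos_b, var_b):
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    pos = (w_a * pos_a + w_b * pos_b) / (w_a + w_b)
    var = 1.0 / (w_a + w_b)   # fused estimate is more certain than either input
    return pos, var

stationary = np.array([1.0, 2.0, 0.5])   # e.g. ceiling-mounted camera, low noise
mobile = np.array([1.2, 2.1, 0.4])       # e.g. built-in inertial estimate, noisier
pos, var = fuse(stationary, 0.01, mobile, 0.04)
# fused position lies closer to the lower-variance (stationary) estimate
```

The same rule applied recursively over time is the core of Kalman-style filtering, which is the usual machinery behind fusing heterogeneous tracking sources.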