Keynote 1:

Knowledge Graph for Drug Discovery

Ying Ding
University of Texas at Austin, USA

Abstract:

A critical barrier in current drug discovery is the inability to utilize public datasets in an integrated fashion to fully understand the actions of drugs and chemical compounds on biological systems. There is a need to intelligently integrate the heterogeneous datasets now available on compounds, drugs, targets, genes, diseases, and drug side effects, so that effective network data mining algorithms can extract important biological relationships. In this talk, we demonstrate the semantic integration of 25 different databases and showcase cutting-edge machine learning and deep learning algorithms that mine knowledge graphs for deep insights, especially the latest graph embedding algorithm, which outperforms baseline methods for drug-protein binding prediction.
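
As a rough illustration of the kind of link prediction that knowledge-graph embeddings support, the following minimal sketch scores candidate drug-target triples with a generic TransE-style translational embedding. The entity names, random vectors, and dimensionality are hypothetical placeholders; this is not the specific algorithm presented in the talk, which would be trained on the integrated knowledge graph rather than use random vectors.

import numpy as np

rng = np.random.default_rng(0)
DIM = 64

# Hypothetical toy vocabulary of entities (drugs, protein targets) and relations.
entities = ["aspirin", "ibuprofen", "PTGS1", "PTGS2"]
relations = ["binds_to"]
ent_vec = {e: rng.normal(size=DIM) for e in entities}
rel_vec = {r: rng.normal(size=DIM) for r in relations}

def score(head, relation, tail):
    # Lower distance = more plausible triple under the translational assumption h + r ~ t.
    return -np.linalg.norm(ent_vec[head] + rel_vec[relation] - ent_vec[tail])

# Rank candidate protein targets for a drug by predicted binding plausibility.
candidates = ["PTGS1", "PTGS2"]
print(sorted(candidates, key=lambda t: score("aspirin", "binds_to", t), reverse=True))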

Bio:

Dr. Ying Ding is the Bill & Lewis Suit Professor at the School of Information, University of Texas at Austin. Before that, she was a professor and director of graduate studies for the data science program at the School of Informatics, Computing, and Engineering at Indiana University, where she led the effort to develop the university's online data science graduate program. She also worked as a senior researcher at the Department of Computer Science, University of Innsbruck (Austria), and at the Free University of Amsterdam (the Netherlands). She has been involved in various NIH, NSF, and European Union funded projects. She has published 240+ papers in journals, conferences, and workshops, and has served as a program committee member for 200+ international conferences. She is co-editor of the book series Semantic Web Synthesis published by Morgan & Claypool, co-editor-in-chief of Data Intelligence, published by MIT Press and the Chinese Academy of Sciences, and serves on the editorial boards of several top journals in Information Science and the Semantic Web. She is a co-founder of Data2Discovery, a company advancing cutting-edge AI technologies in drug discovery and healthcare. Her current research interests include data-driven science of science, AI in healthcare, the Semantic Web, knowledge graphs, data science, scholarly communication, and the application of Web technologies.

 

Keynote 2:

Incremental Learning and Learning with Drift

Barbara Hammer
CITEC Centre of Excellence, Bielefeld University, Germany

Abstract:

Neural networks have revolutionised domains such as computer vision and language processing, and learning technology is included in everyday consumer products. Yet practical problems often render learning surprisingly difficult, since some of the fundamental assumptions behind the success of deep learning are violated. As an example, only little data might be available for tasks such as model personalization, hence few-shot learning is required. Learning might take place in non-stationary environments, such that models face the stability-plasticity dilemma. In such cases, practitioners might be tempted to use models in settings they are not intended for, such that invalid results are unavoidable. In this talk, I will address three challenges of machine learning when dealing with incremental learning tasks: how to learn reliably given only few examples, how to learn incrementally in non-stationary environments where drift might occur, and how to enhance machine learning models with an explicit reject option, such that they can abstain from classification if the decision is unclear.
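
As a simple illustration of the reject option mentioned above, the sketch below wraps a standard classifier and abstains whenever the top class probability falls below a threshold. The data, model, and threshold are arbitrary placeholders, not the methods developed in the talk.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical two-class toy data; in practice the model would be the incremental learner itself.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = LogisticRegression().fit(X, y)

def predict_with_reject(model, X_new, threshold=0.8):
    # Return class labels, or -1 ("reject") when the top class probability is below the threshold.
    proba = model.predict_proba(X_new)
    labels = proba.argmax(axis=1)
    labels[proba.max(axis=1) < threshold] = -1
    return labels

# A point far from the decision boundary is classified; a point near it is rejected.
print(predict_with_reject(clf, np.array([[2.0, 2.0], [0.05, -0.05]])))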

Bio:

Professor Barbara Hammer received her Ph.D. in Computer Science in 1999 and her venia legendi in Computer Science in 2003, both from the University of Osnabrueck. Since 2010, she has been Professor of Machine Learning at the Faculty of Technology at Bielefeld University. Before that she was Professor of Theoretical Computer Science at Clausthal University of Technology from 2004 to 2010, and junior research group leader for 'Learning with Neural Methods on Structured Data' at the University of Osnabrueck from 2000 to 2004. Her areas of expertise include hybrid systems, self-organizing maps, clustering, and recurrent networks, as well as applications in bioinformatics, industrial process monitoring, and cognitive science. Several research stays have taken her to Italy, the Netherlands, the U.K., France, the U.S., and India. Prof. Hammer is a member of IEEE CIS and GI. She has chaired the IEEE CIS Technical Committees on Data Mining and Neural Networks as well as the Distinguished Lecturers Committee, and she is currently an elected member of the IEEE CIS Administrative Committee (AdCom).

 

Keynote 3:

From Sensors to Dempster-Shafer Theory and Back: the Axiom of Ambiguous Sensor Correctness and its Applications

Dirk Draheim
Tallinn University of Technology, Estonia

Abstract:

Since its introduction in the 1960s, Dempster-Shafer theory has become one of the leading strands of research in artificial intelligence, with a wide range of applications in business, finance, engineering, and medical diagnosis. In this talk, we aim to grasp the essence of Dempster-Shafer theory by distinguishing between ambiguous-and-questionable and ambiguous-but-correct perceptions. Throughout the talk, we illustrate our analysis in terms of signals and sensors as a natural field of application. We model ambiguous-and-questionable perceptions as a probability space with a quantity random variable and an additional perception random variable (the Dempster model). We introduce a correctness property for perceptions and use this property as an axiom for ambiguous-but-correct perceptions. In this axiomatization, Dempster's lower and upper probabilities do not have to be postulated: they are consequences of the perception correctness property. Moreover, we outline how Dempster's lower and upper probabilities can be understood as best possible estimates of quantity probabilities. Finally, we define a natural knowledge fusion operator for perceptions and compare it with Dempster's rule of combination.
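
For readers unfamiliar with the terminology, the toy sketch below computes Dempster's lower and upper probabilities (belief and plausibility) from a mass function over a small sensor frame. The frame and mass values are invented for illustration and do not reproduce the axiomatization developed in the talk.

# Hypothetical mass assignment from an ambiguous sensor over a toy frame of discernment.
frame = frozenset({"low", "medium", "high"})
mass = {
    frozenset({"low"}): 0.4,
    frozenset({"medium", "high"}): 0.5,  # an ambiguous reading covering two levels
    frame: 0.1,                          # total ignorance
}

def belief(A):
    # Lower probability: total mass committed to subsets of A.
    return sum(m for B, m in mass.items() if B <= A)

def plausibility(A):
    # Upper probability: total mass of focal sets compatible with A.
    return sum(m for B, m in mass.items() if B & A)

A = frozenset({"medium", "high"})
print(belief(A), plausibility(A))  # brackets the unknown quantity probability: 0.5 <= P(A) <= 0.6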

Bio:

Dirk Draheim is full professor of information systems and head of the Information Systems Group at Tallinn University of Technology (TTÜ). The TTÜ Information Systems Group is deeply involved in the design and implementation of the Estonian e-Governance ecosystem. Dirk holds a Diploma in computer science from Technische Universität Berlin, a PhD from Freie Universität Berlin, and a habilitation from the University of Mannheim. Until 2006 he worked as a researcher at Freie Universität Berlin. From 2006 to 2008 he was area manager for database systems at the Software Competence Center Hagenberg, Linz, as well as an adjunct lecturer in information systems at Johannes Kepler University Linz. From 2008 to 2016 he was head of the data center of the University of Innsbruck and, in parallel, an adjunct reader at the Faculty of Information Systems of the University of Mannheim. Dirk is co-author of the Springer book "Form-Oriented Analysis" and author of the Springer books "Business Process Technology", "Semantics of the Probabilistic Typed Lambda Calculus", and "Generalized Jeffrey Conditionalization". His research interest is the design and implementation of large-scale information systems. Dirk Draheim is a member of the ACM.

 

Keynote 4:

Knowledge Availability and Information Literacies

Gerald Weber
University of Auckland, New Zealand

Abstract:

At least since Tim Berners-Lee's call for 'Raw Data Now' in 2009, which he combined with a push for linked data, the question has been raised of how to make the wealth of data and knowledge available to the citizens of the world. We will set out to explore the many facets and multiple layers of this problem, leading up to the question of how we as users will access and utilize the knowledge that should be available to us.

Bio:

Gerald Weber joined the University of Auckland in 2003 and is a Senior Lecturer in Computer Science and Software Engineering. Gerald is interested in the interaction of humans, computers, and data, and with his team has received Best Paper awards at CHI 2019 and DocEng 2018. Gerald has chaired leading conferences in the fields of databases and human-computer interaction. His work has also been featured in popular science media, including BBC Click.

 

Keynote 5:

Explainable Fact Checking for Statistical and Property Claims

Paolo Papotti
EURECOM, France

Abstract:

Misinformation is an important problem, but fact checkers are overwhelmed by the amount of false content that is produced online every day. To support fact checkers in their efforts, we are creating data-driven verification methods that use structured datasets to assess claims and explain their decisions. For statistical claims, we translate text claims into SQL queries on relational databases. We exploit text classifiers to propose validation queries to the users and rely on tentative execution of query candidates to narrow down the set of alternatives. The verification process is controlled by a cost-based optimizer that considers the expected verification overhead and the expected utility of the claim as a training sample. For property claims, we use the rich semantics in knowledge graphs (KGs) to verify claims and produce explanations. As information in a KG is inevitably incomplete, we rely on rule discovery and on text mining to gather the evidence needed to assess claims. Uncertain rules and facts are turned into logical programs, and the checking task is modeled as a probabilistic inference problem. Experiments show that both methods enable the efficient and effective labeling of claims with interpretable explanations, both in simulations and in real-world user studies, with a 50% decrease in verification time. Our algorithms are demonstrated in a fact-checking website (https://coronacheck.eurecom.fr), which has been used by more than twelve thousand users to verify claims related to the spread and effects of the coronavirus disease (COVID-19).
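
As a minimal sketch of the statistical-claim setting, the example below checks a hand-written claim against a toy relational table with a single SQL query. The table, figures, and query are invented placeholders; the actual system learns to propose and rank such validation queries automatically.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cases (country TEXT, year INTEGER, new_cases INTEGER)")
conn.executemany("INSERT INTO cases VALUES (?, ?, ?)",
                 [("France", 2020, 2600000), ("Italy", 2020, 2100000)])  # placeholder numbers

# Hypothetical claim: "France recorded more new cases than Italy in 2020."
query = """
SELECT (SELECT new_cases FROM cases WHERE country = 'France' AND year = 2020)
     > (SELECT new_cases FROM cases WHERE country = 'Italy'  AND year = 2020)
"""
verdict = conn.execute(query).fetchone()[0]
print("claim supported" if verdict else "claim refuted")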

Bio:

Paolo Papotti is an Associate Professor at EURECOM, France, since 2017. He received his PhD from Roma Tre University (Italy) in 2007 and has held research positions at the Qatar Computing Research Institute (Qatar) and Arizona State University (USA). His research is focused on data integration and information quality. He has authored more than 100 publications, and his work has been recognized with two “Best of the Conference” citations (SIGMOD 2009, VLDB 2016), a best demo award at SIGMOD 2015, and two Google Faculty Research Awards (2016, 2020). He is an associate editor of PVLDB and the ACM Journal of Data and Information Quality (JDIQ).