Prof. Jens Allmer
Department of Molecular Biology and Genetics, Izmir Institute of Technology, Urla, Izmir, Turkey
Bio: Jens Allmer is Associate Professor of Bioinformatics in the Department of Molecular Biology and Genetics at the Izmir Institute of Technology, a faculty member of the Biotechnology and Bioengineering doctoral program, and the CEO of Bionia Incorporated, a university spin-off founded in 2014. In 2010 he was selected as an outstanding young scientist in bioinformatics by the Turkish Academy of Sciences. Jens Allmer received his PhD in Biology with great honor from the University of Münster, where he also obtained his MSc. His main research interests are bioinformatics and computational biology, with a special focus on the integration of multiple omics data.
Title: Database Integration Facilitating the Merging of MicroRNA and Gene Regulatory Pathways in ALS (by Hamid Hamzeiy, Rabia Suluyayla, Christoph Brinkrolf, Ralf Hofestaedt, and Jens Allmer)
Abstract: Thousands of databases exist in the domain of bioinformatics. Many serve a particular purpose, like the Online Mendelian Inheritance in Man (OMIM) database, which hosts human genetic disorders. A well-known resource is GenBank (NCBI), an online repository containing all gene sequences known to date. This large resource is synchronized with the DNA Data Bank of Japan and the European Bioinformatics Institute. While ‘all known genes’ may sound authoritative, its size is dwarfed by the Sequence Read Archive, which contains petabytes of sequencing data. Some databases, like the aforementioned ones, are less structured, while others are well structured around a central topic, for example the Kyoto Encyclopedia of Genes and Genomes (KEGG), which represents metabolic and regulatory pathways. Due to the large number of federated bioinformatics databases, data warehouses like BioDWH have become important for data integration. Complex human diseases like Amyotrophic Lateral Sclerosis (ALS), famous because of Stephen Hawking, highlight the need for data integration to unravel the underlying molecular disease mechanisms. Here we used a number of databases, including three microRNA resources and KEGG, to investigate gene regulation in ALS. Mere data integration, however, is not sufficient to allow proper analysis, and visualization is another key aspect influencing the practical success of data integration. We therefore use VANESA in combination with the BioDWH data warehouse to integrate gene and microRNA regulatory pathways and to elucidate the disease phenotype. We established an extensible data warehouse with visual querying facilities and suggest new important genes which need scrutiny with respect to ALS.
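The integration idea described in the abstract can be sketched in miniature: relations from a microRNA-target resource and a KEGG-style pathway resource are merged into one graph that can then be queried across both relation types. The identifiers, relations, and function names below are illustrative placeholders, not the actual BioDWH/VANESA data model.

```python
# Minimal sketch of cross-database integration: merge microRNA->gene
# and gene->pathway relations into one adjacency map. All data here
# is a toy placeholder, not real miRNA or KEGG content.

mirna_targets = [            # e.g. from a microRNA resource
    ("miR-X", "GENE_A"),
    ("miR-X", "GENE_B"),
]
pathway_genes = [            # e.g. from a KEGG-style pathway resource
    ("GENE_A", "Pathway_1"),
    ("GENE_C", "Pathway_1"),
]

def integrate(mirna_targets, pathway_genes):
    """Build one adjacency map spanning both relation types."""
    graph = {}
    for src, dst in mirna_targets + pathway_genes:
        graph.setdefault(src, set()).add(dst)
    return graph

def pathways_regulated_by(graph, mirna):
    """Follow miRNA -> gene -> pathway edges (the 'visual query' idea)."""
    return {p for g in graph.get(mirna, ())
              for p in graph.get(g, ())}

g = integrate(mirna_targets, pathway_genes)
print(pathways_regulated_by(g, "miR-X"))  # {'Pathway_1'}
```

Only once the two sources share one graph does a query like "which pathways does this microRNA touch?" become a simple traversal, which is the point of warehouse-style integration.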
Prof. Dr. Dirk Labudde
Bioinformatics group Mittweida (bigM) and Forensic Science Investigation Lab (FoSIL), University of Applied Sciences, Mittweida, Germany
Bio: Dirk Labudde has been a professor at the University of Applied Sciences Mittweida, Germany, since September 1, 2009. He received his diploma in 1993 and obtained his Ph.D. in theoretical physics in 1997, both at Rostock University, while also studying medical physics at Kaiserslautern University. He later worked as a lecturer and research assistant at the Medical School and Clinical Center for Neurosurgery in Neubrandenburg, the Leibniz Institute for Molecular Pharmacology in Berlin, the Technical University of Munich, and the Technical University of Dresden before accepting a professorship for bioinformatics and forensics at Mittweida.
His main areas of research are algorithms and computational methods in (digital) forensics and structural bioinformatics.
Title: 3D Crime Scene and Disaster Site Reconstruction using Open Source Software
Abstract: Recent developments have given rise to a plethora of software and hardware toolkits for the three-dimensional reconstruction of objects, locations, places and larger areas. Reconstruction algorithms have become so efficient that such modeling tasks can be conducted on a single desktop machine in reasonable time. Open-source software realizations further provide attractive, cost-efficient solutions.
In modern forensics, the 3D reconstruction of crime scenes has become popular over the last decade. By integrating temporal data of actions and movements, spatiotemporal models of a given crime can be reconstructed. Such three-dimensional path-time diagrams can be of great use in ongoing investigations and for archiving as well as future review. Additionally, reconstruction procedures can be applied to disaster sites (such as train or plane crash sites, reactor accidents, or areas severely harmed by floods, landslides or earthquakes), where the resulting 3D models can aid in efficient task-force planning.
This talk will address the current state of 3D reconstruction processes and the spatiotemporal modeling of criminal events by means of open source software. Furthermore, present problems in acquiring the underlying data, as well as its adequate storage, are discussed. Requirements for future standards of data quality and processing are elucidated and illustrated on exemplary models obtained from photogrammetric reconstructions.
Prof. Witold Pedrycz
Department of Electrical and Computer Engineering, University of Alberta, Edmonton, Canada
Bio: Witold Pedrycz is a Professor and Canada Research Chair (CRC) in Computational Intelligence in the Department of Electrical and Computer Engineering, University of Alberta, Edmonton, Canada. He is also with the Systems Research Institute of the Polish Academy of Sciences, Warsaw, Poland, and holds an appointment as special professor in the School of Computer Science, University of Nottingham, UK. In 2009 Dr. Pedrycz was elected a foreign member of the Polish Academy of Sciences. In 2012 he was elected a Fellow of the Royal Society of Canada. Witold Pedrycz has been a member of numerous program committees of IEEE conferences in the area of fuzzy sets and neurocomputing. In 2007 he received the prestigious Norbert Wiener Award from the IEEE Systems, Man, and Cybernetics Council. He is a recipient of the IEEE Canada Computer Engineering Medal 2008. In 2009 he received a Cajastur Prize for Soft Computing from the European Centre for Soft Computing for "pioneering and multifaceted contributions to Granular Computing". In 2013 he was awarded a Killam Prize, and in the same year he received the Fuzzy Pioneer Award 2013 from the IEEE Computational Intelligence Society.
His main research directions involve Computational Intelligence, fuzzy modeling and Granular Computing, knowledge discovery and data mining, fuzzy control, pattern recognition, knowledge-based neural networks, relational computing, and Software Engineering. He has published numerous papers in this area. He is also an author of 15 research monographs covering various aspects of Computational Intelligence, data mining, and Software Engineering.
Dr. Pedrycz is intensively involved in editorial activities. He is an Editor-in-Chief of Information Sciences and Editor-in-Chief of WIREs Data Mining and Knowledge Discovery (Wiley). He currently serves as an Associate Editor of IEEE Transactions on Fuzzy Systems and is a member of a number of editorial boards of other international journals.
Title: Data Analytics: Selected Insights into Data Quality, Associations, and Information Granules
Abstract: Data are the lifeblood of today’s society. The diversity of data is enormous. The quality of data, including their comprehensive and multifaceted characterization, is of paramount importance and central to further data analysis and processing.
In this presentation, we cover a suite of selected insights into data quality and elaborate on their quantification. Two central issues are discussed: coping with incomplete data (invoking data imputation) and coping with imbalanced data.
In addressing these issues and delivering algorithmically sound solutions, we advocate the central role information granularity plays in dealing with the two problems stated above, yielding results quantified in terms of information granules.
Revealing interpretable and conceptually stable associations (relationships) within data forms another central item on the agenda of data analytics. We show how granular mappings engaging granular parameter spaces are developed and assessed. Associative relationships constructed in terms of granular bidirectional and multidirectional associative memories are investigated. We also develop granular autoencoders and stacked granular autoencoders considered in deep-learning architectures.
Dr. Dominik Szczerba
Department of Theoretical Physics, University of Silesia, Poland
Bio: Dominik Szczerba is an assistant professor at the Department of Theoretical Physics at the University of Silesia in Poland. He obtained his PhD in physics from ETH Zürich. His main research interest is mathematical and computational modeling in biology and physiology. His research in this area started after his PhD, when he joined the medical image analysis group at ETH. He contributed to the understanding of how blood vessels are formed on the microscale. In particular, he was able to show that the formation of bifurcations, feeding arteries and draining veins can be explained purely by physical and chemical phenomena. He also studied the complex blood flow conditions in the ascending aorta, the abdominal bifurcation, and cranial and abdominal aneurysms, and was again able to show that wall failure in abdominal bifurcations can be related directly to mechano-chemical factors. He has also been active in the CFD community, where he re-derived established computational methods to take full advantage of unstructured body-fitted meshes and modern hybrid hardware. He also has significant experience in the generation of complex computational models from medical image data. For several years he was a key member of the Virtual Population project.
Abstract: Recent progress in high-performance computing has made it possible to perform realistic simulations of setups involving complex anatomical models. This is of particular relevance to the field of computational life sciences. Simulations are now increasingly used to assess the safety and efficacy of medical devices, optimize treatments considering patient-specific information, study different types of exposure situations (physical, chemical), investigate hypotheses on biological and physiological interaction pathways, etc. Modeling offers a way to quickly test hypotheses, to vary parameters in a highly controlled manner, and to obtain detailed information that might not be accessible by experimental measurements. The quality of simulations (level of detail, realism) has reached a level where modeling involving anatomical models is becoming an accepted approach for regulatory agencies such as the FDA, and a valuable tool during the development process in the medtech industry. A large number of detailed anatomical models have become available in recent times, which makes it possible to assess the impact of individual variations across the population. As such simulations can quickly consume large amounts of memory and processor time, high-performance computing approaches like parallelization and hardware acceleration become indispensable. The widespread availability of multi-core CPUs and GPUs is a critical component that has made these approaches accessible to a large public. Another key factor is the increasing availability of medical image data (MRI, CT, etc.), which allows the creation of realistic geometrical models as well as the assignment of inhomogeneous material parameter distributions and boundary conditions. In this talk I will present a few cases demonstrating how complex anatomical models and state-of-the-art simulation methods can be used to tackle real-life problems and provide practical solutions.
Prof. Jean-Charles Lamirel
SYNALP team, LORIA, Vandœuvre-lès-Nancy, France
Bio: Jean-Charles Lamirel has been a lecturer since 1997. He received his PhD in 1995 and his Research Accreditation in 2010. He currently teaches Information Science and Computer Science at the University of Strasbourg and conducts his research at the INRIA-LORIA laboratory in Nancy (France). He was a research member of the INRIA-CORTEX project, whose scope is neural networks and biological systems. He then joined the INRIA-TALARIS project (which recently became the LORIA SYNALP project), whose main concern is automatic language and text processing. His main research domain is textual data mining based on neural networks. He has interests in both theoretical models for data mining and data mining applications, and is more specifically specialized in unsupervised learning methods. He is the creator of the paradigms of Data Analysis based on Multiple Viewpoints (MVDA) and of the Measure based on Feature Maximization (F-Max). The related models, which have been shown to outperform classical models, are beginning to be used in many challenging data mining applications. His other main research topics concern visualization methods for data analysis, quality issues in data analysis, and novelty detection models. He and his tools have been involved in several European projects on webometrics and data analysis, such as the recent EISCTES and QUAERO projects. He is a board member of international webometrics journals and an organizer of international conferences in the same domain, such as the recent WSOM+ 2017 conference. His research work and supervision have led to the successful defense of more than 10 Ph.D. theses and have generated a substantial scientific output: more than 27 invited talks, more than 16 special sessions at international conferences, and more than 150 publications in international conferences and journals.
This work has also earned him the recognition of many prestigious foreign institutional partners, such as NIEHS (USA), NSC (Taiwan), KU Leuven (Belgium), UTS (Australia), and the University of Santiago (Chile).
Title: Text mining in the big data context: existing approaches and challenges
Abstract: It is a truism that we have entered an era in which textual data in all its forms is overwhelming all of us, whether in our personal or professional environments: the growing number of documents required by companies and governments, the profusion of textual data available via the Internet, the development of freely accessible data (open data), digital libraries and online archives, and web and social media data are just a few examples illustrating the evolution of the notion of text, its diversity and its proliferation.
Engaging with automatic methods of data mining, and more specifically with those of text mining, has thus become essential. Recently, deep learning methods have created new research opportunities for processing massive data. However, many questions remain unresolved, for example with regard to the management of large textual collections. Having effective tools for textual analysis, capable of adapting to large volumes of data that are often heterogeneous, rarely structured, written in various languages, and expressed in very specialized terms or, on the contrary, in natural language form, remains a challenge.
This tutorial will cover the essential aspects of text mining and the multiple research areas involved in it, such as automatic language processing, artificial intelligence, linguistics and statistics. Different techniques, such as text summarization, text categorization, topic extraction and sentiment analysis, will be explored in the context of very diverse applications, such as information retrieval, spam filtering, recommender systems, scientific or economic surveys, and even counter-terrorism and forensics.
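To give a flavor of one of the techniques mentioned above, here is a minimal sketch of text categorization using a bag-of-words, naive-Bayes-style scorer. The corpus, category labels, and function names are invented for illustration only; real text mining pipelines add tokenization, feature selection, and much larger collections.

```python
from collections import Counter

# Toy corpus: two categories, a handful of short documents each.
# All data here is illustrative, not drawn from any real collection.
train = [
    ("sports", "the team won the match"),
    ("sports", "great goal in the final game"),
    ("politics", "the minister announced a new law"),
    ("politics", "parliament voted on the budget"),
]

def train_counts(docs):
    """Count word occurrences per category (a bag-of-words model)."""
    counts = {}
    for label, text in docs:
        counts.setdefault(label, Counter()).update(text.split())
    return counts

def classify(counts, text):
    """Score each category by smoothed word frequency; pick the best."""
    def score(label):
        c = counts[label]
        total = sum(c.values())
        s = 1.0
        for word in text.split():
            s *= (c[word] + 1) / (total + len(c))  # add-one smoothing
        return s
    return max(counts, key=score)

model = train_counts(train)
print(classify(model, "who won the game"))       # "sports"
print(classify(model, "a vote on the new law"))  # "politics"
```

The same count-then-score skeleton underlies several of the listed applications, for instance spam filtering, where the two categories are simply "spam" and "ham".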
Requirements for attendees (Skills): none