Keynote Speakers of MICAD2022
(Alphabetized by Last Name)
Asst. Prof. Ehsan Adeli
Stanford University, United States
Co-director of Stanford AGILE (Advancing technoloGy for fraIlty and LongEvity) Consortium
Director of Mind and Motion Lab
Senior Member of IEEE
Ehsan Adeli, Ph.D., is an Assistant Professor in the Department of Psychiatry and Behavioral Sciences at Stanford University and is affiliated with the Computer Science Department. His research interests include computational neuroscience, computer vision, machine learning, and healthcare. Dr. Adeli is an executive co-director of the Stanford AGILE (Advancing technoloGy for fraIlty and LongEvity) Consortium, which aims to develop methods to diagnose and treat frailty. He is an Associate Editor of two journals in the field: the IEEE Journal of Biomedical and Health Informatics and the Journal of Ambient Intelligence and Smart Environments. He is a Senior Member of IEEE and has served as an area chair for several conferences (MICCAI, CVPR, ICLR, AAAI) over the past 3-4 years.
Speech Title: Dealing with confounders and bias in medical studies in the age of deep learning
Abstract: The presence of confounding effects is inarguably one of the most critical challenges in medical applications. Confounders influence both the input (e.g., neuroimages) and the output (e.g., diagnosis or clinical score) variables and may cause spurious associations when not properly controlled for. Confounding effect removal is particularly difficult for a wide range of state-of-the-art prediction models, including deep learning methods. These methods operate directly on images and extract features in an end-to-end manner. This prohibits removing confounding effects by traditional statistical analysis, which often requires precomputed features (image measurements). In this talk, I will present methods to learn confounder-invariant discriminative features and novel normalization techniques to remove confounding and bias effects while training neural networks.
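For context, the "traditional statistical analysis" on precomputed features that the abstract contrasts with end-to-end deep learning can be sketched as classical confound residualization. This is a minimal illustration on synthetic data, not the speaker's method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: age is a confounder that drives both an image-derived
# feature and the clinical score, creating a spurious feature-score association.
n = 500
age = rng.normal(60, 10, n)                  # confounder
feature = 0.8 * age + rng.normal(0, 1, n)    # precomputed image measurement
score = 0.5 * age + rng.normal(0, 1, n)      # clinical outcome

# Classical residualization: regress the confounder out of the feature
# before any association analysis.
X = np.column_stack([np.ones(n), age])
beta, *_ = np.linalg.lstsq(X, feature, rcond=None)
residual = feature - X @ beta

raw_corr = np.corrcoef(feature, score)[0, 1]
adj_corr = np.corrcoef(residual, score)[0, 1]
print(f"correlation before adjustment: {raw_corr:.2f}")
print(f"correlation after adjustment:  {adj_corr:.2f}")
```

This step requires the feature to exist as an explicit number, which is exactly what an end-to-end network never exposes; hence the need for the in-training techniques the talk describes.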
Prof. Adrian Barbu
Florida State University, United States
Adrian Barbu received a Ph.D. in Mathematics in 2000 from Ohio State University and a Ph.D. in Computer Science in 2005 from the University of California, Los Angeles.
From 2005 to 2007, he was a research scientist and later a project manager in Siemens Corporate Research, working on medical imaging problems.
He received the 2011 Thomas A. Edison Patent Award with his Siemens coauthors for their work on Marginal Space Learning.
In 2007, he joined the Statistics Department at Florida State University as an assistant professor and since 2019 as a professor.
He has published more than 70 papers in computer vision, machine learning, and medical imaging and holds more than 25 patents related to medical imaging and image denoising.
He also wrote a book, "Monte Carlo Methods," with his Ph.D. advisor Song-Chun Zhu, published by Springer in 2020.
Talk title: Organ Segmentation: A Journey from Level Sets to Shape Denoising
Abstract: This talk starts by introducing a recent approach we developed for 2D and 3D organ segmentation that generalizes the Chan-Vese level set method in multiple ways. Chan-Vese is a low-level segmentation method that simultaneously evolves a level set while fitting locally constant intensity models for the interior and exterior regions. Our approach replaces its simple length-based regularization with a shape model based on a U-Net CNN, which needs to be trained using examples. We show how to train this CNN and what types of data augmentation methods can be used to avoid overfitting. The obtained Chan-Vese Neural Network (CVNN) has very good segmentation accuracy while having a small number of parameters compared to other CNN-based models. From here, focusing on the data augmentation part, we stumbled upon a segmentation problem that has not received much attention in the literature, which we call Shape Denoising. Representing shapes as binary images, the problem is to recover the shape of an object (e.g., a liver or a horse) after it was perturbed by some deformations (noise). We study different kinds of noise that perturb the shape in different ways, and empirically compare multiple methods that can recover the original shape from the noisy one. The methods include different CNN models such as the Deep Boltzmann Machine (DBM), Centered Convolutional DBM, Energy-Based Models, U-Net, and Masked Autoencoder. We observe that some noises are more difficult than others, and that the U-Net and Masked Autoencoder consistently outperform the other methods on all types of noise. In the future, we plan to study shape denoising in the wild, where the shapes are not aligned, making the problem more difficult and overfitting more severe.
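The classical Chan-Vese baseline that the talk generalizes is available in scikit-image; a minimal sketch on a synthetic noisy disk (the CVNN itself, which swaps the length regularization for a learned U-Net shape prior, is not shown here):

```python
import numpy as np
from skimage.segmentation import chan_vese

# Synthetic "organ": a bright disk on a darker background with Gaussian noise.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[:64, :64]
disk = ((yy - 32) ** 2 + (xx - 32) ** 2) < 15 ** 2
image = disk.astype(float) + rng.normal(0, 0.2, (64, 64))

# Classical Chan-Vese: evolve a level set while fitting constant intensity
# models for interior and exterior; mu is the length-based regularization
# weight that the CVNN replaces with a learned shape model.
seg = chan_vese(image, mu=0.25)

# The sign of the converged level set is arbitrary; keep the orientation
# that overlaps the foreground.
if np.logical_and(seg, disk).sum() < np.logical_and(~seg, disk).sum():
    seg = ~seg

# Overlap with the ground-truth disk (Dice score).
dice = 2 * np.logical_and(seg, disk).sum() / (seg.sum() + disk.sum())
print(f"Dice vs. ground truth: {dice:.2f}")
```

On this easy synthetic image the purely intensity-driven baseline already does well; the learned shape prior matters when intensity alone is ambiguous.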
Prof. Liangxiu Han
Manchester Metropolitan University, UK
Co-Director of Centre for Advanced Computational Science
Deputy Director of Crime and Wellbeing Big Data Centre
Prof. Liangxiu Han holds a PhD in Computer Science from Fudan University, Shanghai, P.R. China (2002). She is currently a Professor of Computer Science in the Department of Computing and Mathematics, Manchester Metropolitan University, where she is a co-Director of the Centre for Advanced Computational Science and Deputy Director of the ManMet Crime and Wellbeing Big Data Centre. Her research mainly lies in the development of novel big data analytics/machine learning/AI methods and of novel intelligent architectures that facilitate big data analytics (e.g., parallel and distributed computing, cloud/service-oriented computing, data-intensive computing), as well as applications in different domains (e.g., precision agriculture, health, smart cities, cyber security, energy) using various large-scale datasets such as images, sensor data, network traffic, web/texts, and geo-spatial data. As a Principal Investigator (PI) or Co-PI, Prof. Han has conducted research on big data/machine learning/AI and cloud, parallel, and distributed computing, funded by EPSRC, BBSRC, Innovate UK, Horizon 2020, the British Council, the Royal Society, industry, and charities. Prof. Han has served as an associate editor or guest editor for a number of reputable international journals and as a chair or co-chair in the organisation of a number of international conferences/workshops in the field. She has been invited to give keynotes and talks on many occasions, including at international conferences and national and international institutions/organisations. Prof. Han is a member of the EPSRC Peer Review College, an independent expert for Horizon 2020 proposal evaluation and mid-term project review, and a member of the British Council Peer Review Panel.
Talk title: Scalable Deep Learning for Alzheimer’s Disease Diagnosis from Large Neuroimaging Data
Abstract: Computer-aided early diagnosis of Alzheimer's disease (AD) and its prodromal form, mild cognitive impairment (MCI), based on structural Magnetic Resonance Imaging (sMRI) has provided a cost-effective and objective way for early prevention and treatment of disease progression, leading to improved patient care. In this work, we have proposed a new scalable deep learning solution for efficient and early Alzheimer's disease diagnosis. Meanwhile, to understand the inner workings of our model and how it reaches its decisions, a visual explanation approach was also applied to identify and visualize the areas that contribute most to the model's decisions. Experimental evaluation shows that the proposed work has a competitive advantage over existing methods.
Prof. Klaus Maier-Hein
Heidelberg University, Germany
Managing Director of Data Science and Digital Oncology at the German Cancer Research Center (DKFZ)
Klaus Maier-Hein is full professor at Heidelberg University and Managing Director of Data Science and Digital Oncology at the German Cancer Research Center (DKFZ). He heads the Division of Medical Image Computing at the DKFZ and the Pattern Analysis and Learning Group at Heidelberg University Hospital. After studying computer science at Karlsruhe Institute of Technology and École Polytechnique Fédérale de Lausanne he received his PhD in computer science in 2010 from the University of Heidelberg, followed by postdoctoral work at DKFZ and Harvard Medical School. His research is focused on deep learning methodology in the context of medical imaging and the development of research software infrastructure for efficient translation of results.
Speech Title: Machine Learning in Medical Imaging: Current Challenges
Abstract: Despite its vast potential, the actual practice-changing clinical impact of machine learning in medical imaging has so far been rather modest. Why is that? The talk covers several major challenges that I consider essential in unlocking the full potential of machine learning in medical imaging, and I present current examples of our ongoing research that address them.
Prof. Hongen Liao
Tsinghua University, China
Prof. Hongen Liao is currently a Full Professor and Vice Dean in the School of Medicine and the Department of Biomedical Engineering, Tsinghua University, China. He has been a National Distinguished Professor of China since 2010. He has made numerous major achievements in 3D autostereoscopic medical image processing and display and in spatial see-through surgical navigation, solving the long-standing "hand-eye discoordination" problem suffered by medical doctors. He has also been involved in long-viewing-distance autostereoscopic display and 3D visualization. He is the author or co-author of more than 320 peer-reviewed articles and proceedings papers, including publications in IEEE Transactions, Nature Photonics, Theranostics, and Medical Image Analysis, as well as 80 international invited lectures, over 60 patents, and 340 conference abstracts.
He has been an Associate Editor of the International Conference of the IEEE Engineering in Medicine and Biology Society since 2008, and served as the Organization Chair of the Medical Imaging and Augmented Reality Conference (MIAR) 2008, the Program Chair of the Asian Conference on Computer-Aided Surgery (ACCAS) 2008 and 2009, the Tutorial Co-chair of the Medical Image Computing and Computer Assisted Intervention Conference (MICCAI) 2009, the Publicity Chair of MICCAI 2010, the General Chair of MIAR 2010 and ACCAS 2012, the Workshop Chair of MICCAI 2013, and the General Co-chair of MIAR 2016 and ACCAS 2018. He has served as President of the Asian Society for Computer Aided Surgery and Co-chair of the Asian-Pacific Activities Working Group of the International Federation for Medical and Biological Engineering (IFMBE).
Speech Title: 3D Medical Imaging and Visualization for Intelligent Minimally Invasive Surgery
Prof. Dr. Thomas Schultz
University of Bonn, Germany
Head of the Visualization and Medical Image Analysis Group
Thomas Schultz is a university professor for Life Science Informatics and Visualization at the University of Bonn, Germany, where he is heading the Visualization and Medical Image Analysis Group at the B-IT and Department of Computer Science. His work focuses on the development and integration of computational tools for quantitative image analysis, machine learning, and interactive visualization, in order to gain insights from large, complex, and dynamic image data, which challenges traditional approaches to image analysis and interpretation. He has served as an area chair/IPC member at various conferences, including MICCAI, MIDL, IEEE VIS, EuroVis, PacificVis, and VCBM.
Speech title: Interpretable and Interactive Machine Learning for Medical Image Analysis
Abstract: In this talk, I will argue that making machine learning approaches interpretable and interactive is important to realize their full potential for medical image analysis. Interpretability increases the trustworthiness of automated methods by providing some level of insight into their decision making process. Suitable interaction techniques make it efficient to proofread automated results and to correct remaining errors. I will illustrate these points with specific examples from our recent work on detecting peripheral arterial disease based on color fundus photography, and on segmentation correction in optical coherence tomography.
Prof. Ronald Summers
Senior Investigator and Staff Radiologist at the NIH
Ronald M. Summers received the B.A. degree in physics and the M.D. and Ph.D. degrees in Medicine/Anatomy & Cell Biology from the University of Pennsylvania. In 1994, he joined the Diagnostic Radiology Department at the NIH Clinical Center in Bethesda, MD, where he is now a tenured Senior Investigator and Staff Radiologist. In 2013, he was named a Fellow of the Society of Abdominal Radiologists. He is currently Chief of the Clinical Image Processing Service and directs the Imaging Biomarkers and Computer-Aided Diagnosis (CAD) Laboratory. In 2000, he received the Presidential Early Career Award for Scientists and Engineers, presented by Dr. Neal Lane, President Clinton's science advisor. In 2012, he received the NIH Director's Award, presented by NIH Director Dr. Francis Collins. His research interests include deep learning, virtual colonoscopy, CAD, and the development of large radiologic image databases. His clinical areas of specialty are thoracic and abdominal radiology and body cross-sectional imaging. He is a member of the editorial boards of the Journal of Medical Imaging and Academic Radiology and a past member of the editorial board of Radiology. He is a program committee member of the Computer-Aided Diagnosis section of the annual SPIE Medical Imaging conference and was co-chair of the entire conference in 2018 and 2019. He was Program Co-Chair of the 2018 IEEE ISBI symposium.
Speech Title: Challenges and Opportunities for AI in Abdominal Radiology
Abstract: AI in radiology is demonstrating explosive growth. Beneficial applications of AI in radiology for patient care are being implemented and on the horizon. AI for abdominal radiology is a relatively understudied area with potential clinical benefits. In this presentation, I will describe some of the many applications of AI in abdominal radiology including opportunistic screening, body composition analysis, and detection, segmentation and classification of major organ diseases.
Prof. Tolga Tasdizen
University of Utah, United States
Dr. Tasdizen is a Professor of Electrical and Computer Engineering and a faculty member of the Scientific Computing and Imaging (SCI) Institute at the University of Utah. His areas of expertise are image processing, biomedical image analysis and machine learning. His laboratory has been funded by the National Institutes of Health, the National Science Foundation, the Department of Energy and the Department of Homeland Security. He received the National Science Foundation’s Early CAREER award in 2012. Dr. Tasdizen’s research emphasizes developing novel solutions in image analysis and machine learning as well as making contributions to the driving medical applications. He is particularly interested in deep learning applications where labeled data is scarce, and researches alternative methods of supervision and semi-supervised learning approaches for solving these problems. He has served as an Associate Editor for IEEE Transactions on Image Processing, IEEE Signal Processing Letters and BMC Bioinformatics, Area Chair for MICCAI, and currently serves as a Senior Area Editor for IEEE Transactions on Image Processing.
Speech Title: Stain-based Contrastive Learning for Histopathological Image Classification
Abstract: We will present a novel semi-supervised learning approach for classification of histopathology images. We demonstrate how strong supervision with patch-level annotations can be combined with a novel co-training loss to create a semi-supervised learning framework. Co-training relies on multiple conditionally independent and sufficient views of the data. We separate the hematoxylin and eosin channels in pathology images using color deconvolution to create two views of each slide that can partially fulfill these requirements. Two separate CNNs are used to embed the two views into a joint feature space. We use a contrastive loss between the views in this feature space to implement co-training. We evaluate our approach in clear cell renal cell and prostate carcinomas, and demonstrate improvement over state-of-the-art semi-supervised learning methods.
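The stain-separation step that produces the two co-training views is standard color deconvolution, available in scikit-image. A minimal sketch on a synthetic patch (the two co-trained CNNs and the contrastive loss are not reproduced here):

```python
import numpy as np
from skimage.color import rgb2hed

# Synthetic RGB patch standing in for an H&E slide tile; in practice a
# real patch would be loaded from a whole-slide image.
rng = np.random.default_rng(0)
patch = rng.uniform(0.2, 0.9, (32, 32, 3))

# Color deconvolution into hematoxylin, eosin, and DAB channels.
hed = rgb2hed(patch)
hematoxylin, eosin = hed[..., 0], hed[..., 1]

# Each stain channel becomes one "view" of the slide, fed to its own CNN;
# a contrastive loss in the joint feature space then implements co-training.
print(hematoxylin.shape, eosin.shape)
```

Co-training assumes the views are conditionally independent and individually sufficient; as the abstract notes, the two stain channels only partially fulfill these requirements, which is why they serve as practical rather than exact views.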
Prof. Sotirios A Tsaftaris
University of Edinburgh, UK
Canon Medical/Royal Academy of Engineering Research Chair in Healthcare AI
Chair in Machine Learning and Computer Vision
Turing Fellow, ELLIS Fellow
Sotirios A. Tsaftaris is currently Chair (Full Professor) in Machine Learning and Computer Vision at the University of Edinburgh. He also holds the Canon Medical/Royal Academy of Engineering Research Chair in Healthcare AI. He is a Turing Fellow with the Alan Turing Institute and an ELLIS Fellow of the European Lab for Learning and Intelligent Systems (ELLIS), affiliated with Edinburgh's ELLIS Unit. His research interests are image analysis, image processing, data mining and machine learning, and distributed computing. Core research applications are in computer-aided diagnosis in medicine and phenotyping in biology.
Prof. Tsaftaris has published extensively, particularly in interdisciplinary fields, with more than 180 journal and conference papers to his name, written with a variety of co-authors and collaborators.
He has served on many technical program committees of international conferences and actively reviews for several prestigious international journals; most notably, he is currently an Associate Editor (AE) for IEEE Transactions on Medical Imaging. He served as an AE for the IEEE Journal of Biomedical and Health Informatics (2011-2021) and Elsevier DSP (2014-2018). He was Tutorial Chair for ECCV 2020 and Doctoral Symposium Chair for IEEE ICIP 2018 (Athens). He has served as an area chair for CVPR 2021, MICCAI 2018 (Granada), ICME 2018 (San Diego), ICCV 2017 (Venice), MMSP 2016 (Montreal), and VCIP 2015 (Singapore). He has also co-organized workshops and tutorials for ECCV (2020, 2014), CVPR (2019), ICCV (2017), BMVC (2015), and MICCAI (2016, 2017, 2021). He is a Senior Member of the IEEE and a member of ISMRM and SCMR.
Speech Title: Diffusion Models in Medical Imaging and Analysis. Hype or Hope?
Abstract: Generative models, such as VAEs, GANs, and Normalising Flows, have been extremely useful in medical imaging and analysis for finding useful representation spaces, creating additional unseen examples, or simply acting as regularisers within a multitask learning setting. A new breed of models is now receiving considerable attention in AI and computer vision. Diffusion models are now empowering famous examples of massive multimodal (text/image) generative models such as Stable Diffusion (Stability AI), Imagen (Google), and DALL-E (OpenAI). But are they full of hype or hope? And what is their role in medical imaging and analysis? In this talk, we will briefly recap the theory of diffusion models, summarise recent papers that use diffusion models in medical imaging and analysis, and offer inspiring papers from computer vision to help guide future directions in the use of these models in our field. We conclude that such models are far from being …!
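As a primer for the theory recap the talk promises, the forward (noising) process of a DDPM-style diffusion model has a simple closed form; a toy sketch on a 1-D signal (illustrative only, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear beta schedule over T steps and the cumulative signal-retention
# factor alpha_bar_t = prod_{s<=t} (1 - beta_s).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas_bar = np.cumprod(1.0 - betas)

x0 = np.sin(np.linspace(0, 2 * np.pi, 64))   # stand-in for an image

def q_sample(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(alpha_bar_t) x_0, (1 - alpha_bar_t) I)."""
    eps = rng.normal(size=x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps, eps

xt, eps = q_sample(x0, t=T - 1, rng=rng)

# Near t = T the signal is almost pure noise; training fits a network to
# predict eps from (x_t, t), and sampling reverses the chain step by step.
print(f"signal scale at final step: {np.sqrt(alphas_bar[-1]):.4f}")
```

The learned part of a diffusion model is entirely in the reverse process; the forward process above is fixed, which is what makes the training objective a simple denoising regression.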
Prof. Linwei Wang
Rochester Institute of Technology, United States
Dr. Linwei Wang is a Professor of Computing and Information Sciences at the Rochester Institute of Technology in Rochester, NY. She directs RIT's Signature Interdisciplinary Research Area in Personalized Health Technology. She also directs the Computational Biomedical Lab (CBL), which conducts interdisciplinary research at the intersection of artificial intelligence and healthcare, especially in the development of Bayesian inference and Bayesian deep learning techniques for health data understanding. Her group's research is supported by over $8 million in funding from the National Science Foundation and the National Institutes of Health. Dr. Wang is a current member of the Board of the MICCAI Society. She is a recipient of the NSF CAREER Award in 2014 and the United States' Presidential Early Career Award for Scientists and Engineers (PECASE) in 2019.
Speech Title: Few-shot Generation of Personalized Neural Surrogates for Cardiac Simulation
Abstract: Clinical adoption of personalized virtual heart simulations faces challenges in model personalization and expensive computation. While an ideal solution is an efficient neural surrogate that is at the same time personalized to an individual subject, the state of the art is either concerned with personalizing an expensive simulation model or with learning an efficient yet generic surrogate. This paper presents a completely new concept to achieve personalized neural surrogates in a single coherent framework of meta-learning (metaPNS). Instead of learning a single neural surrogate, we pursue the process of learning a personalized neural surrogate using a small amount of context data from a subject, in a novel formulation of few-shot generative modeling underpinned by: 1) a set-conditioned neural surrogate for cardiac simulation that, conditioned on subject-specific context data, learns to generate query simulations not included in the context set, and 2) a meta-model of amortized variational inference that learns to condition the neural surrogate via simple feed-forward embedding of context data. At test time, metaPNS delivers a personalized neural surrogate by fast feed-forward embedding of a small and flexible number of data available from an individual, achieving, for the first time, personalization and surrogate construction for expensive simulations in one end-to-end learning framework. Synthetic- and real-data experiments demonstrated that metaPNS was able to improve personalization and predictive accuracy in comparison to conventionally optimized cardiac simulation models, at a fraction of the computation.
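The set-conditioning idea behind metaPNS, conditioning a predictor on a feed-forward, permutation-invariant embedding of a small context set, can be illustrated with a toy sketch. All shapes and weights here are made up for illustration; this is not the authors' architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy set-conditioned predictor: embed each (input, output) context pair,
# mean-pool into one conditioning vector (permutation invariant), and
# concatenate it with the query input. Weights are random here; in a
# trained meta-model they would be learned across many subjects.
d_in, d_ctx = 4, 8
W_pair = rng.normal(0, 0.1, (d_in + 1, d_ctx))  # embeds one context pair
W_out = rng.normal(0, 0.1, (d_in + d_ctx, 1))   # maps query + condition -> output

def embed_context(ctx_x, ctx_y):
    pairs = np.concatenate([ctx_x, ctx_y[:, None]], axis=1)
    return np.tanh(pairs @ W_pair).mean(axis=0)  # mean-pool over the set

def predict(query_x, ctx_x, ctx_y):
    c = embed_context(ctx_x, ctx_y)              # fast feed-forward "personalization"
    return np.concatenate([query_x, c]) @ W_out

ctx_x = rng.normal(size=(5, d_in))   # small subject-specific context set
ctx_y = rng.normal(size=5)
out = predict(rng.normal(size=d_in), ctx_x, ctx_y)

# Permutation invariance: shuffling the context set leaves the embedding unchanged.
perm = rng.permutation(5)
assert np.allclose(embed_context(ctx_x, ctx_y),
                   embed_context(ctx_x[perm], ctx_y[perm]))
```

Mean-pooling is what makes the context embedding a function of the set rather than the ordering, and the feed-forward path is what replaces per-subject optimization at test time.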