IEEE HPCC 2015 Keynote Speakers

Sun-Yuan Kung
IEEE Fellow, Princeton University, USA
Bio:

Sun-Yuan Kung was born in Taiwan on January 2, 1950. He received the B.S. degree in Electrical Engineering from National Taiwan University in 1971, the M.S. in Electrical Engineering from the University of Rochester in 1974, and the Ph.D. in Electrical Engineering from Stanford University in 1977. From 1977 to 1987, he was on the faculty of Electrical Engineering-Systems at the University of Southern California. In 1984, he was a Visiting Professor at Stanford University and, later in the same year, at the Delft University of Technology. Since September 1987, he has been a Professor in the Department of Electrical Engineering, Princeton University. He currently serves on the IEEE Technical Committees on VLSI Signal Processing and on Neural Networks and is Editor-in-Chief of the Journal of VLSI Signal Processing.
[Download Bio]

Lecture Topic:
Kernel Machine for Visualization and Classification of Big Data

When: 10:40 Aug. 24

Where: Ballroom

Abstract:

Big data comes from many divergent types of sources, from physical (sensor/IoT) to social and cyber (web) types, rendering it messy, imprecise, and incomplete. The intensive computing needs of big data call for special hardware and software technologies for parallel and/or distributed computing systems, with the architectural platform closely coupled with novel, error-tolerant data mining technologies. This talk will attempt a balanced coverage of theoretical foundations, algorithmic innovations, and architectural co-design.
Due to its quantitative (volume and velocity) and qualitative (variety) challenges, big data appears to its users much like "the elephant to the blind men". It is imperative to enact a major paradigm shift in data mining and learning tools so that information from diversified sources can be integrated to unravel the information hidden in massive and messy big data - metaphorically speaking, to let the blind men "see" the elephant. This talk will address yet another vital "V" paradigm: "Visualization". Visualization tools are meant to supplement (rather than replace) domain expertise (e.g. that of a cardiologist) and to provide a big picture that helps users formulate critical questions and subsequently postulate heuristic and insightful answers.
For big data, the curse of high feature dimensionality is causing grave concerns about computational complexity and over-training. In this talk, we shall explore various projection methods for dimension reduction - a prelude to visualization of vectorial and non-vectorial data. A popular visualization tool for unsupervised learning is Principal Component Analysis (PCA). PCA aims at the best recoverability of the original data in the Euclidean Vector Space (EVS). We shall propose a supervised counterpart of PCA - Discriminant Component Analysis (DCA) - in a Canonical Vector Space (CVS). Simulations confirm that DCA far outperforms PCA, both numerically and visually. More importantly, via a proper interplay between anti-recoverability in EVS and discriminant power in CVS, DCA is promising for privacy protection when personal data are shared on the cloud in collaborative learning environments.
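As a rough illustration of the contrast between unsupervised and supervised projections, the sketch below pairs scikit-learn's PCA with Linear Discriminant Analysis (LDA) as a familiar supervised stand-in; DCA itself is not available in standard libraries, and the toy data are purely hypothetical.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # Hypothetical toy data: two labeled classes in a 50-dimensional feature space.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 1.0, (250, 50)),
                   rng.normal(0.5, 1.0, (250, 50))])
    y = np.array([0] * 250 + [1] * 250)

    # Unsupervised projection: PCA keeps the directions that best reconstruct X
    # (best recoverability in the Euclidean vector space).
    X_pca = PCA(n_components=2).fit_transform(X)

    # Supervised stand-in for contrast: LDA keeps the direction that best
    # separates the two classes, analogous in spirit to a discriminant projection.
    X_lda = LinearDiscriminantAnalysis(n_components=1).fit_transform(X, y)

    print(X_pca.shape, X_lda.shape)  # (500, 2) (500, 1)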
We shall extend PCA/DCA to kernel PCA/DCA for the purpose of visualizing nonvectorial data. The success of kernel methods depends critically on which kernel function is used to represent the similarity of a pair of objects. For visualization of nonvectorial and incompletely specified data, our experimental study points to a promising application of multi-kernels, including an imputed Gaussian RBF kernel and a partial correlation kernel. [Download Abstract]
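To make the kernel-method mechanics concrete, here is a minimal sketch (on assumed toy data) showing how a precomputed similarity matrix - a Gaussian RBF kernel in this case - feeds into scikit-learn's KernelPCA; any other similarity, such as a partial-correlation kernel, could be plugged into the same slot.

    import numpy as np
    from sklearn.decomposition import KernelPCA

    # Hypothetical toy objects represented by 20 features each.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 20))

    # Build a Gaussian RBF kernel matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2).
    gamma = 1.0 / X.shape[1]
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    K = np.exp(-gamma * sq_dists)

    # Kernel PCA operates directly on the kernel matrix, so it never needs the
    # objects themselves - which is what makes nonvectorial data tractable.
    embedding = KernelPCA(n_components=2, kernel="precomputed").fit_transform(K)
    print(embedding.shape)  # (300, 2)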

Link to personal website

Jack Dongarra
IEEE Fellow and ACM Fellow, University of Tennessee, USA
Bio:

Jack Dongarra received a Bachelor of Science in Mathematics from Chicago State University in 1972 and a Master of Science in Computer Science from the Illinois Institute of Technology in 1973. He received his Ph.D. in Applied Mathematics from the University of New Mexico in 1980. He worked at Argonne National Laboratory until 1989, becoming a senior scientist. He now holds an appointment as University Distinguished Professor of Computer Science in the Computer Science Department at the University of Tennessee and holds the title of Distinguished Research Staff in the Computer Science and Mathematics Division at Oak Ridge National Laboratory (ORNL); Turing Fellow at the University of Manchester; an Adjunct Professor in the Computer Science Department at Rice University; and a Faculty Fellow of the Texas A&M University's Institute for Advanced Study. He is the director of the Innovative Computing Laboratory at the University of Tennessee. He is also the director of the Center for Information Technology Research at the University of Tennessee, which coordinates and facilitates IT research efforts at the University.
He specializes in numerical algorithms in linear algebra, parallel computing, the use of advanced computer architectures, programming methodology, and tools for parallel computers. His research includes the development, testing, and documentation of high-quality mathematical software. He has contributed to the design and implementation of the following open-source software packages and systems: EISPACK, LINPACK, the BLAS, LAPACK, ScaLAPACK, Netlib, PVM, MPI, NetSolve, Top500, ATLAS, and PAPI. He has published approximately 200 articles, papers, reports and technical memoranda and he is coauthor of several books. He was awarded the IEEE Sidney Fernbach Award in 2004 for his contributions to the application of high performance computers using innovative approaches; in 2008 he was the recipient of the first IEEE Medal of Excellence in Scalable Computing; in 2010 he was the first recipient of the SIAM Special Interest Group on Supercomputing's award for Career Achievement; in 2011 he was the recipient of the IEEE IPDPS Charles Babbage Award; and in 2013 he was the recipient of the ACM/IEEE Ken Kennedy Award for his leadership in designing and promoting standards for mathematical software used to solve numerical problems common to high performance computing. He is a Fellow of the AAAS, ACM, IEEE, and SIAM and a member of the National Academy of Engineering. [Download Bio]

Lecture Topic:
Architecture-aware Algorithms and Software for Peta and Exascale Computing

When: 13:30 Aug. 24

Where: Ballroom

Abstract:

In this talk we examine how high performance computing has changed over the last ten years and look toward future trends. These changes have had, and will continue to have, a major impact on our software. Some of the software and algorithm challenges have already been encountered, such as management of communication and memory hierarchies through a combination of compile-time and run-time techniques, but the increased scale of computation, depth of memory hierarchies, range of latencies, and increased run-time environment variability will make these problems much harder.
We will look at five areas of research that will have an important impact on the development of software and algorithms.
We will focus on the following themes (a brief sketch of the mixed-precision theme follows the list):

  • Redesign of software to fit multicore and hybrid architectures
  • Automatically tuned application software
  • Exploiting mixed precision for performance
  • The importance of fault tolerance
  • Communication avoiding algorithms
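To illustrate the mixed-precision theme, here is a minimal numpy sketch of the classic idea behind mixed-precision iterative refinement: solve cheaply in single precision, then refine the answer with double-precision residuals. It is a toy version on assumed data, not the optimized library routines, which would reuse a single low-precision factorization instead of re-solving.

    import numpy as np

    def mixed_precision_solve(a, b, iters=3):
        """Toy mixed-precision iterative refinement: a fast float32 solve
        is corrected by a few float64 residual/update steps."""
        a32 = a.astype(np.float32)
        x = np.linalg.solve(a32, b.astype(np.float32)).astype(np.float64)
        for _ in range(iters):
            r = b - a @ x                                    # residual in double precision
            dx = np.linalg.solve(a32, r.astype(np.float32))  # cheap low-precision correction
            x += dx.astype(np.float64)
        return x

    # Hypothetical well-conditioned test system.
    rng = np.random.default_rng(2)
    a = rng.random((500, 500)) + 500 * np.eye(500)
    b = rng.random(500)
    x = mixed_precision_solve(a, b)
    print(np.linalg.norm(a @ x - b))  # residual close to double-precision accuracy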
[Download Abstract]

    Link to personal website

    Ruqian Lu
    Academician of the Chinese Academy of Sciences
    Chinese Academy of Sciences and Academy of Mathematics and Systems Science (CAS), China
    Bio:

    Ruqian Lu is a professor of computer science at the Institute of Mathematics, Academy of Mathematics and Systems Science, and at the same time an adjunct professor at the Institute of Computing Technology, Chinese Academy of Sciences, and at Peking University. He is also a fellow (academician) of the Chinese Academy of Sciences. His research interests include artificial intelligence, knowledge engineering, knowledge-based software engineering, formal semantics of programming languages, and quantum information processing. He has published more than 180 papers and 10 books. He has won two first-class awards from the Chinese Academy of Sciences and a national second-class prize from the Ministry of Science and Technology. He has also won the 2003 Hua Loo-keng Mathematics Prize from the Chinese Mathematical Society and the 2014 lifetime achievement award from the China Computer Federation. [Download Bio]

    Lecture Topic:
    Combining Process Algebra with Logic Programming

    When: 8:30 Aug. 24

    Where: Ballroom

    Abstract:

    This talk presents Knorc - a calculus for KNowledge based ORChestration - which is a conservative extension of the Orc calculus. Orc is, as claimed by its authors, a language for wide-area computation and was developed at the University of Texas at Austin. It is simple and powerful, with site calls as program units and four combinators to compose them. There has been quite a lot of follow-up work along this line. Knorc is yet another extension of Orc, in the direction of knowledge processing. The main new ingredient is logic programming, whose combination with process algebra is a major technical challenge in the design of Knorc. Besides introducing new possibilities for implementing site calls, the advantages of this combination include better-structured programs, separation of knowledge content from control flow, and reusability of knowledge. The second main ingredient is the availability of a set of different parallel programming paradigms, which makes Knorc a process algebra not only with logic programming facilities, but also with powerful parallel logic programming facilities. In particular, it is possible to do massively parallel programming in Knorc. The third main ingredient is the introduction of a specific data type - abstract knowledge sources - to increase its knowledge processing power. While Orc has no data types at all, several extension works in the literature have introduced different data types, e.g. the XML data type. The introduction of abstract knowledge sources makes Knorc a language based on the Open World Assumption rather than the Closed World Assumption. We have formalized the syntax and semantics of Knorc. A first implementation of Knorc is underway.
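Knorc itself is not publicly available, so as a purely illustrative sketch the snippet below mimics the flavor of Orc's parallel combinator (f | g) and sequential combinator (f >x> g) with Python asyncio; the "site" coroutines and their timings are hypothetical stand-ins for real site calls.

    import asyncio

    # Hypothetical "sites" standing in for Orc's site calls.
    async def site_a():
        await asyncio.sleep(0.2)
        return "A"

    async def site_b():
        await asyncio.sleep(0.1)
        return "B"

    async def parallel(*sites):
        # Analogue of Orc's parallel combinator f | g: run the sites
        # concurrently and publish each value as soon as it arrives.
        for done in asyncio.as_completed(list(sites)):
            yield await done

    async def main():
        # Analogue of sequential composition f >x> g: every published
        # value x is fed into a further step (a print stands in for g(x)).
        async for x in parallel(site_a(), site_b()):
            print("published:", x)

    asyncio.run(main())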


    Tarek El-Ghazawi
    IEEE Fellow, George Washington University, USA
    Bio:

    Tarek El-Ghazawi is a Professor in the Department of Electrical and Computer Engineering at The George Washington University, where he leads the university-wide Strategic Academic Program in High-Performance Computing. His research interests include high-performance computing, computer architecture, reconfigurable computing and parallel programming.
    He is the founding director of The GW Institute for Massively Parallel Applications and Computing Technologies (IMPACT) and a founding Co-Director of the NSF Industry/University Center for High-Performance Reconfigurable Computing (CHREC). He is one of the principal co-authors of the UPC parallel programming language and the primary author of the UPC book from John Wiley and Sons. He received his Ph.D. degree in Electrical and Computer Engineering from New Mexico State University in 1988. El-Ghazawi has published well over 250 refereed research publications in this area. Dr. El-Ghazawi has served in many editorial roles, including as an Associate Editor for the IEEE Transactions on Computers. He has chaired and co-chaired many international conferences and symposia. He has served on many advisory boards and in consulting roles, including service as a consultant at NASA GSFC and NASA Ames. Dr. El-Ghazawi's research has been frequently supported by federal agencies and industry, including DARPA/DoD, NSF, DoE/LBNL, AFRL, NASA, IBM, HP, Intel, AMD, SGI, and Microsoft. El-Ghazawi is a Fellow of the IEEE, a Research Faculty Fellow of the IBM Center for Advanced Studies, Toronto; a recipient of the Alexander von Humboldt Research Award; and a recipient of the Alexander Schwarzkopf Prize for Technical Innovation and the GW SEAS Distinguished Researcher Award. He also served as a U.S. Senior Fulbright Scholar. [Download Bio]

    Lecture Topic:
    Exploiting Hierarchical Locality for Productive Extreme Computing

    When: 10:40 Aug. 25

    Where: Room 1D

    Abstract:

    Modern high-performance computers are characterized by massive hardware parallelism and deep hierarchies. Hierarchical levels may include cores, dies, chips, and nodes, to name a few. Locality exploitation at all levels of the hierarchy is a must, as the cost of data transfers can be high. Programmers' knowledge and the expressivity of locality-aware programming models such as the Partitioned Global Address Space (PGAS) can be very useful. However, locality awareness can come at a high cost: asking programmers to express locality relations at multiple levels of the architectural hierarchy is detrimental to productivity, so systems and hardware must provide adequate support for exploiting hierarchical locality.
    In this talk I will discuss a framework for understanding and exploiting hierarchical locality in preparation for the next era of extreme computing. The role of system and hardware support will be highlighted, and examples will be shared. [Download Abstract]
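As a toy illustration of why locality matters (and not the speaker's framework), the sketch below applies the familiar tiling idea at the lowest level of the hierarchy: a matrix transpose processed block by block so that each block stays cache-resident. The same blocking principle recurs, level by level, up the hierarchy.

    import numpy as np

    def blocked_transpose(a, tile=64):
        # Transpose tile by tile: each small block of the source and the
        # destination is touched while it is still hot in cache, instead of
        # striding across the whole matrix for every element.
        n, m = a.shape
        out = np.empty((m, n), dtype=a.dtype)
        for i in range(0, n, tile):
            for j in range(0, m, tile):
                out[j:j + tile, i:i + tile] = a[i:i + tile, j:j + tile].T
        return out

    a = np.random.rand(1024, 1024)
    assert np.array_equal(blocked_transpose(a), a.T)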

    Link to personal website