IEEE ICESS 2015 Keynote Speakers
Sun-Yuan Kung
IEEE Fellow, Princeton University, USA
Bio:

Sun-Yuan Kung was born in Taiwan on January 2, 1950. He received his B.S. in Electrical Engineering from National Taiwan University in 1971, his M.S. in Electrical Engineering from the University of Rochester in 1974, and his Ph.D. in Electrical Engineering from Stanford University in 1977. From 1977 to 1987, he was on the faculty of Electrical Engineering-Systems at the University of Southern California. In 1984, he was a Visiting Professor at Stanford University and, later that year, at the Delft University of Technology. Since September 1987, he has been a Professor in the Department of Electrical Engineering at Princeton University. He currently serves on the IEEE Technical Committees on VLSI Signal Processing and Neural Networks and as Editor-in-Chief of the Journal of VLSI Signal Processing.

Lecture Topic:
Kernel Machine Learning for Big Data

When: 10:40 Aug. 24

Where: Ballroom

Abstract:

The intensive computing needs of big data will undoubtedly necessitate special hardware and software technologies for high-performance (parallel and/or distributed) systems, whose architectural platforms must depend closely on a novel "big data" algorithmic paradigm. While not yet well defined, big data is commonly characterized by the 3Vs: Volume, Variety, and Velocity. This talk shall explore the 3Vs from a kernel learning perspective.
Regarding the "volume" of data, there are two separate issues: (1) large training data size and (2) high feature dimensionality. As to large data size, we shall review various statistical and algebraic approaches, including divide-and-conquer, K-means for data partitioning, and selection criteria based on the kernel matrix. The typical answer to high feature dimensionality is dimension reduction, an effective antidote to two feature-dimension-related problems: computation cost and data overtraining. For unsupervised learning scenarios, a classical reduction method is Principal Component Analysis (PCA). We shall show that PCA's trace-norm optimization can be extended to supervised learning applications. More precisely, by incorporating the SNR metric of Fisher Discriminant Analysis into the formulation, we can derive Discriminant Component Analysis (DCA), which may be viewed as the supervised-learning counterpart of PCA.
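For concreteness, here is a minimal numerical sketch of the two reductions in Python/NumPy. The PCA step is standard; since the exact DCA formulation is given in the talk, the supervised step below uses the classical Fisher scatter-ratio criterion only as a stand-in, on illustrative toy data:

import numpy as np

# Toy data: N samples, M features, binary labels (class 1 mean-shifted).
rng = np.random.default_rng(0)
N, M = 200, 10
X = rng.normal(size=(N, M))
y = rng.integers(0, 2, size=N)
X[y == 1] += 1.0

Xc = X - X.mean(axis=0)                        # center the features

# Unsupervised: PCA keeps the top-k eigenvectors of the covariance,
# i.e. maximizes trace(W.T @ S @ W) over orthonormal W.
S = Xc.T @ Xc / N
evals, evecs = np.linalg.eigh(S)
W_pca = evecs[:, np.argsort(evals)[::-1][:2]]  # k = 2 components

# Supervised stand-in (Fisher criterion): weigh between-class scatter
# S_b against within-class scatter S_w via eigenvectors of inv(S_w) @ S_b.
mu = X.mean(axis=0)
S_b = np.zeros((M, M))
S_w = np.zeros((M, M))
for c in (0, 1):
    Xk = X[y == c]
    d = (Xk.mean(axis=0) - mu)[:, None]
    S_b += len(Xk) * (d @ d.T)
    S_w += (Xk - Xk.mean(axis=0)).T @ (Xk - Xk.mean(axis=0))
evals, evecs = np.linalg.eig(np.linalg.solve(S_w, S_b))
order = np.argsort(np.real(evals))[::-1][:2]
W_sup = np.real(evecs[:, order])

Z_pca, Z_sup = Xc @ W_pca, Xc @ W_sup          # reduced representations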
The second V-issue (variety) is inevitable for big data, which by definition draws on many divergent types of sources, from physical (sensor/IoT) to social and cyber (web) types. Some of the data may be fuzzy, unreliable, or heterogeneously formatted. In more severe scenarios, the data could be defective, messy, and partially missing. This prompts a relatively new application paradigm of incomplete data analysis (IDA). For big data, it is inevitable to encounter missing or defective entries in the columns or rows of the original data matrix. Consequently, if the traditional "total availability" criterion were adopted, too many defective columns or rows would be discarded or, equivalently, too few retained for learning analysis.
In order to maximize data utilization, "total availability" should therefore be replaced by the less restrictive notion of "pairwise availability", leading to the Kernel Approach to Incomplete Data Analysis (KAIDA). KAIDA derives the correlation between the data entries co-existent in both partial vectors of each pair. It is our opinion that imputation amidst highly sparse data tends to be prone to uncertainty and other adverse effects; thus, we shall advocate a non-imputed kernel approach. Furthermore, experimental results will demonstrate the strong resilience of the proposed approach against high data sparsity.
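A rough illustration of the pairwise-availability idea (our own simplified sketch, not the exact KAIDA formulation): the kernel value between two incomplete vectors is computed only over the coordinates observed in both, with the distance rescaled to the full dimension.

import numpy as np

def pairwise_available_rbf(x, y, gamma=1.0):
    # RBF kernel between two vectors whose missing entries are NaN,
    # using only coordinates observed in BOTH vectors ("pairwise
    # availability") and rescaling the distance to the full dimension.
    both = ~(np.isnan(x) | np.isnan(y))
    if not both.any():                  # no co-observed coordinates
        return np.nan                   # the caller must decide what to do
    d2 = np.sum((x[both] - y[both]) ** 2)
    d2 *= x.size / both.sum()           # compensate for the unseen part
    return np.exp(-gamma * d2)

a = np.array([1.0, np.nan, 2.0, 0.5])
b = np.array([0.8, 1.0, np.nan, 0.4])
print(pairwise_available_rbf(a, b))     # uses coordinates 0 and 3 only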
The third V-issue (velocity) is already partially addressed by the aforementioned techniques of dimension reduction, data partitioning, and selection criteria. Nevertheless, it is worth noting that the data size N tends to be enormously large for big data; consequently, kernel learning may be better performed in its intrinsic space rather than in its empirical space. For example, the complexity of SVM learning in the empirical space is of the order of N^2, while the complexity of kernel ridge regression (KRR) in the intrinsic space grows only linearly with N, which is clearly the best possible scenario.
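The empirical-versus-intrinsic trade-off can be made concrete with a short sketch, assuming a finite intrinsic dimension J (here a degree-2 polynomial feature map): the intrinsic-space solve touches the N samples only when forming a J x J system, so its cost grows linearly with N, whereas the empirical-space route must form and solve an N x N kernel matrix.

import numpy as np

def phi(X):
    # Degree-2 polynomial feature map; the intrinsic dimension
    # J = 1 + M + M*(M+1)/2 stays fixed while N grows.
    N, M = X.shape
    quad = np.stack([X[:, i] * X[:, j]
                     for i in range(M) for j in range(i, M)], axis=1)
    return np.hstack([np.ones((N, 1)), X, quad])

rng = np.random.default_rng(1)
N, M, lam = 10_000, 4, 1e-2
X = rng.normal(size=(N, M))
y = rng.normal(size=N)

# Intrinsic space: forming P.T @ P costs O(N * J^2), linear in N,
# and the linear solve is only J x J.
P = phi(X)                              # N x J, here J = 15
J = P.shape[1]
w = np.linalg.solve(P.T @ P + lam * np.eye(J), P.T @ y)
y_hat = P @ w

# Empirical space would instead need the N x N kernel matrix
# K = P @ P.T -- already O(N^2) in memory before any solve.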
This talk will give balanced coverage of both theoretical foundations and practical considerations.


Jack Dongarra
IEEE Fellow and ACM Fellow, University of Tennessee, USA
Bio:

Jack Dongarra received a Bachelor of Science in Mathematics from Chicago State University in 1972 and a Master of Science in Computer Science from the Illinois Institute of Technology in 1973. He received his Ph.D. in Applied Mathematics from the University of New Mexico in 1980. He worked at Argonne National Laboratory until 1989, becoming a senior scientist. He now holds an appointment as University Distinguished Professor of Computer Science in the Computer Science Department at the University of Tennessee; holds the title of Distinguished Research Staff in the Computer Science and Mathematics Division at Oak Ridge National Laboratory (ORNL); and is a Turing Fellow at the University of Manchester, an Adjunct Professor in the Computer Science Department at Rice University, and a Faculty Fellow of the Texas A&M University Institute for Advanced Study. He is the director of the Innovative Computing Laboratory at the University of Tennessee and also the director of the Center for Information Technology Research at the University of Tennessee, which coordinates and facilitates IT research efforts at the University.
He specializes in numerical algorithms in linear algebra, parallel computing, the use of advanced-computer architectures, programming methodology, and tools for parallel computers. His research includes the development, testing and documentation of high quality mathematical software. He has contributed to the design and implementation of the following open source software packages and systems: EISPACK, LINPACK, the BLAS, LAPACK, ScaLAPACK, Netlib, PVM, MPI, NetSolve, Top500, ATLAS, and PAPI. He has published approximately 200 articles, papers, reports and technical memoranda and he is coauthor of several books. He was awarded the IEEE Sid Fernbach Award in 2004 for his contributions in the application of high performance computers using innovative approaches; in 2008 he was the recipient of the first IEEE Medal of Excellence in Scalable Computing; in 2010 he was the first recipient of the SIAM Special Interest Group on Supercomputing's award for Career Achievement; in 2011 he was the recipient of the IEEE IPDPS Charles Babbage Award; and in 2013 he was the recipient of the ACM/IEEE Ken Kennedy Award for his leadership in designing and promoting standards for mathematical software used to solve numerical problems common to high performance computing. He is a Fellow of the AAAS, ACM, IEEE, and SIAM and a member of the National Academy of Engineering.

Lecture Topic:
Architecture-aware Algorithms and Software for Peta and Exascale Computing

When: 13:30 Aug. 24

Where: Ballroom

Abstract:

In this talk we examine how high performance computing has changed over the last ten years and look toward the future in terms of trends. These changes have had, and will continue to have, a major impact on our software. Some of the software and algorithm challenges have already been encountered, such as management of communication and memory hierarchies through a combination of compile-time and run-time techniques, but the increased scale of computation, depth of memory hierarchies, range of latencies, and increased run-time environment variability will make these problems much harder.
We will look at five areas of research that will have an important impact on the development of software and algorithms.
We will focus on the following themes:

  • Redesign of software to fit multicore and hybrid architectures
  • Automatically tuned application software
  • Exploiting mixed precision for performance (see the sketch after this list)
  • The importance of fault tolerance
  • Communication avoiding algorithms
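To make the mixed-precision theme concrete, here is a minimal iterative-refinement sketch in Python/NumPy. It is our own illustration of the general technique, not the LAPACK/PLASMA implementations discussed in the talk: the heavy solve is done in fast float32, and float64 residual corrections recover full accuracy.

import numpy as np

def mixed_precision_solve(A, b, iters=5):
    # Solve Ax = b: do the bulk of the work in float32, then refine
    # the residual in float64 to recover double-precision accuracy.
    A32, b32 = A.astype(np.float32), b.astype(np.float32)
    x = np.linalg.solve(A32, b32).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                               # residual in float64
        d = np.linalg.solve(A32, r.astype(np.float32))
        x += d.astype(np.float64)                   # low-precision correction
    return x

rng = np.random.default_rng(2)
A = rng.normal(size=(500, 500)) + 500 * np.eye(500)  # well-conditioned
b = rng.normal(size=500)
x = mixed_precision_solve(A, b)
print(np.linalg.norm(A @ x - b))                     # near machine epsilon

A production code would factor the float32 matrix once (LU) and reuse the factors across refinement steps; the repeated np.linalg.solve calls above keep the sketch short at the cost of redundant factorizations.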


Sandeep K Shukla
Indian Institute of Technology Kanpur, India
Bio:

Professor Sandeep K. Shukla received his bachelor's degree in Computer Science and Engineering from Jadavpur University, Kolkata, in 1991, and his master's and PhD degrees in Computer Science from the State University of New York at Albany, NY, USA, in 1995 and 1997 respectively. From 1997 to 1999, he worked as a scientist at GTE Labs on telecommunications network management, distributed object technology, and event correlation technologies. Between 1999 and 2001, he worked at Intel Corporation on the formal verification of the Itanium processor and on system-level design languages. From 2001 to 2002, he was a research faculty member at the University of California at Irvine, working on embedded system design. From 2002 to 2015, he was an assistant, associate, and then full professor at Virginia Tech, USA. He co-founded the Center for Embedded Systems for Critical Applications (CESCA) in 2007 and was a director of the center between 2010 and 2012; during his directorship the center grew to 9 tenure-track faculty, 60 graduate students, and a number of research faculty, and crossed the 2-million-dollar-per-year expenditure threshold. Since 2012, he has been focusing on the cyber-security of critical infrastructures. He received the prestigious Presidential Early Career Award for Scientists and Engineers (PECASE) from the White House in 2004 and the Friedrich Wilhelm Bessel Research Award from the Humboldt Foundation, Germany, in 2008, and was named an ACM Distinguished Scientist in 2013 and an IEEE Fellow in 2014. He served as an ACM Distinguished Speaker between 2007 and 2015 and as an IEEE Computer Society Distinguished Visitor between 2008 and 2012. He is currently the Editor-in-Chief of ACM Transactions on Embedded Computing Systems and an Associate Editor of ACM Transactions on Cyber-Physical Systems and Computing Reviews. In the past, he has been an associate editor for IEEE Transactions on Computers, IEEE Transactions on Industrial Informatics, IEEE Design & Test, IEEE Embedded Systems Letters, and many other journals. He has guest-edited more than 15 special issues of various IEEE and ACM journals, has written or edited 9 books, and has published over 200 journal and conference papers. He graduated 12 PhD students and directed five post-doctoral scholars before joining IIT Kanpur in 2015. His group has developed a number of co-simulation tools for the data-communication-enabled smart grid and SCADA systems for industrial automation, for the purpose of cyber-security threat modeling, simulation of cyber-attacks, and mitigation experimentation. He is also an expert in formal methods, formal verification, and program synthesis, which he uses in his cyber-security work, such as software vulnerability detection. His main current focus is the cyber-security of cyber-physical systems, in particular the application of machine learning and formal analysis to distinguish physical-dynamics variations due to stochastic effects from those induced by cyber-attacks.

Lecture Topic:
Smart-Grid: Where Embedded Computing, Communication and Power Systems Meet

When: 14:30 Aug. 24

Where: Ballroom

Abstract:

The vision of a smart grid is predicated upon the pervasive use of embedded intelligence and digital communication techniques in today's power system. As wide area measurement and control techniques are developed and deployed for a more resilient power system, the role of the computing and communication network is becoming inalienable. System state estimation, protection, and control of oscillations are real-time computing applications. Similarly, power system dynamics are influenced by communication delays in the network. Therefore, the extensive integration of the power system with its computing/communication infrastructure mandates that the two be studied as a single distributed cyber-physical system.
In this talk we will discuss some of the problems and solutions germane to this inter-dependency between two critical infrastructures. In particular, we will discuss a power/network co-simulation framework that integrates a power system dynamic simulator and a network simulator using an accurate synchronization mechanism. The accuracy of this co-simulation system is tunable based on the time-scale requirements of the phenomena being studied. This co-simulation can improve the practical investigation of the smart grid and the evaluation of wide area measurement and control schemes. We will also discuss case studies, including an agent-based remote backup relay system simulated and validated on this co-simulation framework.
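As a schematic of the synchronization such a framework requires, here is a simplified lockstep sketch in Python; PowerSim and NetSim are hypothetical stand-ins, not the actual framework from the talk.

class PowerSim:
    # Hypothetical stand-in for a power-system dynamic simulator.
    def advance_to(self, t):
        return {"t": t, "freq": 60.0}       # dummy wide-area measurements
    def apply_controls(self, msgs):
        pass                                # apply (possibly delayed) controls

class NetSim:
    # Hypothetical stand-in for a communication-network simulator.
    def advance_to(self, t, inject):
        return inject                       # ideal zero-delay, zero-loss net

def cosimulate(power_sim, net_sim, t_end, sync_dt):
    # Lockstep synchronization: both simulators advance to a common
    # point, then exchange coupled quantities. Shrinking sync_dt bounds
    # the time-skew error -- the tunable accuracy mentioned above.
    t = 0.0
    while t < t_end:
        t = min(t + sync_dt, t_end)
        measurements = power_sim.advance_to(t)
        delivered = net_sim.advance_to(t, inject=measurements)
        power_sim.apply_controls(delivered)

cosimulate(PowerSim(), NetSim(), t_end=1.0, sync_dt=0.01)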


Xiaodong Wang
Columbia University, USA
Bio:

Professor Xiaodong Wang was an assistant professor in the Department of Electrical Engineering at Texas A&M University from July 1998 to December 2001. In January 2002, he joined the Department of Electrical Engineering at Columbia University as an assistant professor. Dr. Wang's research interests fall in the general areas of computing, signal processing, and communications. He has worked and published extensively in the areas of wireless communications, statistical signal processing, parallel and distributed computing, nanoelectronics, and quantum computing. He received the 1999 NSF CAREER Award and the 2001 IEEE Communications Society and Information Theory Society Joint Paper Award.

Lecture Topic:
Event-driven Decentralized Statistical Signal Processing

When: 15:10 Aug. 25

Where: Room 1D

Abstract:

For many emerging applications that rely on scarce energy resources (e.g., wireless sensor networks), event-driven sampling, in which a sample is taken when a significant event occurs in the signal, is a promising alternative to conventional uniform sampling. In event-based sampling, samples are taken based on the signal amplitude rather than on time; as a result, the signal is encoded in the sampling times, whereas in uniform sampling the sample amplitudes encode the signal. This yields a significant advantage in real-time applications, in which sampling times can be tracked via simple one-bit signaling. In this talk, we present the use of event-driven sampling as a means of information transmission for decentralized detection and estimation. We start with the decentralized detection problem, where we address the challenge of noisy transmission channels via level-triggered sampling. Then, we discuss the sequential estimation of linear regression parameters under a decentralized setup. Using a variant of level-triggered sampling, we design a decentralized estimator that achieves close-to-optimum average stopping time performance and scales linearly with the number of parameters while satisfying stringent energy and computation constraints. Finally, we discuss decentralized sequential joint detection and estimation. Applications in cognitive radio and the smart grid will be presented.
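As a generic illustration of level-triggered sampling and its one-bit signaling (our own sketch, not the exact schemes in the talk): a sensor accumulates a local statistic and transmits a single bit whenever the statistic has changed by a fixed level delta since its last transmission.

import numpy as np

def level_triggered_bits(increments, delta):
    # Accumulate a local statistic (e.g. a log-likelihood ratio) and
    # emit one bit per +/- delta level crossing since the last event;
    # the timing of the bits carries the remaining information.
    events = []                     # (sample index, +1 or -1)
    acc = 0.0
    last = 0.0                      # statistic value at the last event
    for n, inc in enumerate(increments):
        acc += inc
        while acc - last >= delta:  # upward level crossing
            events.append((n, +1))
            last += delta
        while acc - last <= -delta: # downward level crossing
            events.append((n, -1))
            last -= delta
    return events

rng = np.random.default_rng(3)
incs = rng.normal(loc=0.05, scale=0.3, size=200)  # drifting statistic
print(level_triggered_bits(incs, delta=1.0)[:5])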
