Important Dates

January 30, 2017: Workshop/Special Session Proposal Due
March 24, 2017 (Firm): Paper Submission Deadline
May 10, 2017: Authors Notification
June 10, 2017: Camera-ready & Registration


Tutorials

IEEE SmartWorld 2017 and the co-located IEEE UIC 2017, ATC 2017, ScalCom 2017, CBDCom 2017, IoP 2017, and SCI 2017 provide a platform for researchers worldwide to meet and discuss the latest research and educational developments in Smart X. Tutorials offer a unique opportunity to disseminate in-depth information on specific smart world topics at the event.

In general, tutorials attract a broad audience, including professionals, researchers from academia, students, and practitioners who wish to enhance their knowledge of the selected tutorial topic. If you have proposals or questions about tutorials, please contact the Tutorial Chairs, Shiyan Hu (shiyan@mtu.edu) and Haibo He (he@ele.uri.edu).

All registered participants are welcome to attend the following tutorials, which will be held on the first day (August 4) and the last day (August 8) of the conferences.




Smart Decision Making under Uncertainty

Mykel J. Kochenderfer

Stanford University, USA







Abstract. Many important problems involve decision making under uncertainty, including aircraft collision avoidance, wildfire management, and disaster response. When designing automated decision support systems, it is important to account for the various sources of uncertainty when making or recommending decisions. Accounting for these sources of uncertainty and carefully balancing the multiple objectives of the system can be very challenging. One way to model such problems is as a partially observable Markov decision process (POMDP). Recent advances in algorithms, memory capacity, and processing power have allowed us to solve POMDPs for real-world problems. This tutorial will discuss models for sequential decision making and algorithms for solving them. In addition, the tutorial will highlight applications to decision making for drones and automated vehicles. It is intended for a broad engineering audience, requiring only basic familiarity with probability and an ability to understand algorithms represented using pseudocode. The book “Decision Making Under Uncertainty” serves as a basis for the tutorial.
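As a rough illustration of the kind of model the tutorial covers, the sketch below performs the Bayesian belief update at the core of a POMDP: the agent never observes the true state directly, but maintains a probability distribution over states that it revises after each observation. The two-state toy problem, the transition matrix T, and the observation matrix O are invented for illustration and are not taken from the tutorial material.

```python
# A minimal sketch (not from the tutorial) of a POMDP belief update.
# All names and numbers below are illustrative assumptions.
import numpy as np

# Toy problem: hidden state is "intruder close" (0) or "intruder far" (1).
T = np.array([[0.9, 0.1],    # T[s, s']: state transition probabilities
              [0.2, 0.8]])
O = np.array([[0.8, 0.2],    # O[s', o]: probability of observing "alert" (0) or "clear" (1)
              [0.3, 0.7]])

def belief_update(b, o):
    """Bayes filter: b'(s') is proportional to O(o | s') * sum_s T(s' | s) * b(s)."""
    predicted = T.T @ b              # predict the next-state distribution
    updated = O[:, o] * predicted    # weight by the observation likelihood
    return updated / updated.sum()   # normalize to a probability distribution

b = np.array([0.5, 0.5])             # start fully uncertain about the hidden state
for obs in [0, 0, 1]:                # a sequence of observations
    b = belief_update(b, obs)
    print(b)
```

A decision-making layer would then choose actions based on this belief rather than on a single assumed state, which is what allows the multiple sources of uncertainty mentioned above to be balanced explicitly.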

Biography. Mykel Kochenderfer is Assistant Professor of Aeronautics and Astronautics at Stanford University. Prof. Kochenderfer is the director of the Stanford Intelligent Systems Laboratory (SISL), conducting research on advanced algorithms and analytical methods for the design of robust decision making systems. Prior to joining the faculty, he was at MIT Lincoln Laboratory where he worked on airspace modeling and aircraft collision avoidance. He received a Ph.D. from the University of Edinburgh and B.S. and M.S. degrees in computer science from Stanford University. He is the author of Decision Making under Uncertainty: Theory and Application from MIT Press.



Rapid Development and Deployment of Environmental Sensors

Tom Zimmerman

IBM Research-Almaden, USA







Abstract. Smart Cities and Environments require networked sensors (the Internet of Things) that produce useful data and survive the physical challenges of the real world. Making something work outdoors in the elements is much harder than making something work in the lab. Agile design principles prescribe an iterative process of concept, design, and development, in which an idea is realized in physical form, deployed in the environment for testing, then brought back into the lab for improvements and redeployment.

Designing and deploying end-to-end systems in the environment requires mastery of many disciplines, including electronics (transducers, embedded systems, power management, wireless communication), mechanical engineering (waterproof containers and connectors), software (signal processing, communication protocols, cognitive computing), and user interfaces (control and data visualization). The designer must also contend with humans, deterring theft, vandalism, and hacking.

In this tutorial I will share many of the tools, techniques, and approaches I use to quickly design, build, and deploy systems to monitor people and animals in the environment. My philosophy is that it is better to have a crude (“quick and dirty”) implementation of a good idea than a well-crafted implementation of a bad idea. By doing quick implementations, ideas can be rapidly tested, producing real-world experience and data to guide the refinement of both concept and design. This approach has enabled me to develop technology, products, and patents in a wide range of applications, including monitoring water quality, acoustic noise, sea turtle nests, and aquatic plankton in the environment, and humans in grocery stores, airports, and public spaces.
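To make the end-to-end idea concrete, here is a minimal sketch of the read-buffer-transmit loop at the heart of many field sensor nodes. It is not code from the tutorial: read_water_quality() and radio_send() are hypothetical placeholders for a real transducer and a real wireless link, and the sample period and batch size are illustrative assumptions chosen to show the power trade-off.

```python
# A minimal, hypothetical field-sensor node loop: sample slowly, buffer locally,
# and transmit in batches so the radio (the main power consumer) stays off most of the time.
import time, json, random

SAMPLE_PERIOD_S = 60          # sample once a minute to conserve power (assumption)
BATCH_SIZE = 10               # transmit in batches to limit radio on-time (assumption)

def read_water_quality():
    """Placeholder for a real transducer read (e.g. turbidity, pH)."""
    return {"turbidity": random.uniform(0, 5), "ph": random.uniform(6, 9)}

def radio_send(payload):
    """Placeholder for a real wireless link (LoRa, cellular, Wi-Fi, ...)."""
    print("TX", payload)

buffer = []
while True:
    sample = read_water_quality()
    sample["t"] = time.time()
    buffer.append(sample)
    if len(buffer) >= BATCH_SIZE:     # batch to amortize the radio's power cost
        radio_send(json.dumps(buffer))
        buffer.clear()
    time.sleep(SAMPLE_PERIOD_S)       # real firmware would deep-sleep here instead
```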

Video 1: Meet IBM Master Inventor Tom Zimmerman
Video 2: Creating a Strong Invention Ecosystem to Solve Critical 21st Century Problems

Biography. Tom Zimmerman is a Research Staff Member and Master Inventor at IBM Research-Almaden in San Jose, California. He has over 40 years of experience exploring the frontiers of human-machine interaction. His 50+ patents cover position tracking, user input devices, wireless communication, image and audio signal processing, biometrics and encryption. His Data Glove invention established the field of Virtual Reality, selling over one million units. His electric field Personal Area Network (PAN) invention sends data through the human body, exchanging electronic business cards with a handshake, and prevents air bags from injuring children. His expertise combines electrical engineering with computer science, enabling him to engage in all aspects of design and innovation including circuits and sensors, signal processing and communication, firmware and systems, and intellectual property protection. He received his B.S. in Humanities and Engineering and M.S. in Media Science from MIT.



Neural Machine Interfaces: Design, Applications, and Challenges

Xiaorong Zhang

San Francisco State University, USA







Abstract. A Neural Machine Interface (NMI) is an emerging technology that senses bioelectrical signals from the human neural control system, interprets the signals to identify human states such as emotion, intention, and motion, and then makes decisions to control machines. One prominent example of an NMI is the electromyography (EMG)-based control interface, which utilizes the electrical activity produced by muscle contractions to identify the user's movement intentions. It has great potential to extend human ability and transform the paradigm of human-machine interaction by allowing natural, intuitive control of many applications such as prostheses, human-assisting robots, rehabilitation devices, and smart input devices. However, to make this potential a reality, research in this field has to be significantly advanced to address two inherent technical challenges: 1) extracting meaningful information from the complex human neuromuscular system for accurate user intent recognition; and 2) integrating the hardware and software of the NMI into real-time, robust, reliable, and resource-efficient embedded computer systems.

In this tutorial, we will discuss the applications, challenges, design methods, and future trends of EMG-based NMIs. In addition, we will introduce an open, low-cost, and flexible platform called MyoHMI for developing EMG-based NMIs. MyoHMI has been developed by the ICE Lab with the aim of providing a low-cost solution that facilitates collaboration in the NMI community and accelerates the development of more and better myoelectrically controlled systems that can potentially benefit society and improve quality of life. MyoHMI interfaces with the Myo, a commercial EMG armband, and provides a highly modular and customizable C/C++ based software engine that seamlessly integrates a variety of interfacing and signal processing modules, from data acquisition through signal processing and pattern recognition to real-time evaluation and control. A hands-on walkthrough will show how to understand and use the MyoHMI software to develop EMG-based NMIs that control robots and virtual reality applications.
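The sketch below illustrates the general pattern-recognition pipeline described above: window the raw EMG, extract time-domain features per channel, and classify the movement intent. It is written in Python for brevity and is not MyoHMI's C/C++ API; the feature set (mean absolute value, waveform length, zero crossings), the window size, and the classifier choice are common conventions in the EMG literature, stated here as assumptions, and the training data is synthetic.

```python
# A rough sketch of an EMG pattern-recognition pipeline (not MyoHMI code):
# windowed signals -> time-domain features -> classifier -> predicted gesture.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def features(window):
    """Per-channel features for a window of shape (samples, channels):
    mean absolute value, waveform length, zero crossings."""
    mav = np.mean(np.abs(window), axis=0)
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)
    zc = np.sum(np.diff(np.sign(window), axis=0) != 0, axis=0)
    return np.concatenate([mav, wl, zc])

# Synthetic stand-in data: 200 windows of 8-channel EMG (e.g. an armband),
# 40 samples per window, each labeled with one of 4 gestures.
rng = np.random.default_rng(0)
windows = rng.standard_normal((200, 40, 8))
labels = rng.integers(0, 4, size=200)

X = np.array([features(w) for w in windows])
clf = LinearDiscriminantAnalysis().fit(X, labels)   # train the intent classifier

new_window = rng.standard_normal((40, 8))           # one incoming window at run time
print("predicted gesture:", clf.predict([features(new_window)])[0])
```

In a real-time system the same feature-extraction and prediction step runs on each sliding window as data streams from the armband, and the predicted class drives the robot or virtual-reality application.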

Biography. Dr. Xiaorong Zhang is an Assistant Professor in the School of Engineering and the Director of the Intelligent Computing and Embedded Systems Laboratory (ICE Lab) at San Francisco State University. She has broad research experience in human-machine interfaces, neural-controlled artificial limbs, embedded systems, and wearable devices. She has served professional societies in various capacities, including as Associate Editor of the IEEE Inside Signal Processing E-Newsletter, Co-Chair of the Doctoral Consortium at the 2014 IEEE Symposium Series on Computational Intelligence, Faculty Advisor of the SWE SFSU chapter, and Program Committee Member of various international conferences. She received her bachelor's degree in Computer Engineering from Huazhong University of Science and Technology, Wuhan, China, in 2006, and her master's and Ph.D. degrees in Computer Engineering from the University of Rhode Island, Kingston, RI, in 2009 and 2013, respectively.



Designing and Building Programmable Matter

Julien Bourgeois, Benoît Piranda

Univ. Bourgogne Franche-Comté / FEMTO-ST, France







Abstract. Programmable matter (PM) has different meanings, but they can be sorted according to four properties: Evolutivity, Programmability, Autonomy, and Interactivity. In an introductory talk, we will present our research in the Claytronics project, an instance of PM that is evolutive, programmable, autonomous, and interactive. In Claytronics, PM is defined as a huge modular self-reconfigurable robot. To manage the complexity of this kind of environment, we propose a complete environment including programmable hardware, a programming language, a compiler, a simulator, a debugger, and distributed algorithms.

The practical part of this tutorial is an introduction to distributed programming. We will introduce the MELD language, which can be used to program distributed systems, and more particularly modular robots. Through several examples, we will show the execution of distributed programs on several connected robots. Two kinds of environment will be used: our simulator, VisibleSim, and the Blinky Blocks hardware (see the sketch after the hardware description below).

VisibleSim is a 3D environment for executing distributed programs on distributed robots in simulation. The robots are placed on a lattice; they can be linked together and move freely or in cooperation with other robots. VisibleSim can simulate many different kinds of robots, but its main target is modular robots and, more particularly, programmable matter, so it is designed to manage huge numbers of robots. It provides distributed debugging features and interactive actions such as adding or removing robots, stopping or restarting a simulation, or tapping a robot to interact with it. (VisibleSim booklet)


Blinky Blocks are 4 cm cubic modular robots designed by Carnegie Mellon University and developed by FEMTO-ST. These robots are able to light up in color, play sounds, and communicate with their 6 neighbors in order to create distributed behaviors.
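To give a flavor of the distributed behaviors meant here, the sketch below floods a color from a "tapped" block to its neighbors, with each block acting only on messages from its direct neighbors. It is plain Python, not MELD or VisibleSim code; the Block class and the three-block chain are invented purely for illustration.

```python
# A toy sketch of a neighbor-to-neighbor distributed behavior (not MELD/VisibleSim):
# a tapped block floods its color through the structure using only local messages.
class Block:
    def __init__(self, name):
        self.name = name
        self.neighbors = []          # up to 6 neighbors in a real cubic lattice
        self.color = None

    def receive(self, color):
        if self.color is not None:   # already colored: stop the flood here
            return
        self.color = color
        print(f"{self.name} lights up {color}")
        for n in self.neighbors:     # forward the message to local neighbors only
            n.receive(color)

# Build a tiny chain of three blocks, linked in both directions.
a, b, c = Block("A"), Block("B"), Block("C")
a.neighbors, b.neighbors, c.neighbors = [b], [a, c], [b]

a.receive("red")                     # simulate tapping block A
```

The same pattern, each module reacting to local events and messages with no global controller, is what the MELD examples and the VisibleSim/Blinky Blocks exercises in the tutorial demonstrate at much larger scale.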

Biography. Julien Bourgeois is a professor of computer science at the University of Bourgogne Franche-Comté (UBFC) in France. He leads the computer science department at the FEMTO-ST institute (UMR CNRS 6174). His research interests include distributed intelligent MEMS (DiMEMS), programmable matter, P2P networks, and security management for complex networks. He has worked for more than 15 years on these topics and has co-authored more than 140 international publications. He was an invited professor at Carnegie Mellon University (US) from September 2012 to August 2013, at Emory University (US) in 2011, and at Hong Kong Polytechnic University in 2010, 2011, and 2015. He has led several funded research projects (Smart Surface, Smart Blocks, Computation and Coordination for DiMEMS). He currently leads the programmable matter project funded by the ANR and the topic “System architecture, communication, networking” in the LABEX ACTION, a 10 M€ funded program which aims at building integrated smart systems. He has also worked in the Centre for Parallel Computing at the University of Westminster (UK) and at the Consiglio Nazionale delle Ricerche (CNR) in Genova, and has collaborated with several other institutions (Lawrence Livermore National Lab, Oak Ridge National Lab, etc.). He has organized and chaired many conferences (dMEMS 2010, 2012, HotP2P/IPDPS 2010, Euromicro PDP 2008 and 2010, IEEE GreenCom 2012, IEEE iThings 2012, IEEE CPSCom 2012, GPC 2012, IEEE HPCC 2014, IEEE ICESS 2014, CSS 2014, IEEE CSE 2016, IEEE EUC 2015, IEEE ATC 2017, IEEE CBDCom 2017). He also acts as a consultant for the French government and for companies.


Benoît Piranda is an associate professor of computer science at the University of Franche-Comté in France. He is part of the complex networks team of the FEMTO-ST institute (UMR CNRS 6174). His main research areas are distributed programming, physical and visual simulations, and computer graphics (image synthesis). He is the author of numerous papers on these subjects. He is an active member of many projects on programmable matter and distributed algorithms (Smart Blocks, CO2DIM, Programmable Matter Interface). He leads the development of the VisibleSim software, a behavioral simulator of modular robots that runs the modules' internal code and simulates communications, motions, physical interactions, and various sensors and actuators. Benoît Piranda has worked on the organization of several conferences, serving as Program Chair and Publicity Chair, and has been a member of the program committees of IEEE ATC 2017, IEEE EUC 2016, IEEE CSE 2016, IEEE HPCC 2014, IEEE iThings 2012, IEEE GreenCom 2012, and IEEE CPSCom 2012.



Contact: scalcom2017@googlegroups.com