I. Professor Amr El Abbadi is currently a Professor in the Computer Science Department at the University of California, Santa Barbara. He received his B.Eng. in Computer Science from Alexandria University, Egypt, and his Ph.D. in Computer Science from Cornell University in August 1987. Prof. El Abbadi is an ACM Fellow and an AAAS Fellow, and was Chair of the Computer Science Department at UCSB from 2007 to 2011. He has served as an editor for several database journals, including, currently, The VLDB Journal. He has been Program Chair for multiple database and distributed systems conferences, most recently SIGSPATIAL GIS 2010, the ACM Symposium on Cloud Computing (SoCC) 2011, and COMAD 2012. He served as a board member of the VLDB Endowment from 2002 to 2008, and is currently a member of the Executive Committee of the Technical Committee on Data Engineering (TCDE). In 2007, Prof. El Abbadi received the UCSB Senate Outstanding Mentorship Award for his excellence in mentoring graduate students. He has published over 250 articles in databases and distributed systems.
Over the past two decades, database and distributed systems researchers have made significant advances in the development of protocols and techniques to provide data management solutions that carefully balance three major requirements when dealing with critical data: high availability, fault tolerance, and data consistency. However, over the past few years the data requirements of Internet-scale enterprises that provide services to millions of users, in terms of data availability and system scalability, have been unprecedented. Cloud computing has emerged as an extremely successful paradigm for deploying Internet and Web-based applications. Scalability, elasticity, pay-per-use pricing, and autonomic control of large-scale operations are the major reasons for the widespread adoption of cloud infrastructures. In this tutorial, we will first discuss some of the critical distributed systems protocols that are essential for understanding current large-scale data management. We analyze the design choices that allowed modern NoSQL data management systems (key-value stores) to achieve orders-of-magnitude higher scalability than traditional databases. With this understanding, we highlight some design principles that make key-value stores especially suitable for the cloud. We then analyze the need for, and revival of, SQL database management systems in large cloud settings. Of particular interest are applications that require data management across multiple datacenters. We will therefore explore the data management design space for geo-replicated data and discuss different approaches to replicating data in multi-datacenter environments.
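One of the classic building blocks behind the replication approaches surveyed above is quorum-based replication, where a write must reach W of N replicas and a read consults R of them, with R + W > N guaranteeing overlap. The sketch below is a toy, single-process illustration of that idea (all class and variable names are invented for this example, not drawn from any particular system):

```python
# Toy sketch of quorum-based replication (R + W > N), a classic
# building block for geo-replicated key-value stores. Names here
# are illustrative only.

class Replica:
    """One copy of the data, e.g. hosted in one datacenter."""
    def __init__(self):
        self.store = {}  # key -> (version, value)

    def write(self, key, version, value):
        cur = self.store.get(key, (0, None))
        if version > cur[0]:          # keep only the newest version
            self.store[key] = (version, value)

    def read(self, key):
        return self.store.get(key, (0, None))


class QuorumKV:
    """Client that writes to W replicas and reads from R replicas.
    With R + W > N, every read quorum overlaps every write quorum,
    so a read always sees the latest completed write."""
    def __init__(self, n=3, r=2, w=2):
        assert r + w > n, "quorums must overlap"
        self.replicas = [Replica() for _ in range(n)]
        self.r, self.w = r, w
        self.version = 0

    def put(self, key, value):
        self.version += 1
        # A real system accepts acks from any W replicas; for
        # simplicity we always use the first W here.
        for rep in self.replicas[:self.w]:
            rep.write(key, self.version, value)

    def get(self, key):
        # Read R replicas and return the highest-versioned value.
        answers = [rep.read(key) for rep in self.replicas[-self.r:]]
        return max(answers)[1]
```

Note that the read quorum (the last R replicas) and the write quorum (the first W) intersect in at least one replica, which is exactly why the read can recover the newest version even though no single replica is guaranteed to be up to date.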
II. Professor Mootaz Elnozahy has been appointed the new Dean of the Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division at King Abdullah University of Science and Technology (KAUST). He assumed his duties in January 2013.
Mootaz obtained a B.Sc. degree with Highest Honours in Electrical Engineering from Cairo University in 1984, and the M.S. and Ph.D. degrees in Computer Science from Rice University in 1990 and 1993, respectively.
From 1993 until 1997, he was on the faculty of the School of Computer Science at Carnegie Mellon University, where he received a prestigious NSF CAREER award. In 1997, he moved to the IBM Austin Research Lab and started the Systems Software Department, which today includes over 25 researchers investigating analytics, high-performance computing, low-power systems, and simulation tools. From 2001 to 2006 he led the PERCS project, IBM’s effort under the DARPA HPCS program, which was one of the largest efforts in high-end computing. From 2005 to 2007, Mootaz joined the product division to accelerate the productization of PERCS. Prior to joining IBM, he worked on rollback-recovery, replication, and reliable distributed systems. While at IBM, he has worked on code and trace compression, cc-NUMA systems, Web site performance acceleration for the U.S. Census Bureau, blade-based servers, low-power servers, security of IP-based protocols, 3D integration, and performance tools. Currently, he leads the definition of the reliability aspects of Exascale computing at IBM.
Mootaz has served on 35 technical program committees in the areas of distributed operating systems and reliability. His research interests include distributed systems, operating systems, computer architecture, and fault tolerance. He has published 31 refereed articles in these areas and has been awarded 20 patents. Mootaz is an IEEE Fellow and is currently a Senior Manager and Master Inventor at IBM Research-Austin.
The tutorial covers modern trends in high-performance computing, including the design and implementation of current and future supercomputers. Topics include:
- The central role that the interconnect plays in overall system performance.
- The power crisis and its implications for current and future systems.
- Modern programming languages, such as Partitioned Global Address Space (PGAS) languages, and how they influence the networking topology and overall system design.
- The problems facing programmers, how the hardware abstraction of multicore systems can lower productivity, and what can be done about it.
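The partitioned address space model mentioned above can be sketched in a few lines: one logical array is split into per-"place" partitions, every index remains globally addressable, but each place owns its own chunk. The toy single-process model below (all names invented for illustration; this is not any real PGAS runtime such as UPC, Chapel, or X10) shows the ownership arithmetic that such languages expose:

```python
# Toy model of a Partitioned Global Address Space (PGAS): a global
# array is block-distributed across "places". Any index can be read
# or written, but each place has affinity to its own partition; in
# real PGAS languages, remote accesses cost more, which is why data
# layout drives performance and interconnect design.

class PGASArray:
    def __init__(self, size, places):
        self.size = size
        self.places = places
        # Block distribution: place p owns one contiguous chunk.
        self.chunk = (size + places - 1) // places
        self.partitions = [
            [0] * max(0, min(self.chunk, size - p * self.chunk))
            for p in range(places)
        ]

    def owner(self, i):
        """Which place holds global index i (data affinity)."""
        return i // self.chunk

    def __getitem__(self, i):
        p = self.owner(i)                     # possibly a remote access
        return self.partitions[p][i - p * self.chunk]

    def __setitem__(self, i, v):
        p = self.owner(i)
        self.partitions[p][i - p * self.chunk] = v
```

A loop that mostly touches indices owned by the local place runs at memory speed in a real PGAS system, while one that ignores `owner()` generates network traffic on every access; exposing that distinction is the productivity trade-off the tutorial discusses.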
The tutorial targets members of the technical community who would like an in-depth yet compressed course on the issues related to the design, implementation, and programming of supercomputers and high-performance computing systems. It should also be of interest to those involved in procuring high-performance computing systems who wish to get a high-level view of the current trends and issues that affect these systems. The treatment is practical and draws on the industrial experience of the presenter.