Keynote OSCAR Parallelizing and Power Reducing Compiler for Multicores
Speaker Hironori Kasahara, Waseda University
Abstract This talk introduces the OSCAR (Optimally SCheduled Advanced multiprocessoR) automatic parallelizing and power reducing compiler and the OSCAR multicore architecture, which supports the compiler's optimizations for emerging applications such as a heavy-particle cancer treatment system, automobile control, model-based design, and natural disaster simulation. In this approach, the compiler is first designed to parallelize a wide range of applications automatically and efficiently. Second, the multicore architecture is designed to let the compiler execute applications with low overhead through data localization, data transfer overlapping, DVFS, power gating, and so on. Application users can thus parallelize their applications easily and execute them quickly and at low power on this architecture. The compiler's performance is presented not only on the OSCAR architecture but also on various multicore chips, including a 110 times speedup for “Earthquake Wave Propagation Simulation” on 128 cores of an IBM Power 7 against 1 core, a 55 times speedup for “Carbon Ion Radiotherapy Cancer Treatment” on 64 cores of an IBM Power7, 1.95 times for “Automobile Engine Control” on 2 cores of the Renesas SH4A-based OSCAR multicore RP2, and 55 times for “JPEG-XR Encoding for Capsule Inner Cameras” on 64 cores of a Tile64 manycore. In automatic power reduction, power consumption for real-time multimedia applications such as human face detection, H.264, MPEG-2, and optical flow was reduced to 1/2 or 1/3 using 3 cores of an ARM Cortex-A9 and an Intel Haswell, and to 1/4 using the 8-core Renesas SH4A-based OSCAR multicore RP2, compared with ordinary single-core execution. Finally, an OSCAR vector multicore being developed with OSCAR Technology for the emerging applications is introduced.
Bio Hironori Kasahara

Hironori Kasahara received a PhD in 1985 from Waseda University, Tokyo, joined its faculty in 1986, and has been a professor of computer science since 1997 and a director of the Advanced Multicore Research Institute since 2004. He was a visiting scholar at the University of California, Berkeley and the University of Illinois at Urbana–Champaign's Center for Supercomputing R&D. Kasahara has served as a chair or member of 225 society and government committees, including as a member of the IEEE Computer Society Board of Governors; chair of the CS Multicore STC and the CS Japan chapter; associate editor of IEEE Transactions on Computers; vice PC chair of the 1996 ENIAC 50th Anniversary International Conference on Supercomputing; general chair of LCPC; PC member of SC, PACT, PPoPP, and ASPLOS; board member of the IEEE Tokyo section; and member of the Earth Simulator committee. He received the CS Golden Core Member Award, the IFAC World Congress Young Author Prize, the IPSJ Fellow and Sakai Special Research Awards, and the Japanese Minister's Science and Technology Prize. Kasahara led Japanese national projects on parallelizing compilers and embedded multicores; he has published 210 papers, given 132 invited talks, and holds 27 patents. His research has been introduced in 520 newspaper and Web articles.

Slides OSCAR Parallelizing and Power Reducing Compiler for Multicores

Keynote The Convergence of HPC and Big Data. Are we ready?
Speaker Pete Beckman
Abstract For decades, the basic architecture of extreme-scale systems has been largely static. In one area of our machine room we have compute nodes, and in another area, a large shared file system. A slowly evolving, spartan “HPC Software Stack” links the two pieces. This arrangement is out of step with both today's new architectures, such as those that provide NVRAM everywhere, and new models for computational science, which require features such as in-situ analysis, processing of streaming instrument data, and on-demand software stacks. HPC must adopt a new, more agile system software architecture that can simultaneously support both classic HPC computation and new Big Data approaches. From the low-level operating system to the high-level workflow tools, convergence is moving forward. Are we ready?
Bio Pete Beckman

Pete Beckman is the co-director of the Northwestern-Argonne Institute for Science and Engineering. From 2008 to 2010 he was the director of the Argonne Leadership Computing Facility, where he led the Argonne team working with IBM on the design of Mira, a 10 petaflop Blue Gene/Q. Pete joined Argonne in 2002. He served as chief architect for the TeraGrid, where he led the design and deployment team that created the world's most powerful Grid computing system for linking production HPC computing centers for the National Science Foundation. After the TeraGrid became fully operational, Pete started a research team focusing on petascale high-performance system software, wireless sensors, and operating systems. Pete also coordinates the collaborative research activities in extreme-scale computing between the US Department of Energy and Japan's Ministry of Education, Science, and Technology. Pete leads the Argo project for extreme-scale operating systems and run-time software. He is the founder and leader of the Waggle project to build intelligent attentive sensors. The Waggle technology and software framework is being used by the Chicago Array of Things project to deploy 500 sensors on the streets of Chicago beginning in 2016. Pete also has experience in industry. After working at Los Alamos National Laboratory on extreme-scale software for several years, he founded a Turbolinux-sponsored research laboratory in 2000 that developed the world's first dynamic provisioning system for cloud computing and HPC clusters. The following year, Pete became vice president of Turbolinux's worldwide engineering efforts, managing development offices in the US, Japan, China, Korea, and Slovenia. Dr. Beckman has a PhD in computer science from Indiana University (1993) and a BA in Computer Science, Physics, and Math from Anderson University (1985).

Slides The Convergence of HPC and Big Data. Are we ready?