Bergen Language Design Laboratory (BLDL)
BLDL has a weekly internal meeting series. Some of these meetings have content that may be of interest to a larger audience; their programme is announced here.
Contact Magne Haveraaen for more information.
Seminars Spring 2010
Tuesday, 2010-02-16 1430, room 4138
Eva Burrows (BLDL): A Retrospective of Crystal - another "parallel compiler" ambition from the late 1980s
Tuesday, 2010-03-02 1430, room 4138
Eva Burrows (BLDL): Parallel and Concurrent Programming in Haskell - an overview
Tuesday, 2010-03-09 1430, room 4138
Presentations for ETAPS 2010
Adrian Rutle (HiB), Alessandro Rossini (UiB), Yngve Lamo (HiB), Uwe Wolter (UiB):
A Formalisation of Constraint-Aware Model Transformations (FASE)
Anya Helene Bagge (BLDL):
Language Description for Frontend Implementation (LDTA)
Wednesday, 2010-03-17 1215, room LA 2142
Eva Burrows (BLDL): Trends and Challenges in Multi-core programming
When the physical limits of semiconductor-based microelectronics could no longer be stretched to manufacture ever faster processors, hardware vendors looked for other ways to increase performance. Speed is now characterised by the accumulated computing power of several processors integrated on a single multi-core chip. At the same time, Graphics Processing Unit (GPU) development, driven by the computer game industry, has made highly parallel (240+ cores) chips available for general-purpose computation as well. The appearance of heterogeneous multi-cores and the steady development of Field Programmable Gate Arrays point in the same direction: multi-core programming is becoming mainstream. While hardware vendors aim to double the number of processors on a chip every other year or so, harnessing this enormous computing power available under the "desk" or on the "lap" remains challenging.
Wednesday, 2010-05-19 1015, room 4138
Eugene Zouev (ETH Zürich): Project Zonnon: The Language, The Compiler, The Environment
This talk presents the key concepts of the Zonnon language project (Prof. J. Gutknecht, Dr. E. Zouev, ETH, 2003-2006) and the major principles and technical solutions of its implementation for the .NET platform. The Zonnon language is a member of the Pascal, Modula-2 and Oberon family; it is modern, compact, and easy to learn and use. It supports modularity (with importing units and exporting unit members), an object-oriented approach based on the definition/implementation paradigm and refinement of definitions, and concurrency based on the notion of active objects and syntax-based communication protocols. The Zonnon compiler is written in C#. The front end uses conventional compilation techniques (a recursive descent parser with full semantic checks). The back end is implemented using Microsoft's CCI (Common Compiler Infrastructure) framework as a code generation utility. The compiler is fully integrated into the Visual Studio environment using CCI's integration support.
Wednesday, 2010-05-19 1315, room 4138
Eugene Zouev (ETH Zürich): Semantic API for C++
The talk presents the basic ideas behind research on semantic interfaces to programming languages. Concretely, an approach to the design and development of an API for languages like C++ is discussed. The API could provide client applications with comprehensive semantic information about source programs (including hidden semantics). It would also support so-called semantic search, which is fundamentally more powerful than conventional text-based search. Such an API could be a powerful platform for various language-oriented tools such as verifiers, checkers and static analyzers.
Friday, 2010-06-18 1015, conference room B, VilVite, Thormøhlensgt 51
Satnam Singh (Microsoft Research, UK): Compiling Target-Independent Data-Parallel Descriptions to GPUs, Multicore Vector Processors and FPGA Circuits
This presentation introduces an embedded domain-specific language (DSL) called Accelerator for data-parallel programming, which can target GPUs, SIMD instructions on x64 multicore processors, and FPGA circuits. The system is implemented as a library of data-parallel arrays and data-parallel operations, with implementations in C++ and for .NET languages like C#, VB.NET and F#. We show how a carefully selected set of constraints allows us to generate efficient code or circuits for very different kinds of targets. Finally, we compare our JIT-based approach with other techniques, e.g. CUDA, which is an off-line approach, as well as with stencil computations. The ability to compile the same data-parallel description, at an appropriate level of abstraction, to different computational elements brings us one step closer to finding models of computation for heterogeneous multicore systems. The Accelerator system can be downloaded from http://connect.microsoft.com/acceleratorv2.
Friday, 2010-06-18 1115, conference room B, VilVite, Thormøhlensgt 51
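The core idea of the talk, compiling one data-parallel description to several targets, can be illustrated with a toy embedded DSL. The sketch below is purely hypothetical and written in Python rather than Accelerator's actual C#/.NET API: operations on "parallel arrays" are recorded as an expression tree, which a backend of choice could later lower to GPU code, SIMD instructions, or a circuit; here a trivial backend simply interprets the tree elementwise.

```python
# Hypothetical sketch of an Accelerator-style embedded DSL (names are
# illustrative, not Accelerator's real API): expressions are recorded
# as a tree instead of being evaluated immediately, so different
# backends can compile the same description for different targets.
class PA:
    """A 'parallel array' expression node."""
    def __init__(self, op, *args):
        self.op, self.args = op, args
    def __add__(self, other): return PA("add", self, other)
    def __mul__(self, other): return PA("mul", self, other)

def evaluate(expr):
    """A trivial 'backend' that interprets the expression tree elementwise."""
    if expr.op == "data":
        return list(expr.args[0])
    a, b = (evaluate(x) for x in expr.args)
    f = {"add": lambda x, y: x + y, "mul": lambda x, y: x * y}[expr.op]
    return [f(x, y) for x, y in zip(a, b)]

x = PA("data", [1, 2, 3])
y = PA("data", [10, 20, 30])
print(evaluate(x * y + x))  # [11, 42, 93]
```

A real system would replace `evaluate` with code generators for each target; the deferred expression tree is what makes that target independence possible.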
Eva Burrows (BLDL): Harnessing the driving force of dependencies
(Winning presentation of the ACM Student Research Competition at the 2010 Conference on Programming Language Design and Implementation (PLDI))
Computational devices are rapidly evolving into massively parallel systems: multicore processors and graphics processing units (GPUs) have already become standard. However, classical parallel programming paradigms cannot readily exploit these, and writing efficient, portable parallel code remains a challenge. One of the major issues in parallelizing applications is dealing with the inherent dependency structure of the program. Data dependency graphs can abstract how parts of a computation depend on data supplied by other parts, and they served as a basis for the first parallelizing compilers. However, automatic dependence analysis is difficult in the general case, and as a result parallelizing compilers cannot make the most of the underlying dependencies. Our framework instead presents the user with a programmable interface for expressing the data dependencies of the computation as real code. This in turn provides the means for a parallelizing compiler to harness the driving force of dependencies directly and generate parallel code for virtually any parallel system that has a well-defined space-time communication structure.
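The role of explicit dependency information can be illustrated with a small sketch. This is purely illustrative and not the interface of the framework presented in the talk: once the data-dependency graph of a computation is given as code, a scheduler can group independent nodes into time steps that may run in parallel.

```python
# Illustrative sketch: scheduling an explicit data-dependency graph.
# Nodes at the same "depth" depend on nothing in their own group and
# can therefore execute in the same parallel time step.
from collections import defaultdict

def schedule(deps):
    """deps maps each node to the list of nodes it depends on.
    Returns a list of time steps; nodes within a step are independent."""
    level = {}
    def depth(node):
        if node not in level:
            level[node] = 1 + max((depth(d) for d in deps[node]), default=0)
        return level[node]
    steps = defaultdict(list)
    for node in deps:
        steps[depth(node)].append(node)
    return [sorted(steps[t]) for t in sorted(steps)]

# A tiny dependency graph: c depends on a and b; d depends on c.
deps = {"a": [], "b": [], "c": ["a", "b"], "d": ["c"]}
print(schedule(deps))  # [['a', 'b'], ['c'], ['d']]
```

A parallelizing compiler with access to such a graph as real code need not reconstruct dependencies by analysis; it can map the time steps directly onto the space-time structure of the target machine.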
Room 4138 is in Høyteknologisenteret, Thormøhlensgt 55.