HILT 2012

High Integrity Language Technology
ACM SIGAda’s Annual International Conference

Sunday Full-Day Tutorials (9:00am - 5:30pm)

SF1: Design of multitask software: The entity-life modeling approach
Bo I. Sandén (Colorado Technical University)

This tutorial assumes a general knowledge of tasking or threading.

Architects, designers, and programmers of real-time and interactive software, as well as software-engineering academics and students interested in concurrency, will come to understand and eventually be able to apply the ELM way of designing reactive, multitasking software.

The tutorial introduces entity-life modeling (ELM), a design approach for multitask, reactive software, that is, software that responds to events in the environment as they occur. ELM is not a multi-step method but rather an extension of object orientation into the time dimension: the central idea is that the task architecture should reflect the concurrency that exists in the problem.

The tutorial follows the presenter's recent book Design of Multithreaded Software: The Entity-Life Modeling Approach (IEEE Computer Society/Wiley, 2011) but uses Ada terminology. ELM was originally developed with Ada tasking in mind but works with Real-Time Java as well. The tutorial is illustrated with multiple Ada examples.
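To give a flavor of the approach (this is a hypothetical sketch, not an example from the book), an ELM design might base an Ada task directly on an entity in the problem domain, so that the task's control structure mirrors the entity's life:

   --  Hypothetical ELM-style sketch: the task models the life of one
   --  problem-domain entity (a push button), and its control structure
   --  mirrors the entity's states (waiting for a press, handling it).
   with Ada.Text_IO;

   procedure Button_Demo is

      task Button is
         entry Press;       --  event arriving from the environment
         entry Shut_Down;
      end Button;

      task body Button is
      begin
         loop
            select
               accept Press;
               --  Respond to the event; a real system might command a
               --  device or update protected shared state here.
               Ada.Text_IO.Put_Line ("Button pressed");
            or
               accept Shut_Down;
               exit;
            end select;
         end loop;
      end Button;

   begin
      Button.Press;
      Button.Press;
      Button.Shut_Down;
   end Button_Demo;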

Outline

  1. Introduction
  2. Overview of tasking features in Ada
  3. State modeling as a basis for the design of reactive software
  4. ELM concepts
  5. Design patterns basing tasks on state models
  6. Event-thread patterns for resource sharing
  7. Simultaneous exclusive access to multiple resources
  8. Summary

Sunday Morning Tutorials (9:00am - 12:30pm)

SA1: Leading-edge Ada Verification Technologies: Highly automated Ada contract checking using Bakar Kiasan
Jason Belt (Kansas State University), Patrice Chalin (Kansas State University), John Hatcliff (Kansas State University), Dr. Robby (Kansas State University)

Audience: Engineers and researchers interested in automated formal verification of critical systems.

Knowledge of SPARK/Ada is helpful, but not required.

This tutorial presents a new approach to SPARK/Ada contract checking using Bakar Kiasan, a highly automated, evidence-based symbolic execution tool. Bakar Kiasan aims to lower the barrier to entry and reduce the burden on engineers as they specify and verify Ada contracts. Even in the absence of contracts, Bakar Kiasan can check code for possible run-time exceptions and provide visualizations of semantic constraints along paths through procedures. As engineers progressively add contracts, Bakar Kiasan can verify the consistency of code and contracts, providing increased confidence that is often proportional to the effort made to capture fuller behavioral specifications via contracts. Bakar Kiasan also provides compositional checking; that is, it can be used on incomplete systems, where contracts are present for only some program components (which may not even have been implemented). This allows contract checking to be used as the program is being developed, starting early in the software development process.
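As a flavor of what such contracts look like (a hypothetical sketch, not material from the tutorial), a SPARK 2005-style procedure might carry derives, pre, and post annotations that a checker such as Bakar Kiasan explores symbolically, reporting any path on which a contract or run-time check could fail:

   --  Hypothetical SPARK-annotated unit; the tilde (~) denotes the
   --  value of C on entry to the procedure.
   package Counters is

      Max : constant := 1000;

      subtype Count is Integer range 0 .. Max;

      procedure Increment (C : in out Count);
      --# derives C from C;
      --# pre  C < Max;
      --# post C = C~ + 1;

   end Counters;

   package body Counters is

      procedure Increment (C : in out Count) is
      begin
         C := C + 1;
      end Increment;

   end Counters;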

Bakar Kiasan provides helpful feedback and evidence of its verification results. For example, it automatically generates counterexamples as program test cases that illustrate how contracts are violated (very helpful when debugging code and contracts), and it provides various visualization cues, for example highlighting problematic code or contract segments much as modern Integrated Development Environments (IDEs) highlight compile (type) errors. Kiasan also generates test cases that illustrate how contracts are satisfied, which is helpful for understanding code and contracts or confirming how a program should behave. Bakar Kiasan is integrated into the Eclipse IDE as a plug-in, and an integration with the GNAT Programming Studio (GPS) is currently being developed in collaboration with AdaCore.

This tutorial includes a gentle introduction to symbolic execution and case studies on how Bakar Kiasan has been applied to, for example, industrial code and contracts supplied by our collaborators at Rockwell Collins. During the tutorial, attendees equipped with a laptop will be able to install and apply Bakar Kiasan to some sample code and contracts to get hands-on experience with SPARK/Ada checking using Bakar Kiasan. The tutorial will also provide an introduction to a companion tool, Bakar Alir, that provides rich visualizations of information flow through SPARK programs, which can be useful to developers when writing SPARK "derives" clauses and reasoning about end-to-end security and separation policies.

The development of Bakar Kiasan is partially funded by the US National Science Foundation (NSF) CAREER grant #0644288, the US Air Force Office of Scientific Research (AFOSR), the Natural Sciences and Engineering Research Council (NSERC) of Canada grant #261573, and Rockwell Collins. Any opinions, findings, and conclusions or recommendations expressed in this tutorial are those of the presenters and do not necessarily reflect the views of the previously mentioned institutions.

SA2: Ada 2012 Contracts and Aspects
Ed Colbert (Absolute Software)

Audience: Ada programmers, designers, and reviewers. Basic understanding of Ada syntax and semantics for types, subtypes, subprograms, packages, and private types.

Ada 2012 augments Ada's already powerful capabilities for detecting program errors early and efficiently with the addition of contracts, which make use of another new feature, aspects. Aspects allow the programmer to specify properties of an entity (e.g., an object or a type) that are not defined by the basic declaration of that entity. This tutorial will look at the following topics.
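By way of illustration (a hypothetical sketch, not the tutorial's own material), a contract expressed with the new aspect syntax might look like this:

   --  Hypothetical Ada 2012 example: Pre and Post contract aspects are
   --  attached directly to the declaration of Push.
   package Stacks is

      type Stack is private;

      function Is_Empty (S : Stack) return Boolean;
      function Is_Full  (S : Stack) return Boolean;

      procedure Push (S : in out Stack; Item : in Integer)
        with Pre  => not Is_Full (S),    --  obligation on the caller
             Post => not Is_Empty (S);   --  promise by the implementation

   private

      type Int_Array is array (1 .. 100) of Integer;

      type Stack is record
         Top  : Natural := 0;
         Data : Int_Array;
      end record;

   end Stacks;

   package body Stacks is

      function Is_Empty (S : Stack) return Boolean is (S.Top = 0);
      function Is_Full  (S : Stack) return Boolean is (S.Top = S.Data'Length);

      procedure Push (S : in out Stack; Item : in Integer) is
      begin
         S.Top := S.Top + 1;
         S.Data (S.Top) := Item;
      end Push;

   end Stacks;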

Sunday Afternoon Tutorials (2:00pm - 5:30pm)

SP1: Leading-Edge Ada Verification Technologies: Combining Testing and Verification with GNATtest and GNATprove — the Hi-Lite Project
Johannes Kanig (AdaCore)

The tutorial will be most interesting for engineers and students who want to see new technologies that may be part of their workflow in the future.

Assumes familiarity with Ada.

We give a hands-on introduction to the tools GNATtest and GNATprove, both developed at AdaCore in the Hi-Lite research project. Together they allow verification of Ada 2012 contracts through testing and through formal verification, and they also allow the results of the two tools to be combined.

The tutorial will include a very short introduction to Ada 2012, and attendees will write a small example on which they can use GNATtest to develop test cases and GNATprove to do some formal verification.
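The kind of unit involved might look like the following hypothetical sketch (not part of the tutorial material): the same Ada 2012 postcondition can be checked dynamically by test harnesses such as those generated by GNATtest, or discharged statically by GNATprove:

   --  Hypothetical unit whose contract can be verified by test or by proof.
   package Math_Utils is

      function Max (X, Y : Integer) return Integer
        with Post => Max'Result >= X
                 and Max'Result >= Y
                 and (Max'Result = X or Max'Result = Y);

   end Math_Utils;

   package body Math_Utils is

      function Max (X, Y : Integer) return Integer is
      begin
         if X >= Y then
            return X;
         else
            return Y;
         end if;
      end Max;

   end Math_Utils;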

SP2: Object-Oriented Programming with Ada 2005 and 2012
Ed Colbert (Absolute Software)

Audience and Prerequisites: Ada programmers, designers, and reviewers; understanding of OO concepts and Ada 83.

Ada 2005 makes it easier to learn and use object-oriented programming (OOP) than Ada 95, as well as integrating OOP with concurrent programming. Ada 2012 (which may be the official standard at the time of the tutorial) makes small changes to Ada 2005's OOP model that facilitate its use and make it easier for the compiler to detect errors. Ada 2012 also adds contracts, which allow the OO programmer to define the perceived behavior of a tagged type and at the same time constrain the behavior of derived types.

This tutorial will look at the following topics.
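For instance (a hypothetical sketch, not taken from the tutorial), a class-wide postcondition on a primitive operation of a tagged type states behavior that every derived type must also honor:

   --  Hypothetical Ada 2012 example: Post'Class on Area constrains the
   --  behavior of Shape and of every type derived from it.
   package Shapes is

      type Shape is abstract tagged private;

      function Area (S : Shape) return Float is abstract
        with Post'Class => Area'Result >= 0.0;

      type Circle is new Shape with private;

      overriding
      function Area (C : Circle) return Float;  --  inherits the class-wide Post

   private

      type Shape is abstract tagged null record;

      type Circle is new Shape with record
         Radius : Float := 0.0;
      end record;

   end Shapes;

   package body Shapes is

      overriding
      function Area (C : Circle) return Float is
      begin
         return 3.14159 * C.Radius * C.Radius;
      end Area;

   end Shapes;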

Monday Full-Day Tutorials (9:00am - 5:30pm)

MF1: Safety of Embedded Software
Nancy Leveson (Massachusetts Institute of Technology), Cody Fleming (Massachusetts Institute of Technology), John Thomas (Massachusetts Institute of Technology)

Level — Beginning. Anyone will benefit from this tutorial. No prior knowledge required.

Traditional safety techniques were created 40-50 years ago for electro-mechanical systems. The underlying assumptions these techniques make about the cause of accidents (e.g., component failure) do not hold for software, nor do they match the types of software-related accidents we are now having. As a result, a large number of accidents are now related to software, although usually the pilot (for aircraft) or other human operators are blamed; often it is the software design that leads to the operator errors. We will describe the problems with software that are leading to accidents (primarily in the requirements) and how to deal with them. Most current approaches rely on reducing "failures," although software does not fail. Something else is needed.

In the tutorial we will present a new accident causality model (STAMP) and teach how to use a new hazard analysis technique (STPA), based on STAMP, that can be applied to complex, software-intensive systems. The topics will include how to generate software safety requirements from an STPA hazard analysis and how to design software that does not induce human error. The tutorial will be based on a new book, Engineering a Safer World, by Nancy Leveson, published in January 2012 by MIT Press.

Monday Morning Tutorials (9:00am - 12:30pm)

MA1: Developing Verified Programs with Dafny
K. Rustan M. Leino (Microsoft Research)

The tutorial is geared toward software engineers and students.

The participants are expected to have programming experience.

Reasoning about programs is a fundamental skill that every software engineer needs. This tutorial provides participants an opportunity to get hands-on experience with Dafny, a tool that can help develop this skill.

Dafny is a programming language and state-of-the-art program verifier. The language is type-safe and sequential, and it includes common imperative features, dynamic object allocation, and inductive datatypes. It also includes specification constructs like pre- and postconditions, which let a programmer record the intended behavior of the program along with the executable code that is supposed to cause that behavior. Because the Dafny verifier runs continuously in the background, the consistency of a program and its specifications is always enforced.

In this tutorial, I will give a taste of how to use Dafny in program development. This will include an overview of Dafny, basics of writing specifications, how to debug verification attempts, and how to formulate and prove lemmas.

Dafny has been used to verify a number of challenging algorithms, including Schorr-Waite graph marking, Floyd's “tortoise and hare” cycle-detection algorithm, and snapshotable trees with iterators. Dafny is also being used in teaching, with over 100,000 unique program-verification attempts submitted to the online version of the tool. Dafny was a popular choice in the VSTTE 2012 program verification competition, where two of the Dafny teams were among the competition's 6 medalists. Its open-source implementation has also been used as a foundation for other verification tools.

To run Dafny during the tutorial, the participants will need a computer with a web browser and an Internet connection.

More information can be found on the Dafny project page: http://research.microsoft.com/en-us/projects/dafny/

MA2: Service-Oriented Architecture (SOA) Concepts and Implementations
Ricky E. Sward (The MITRE Corporation), Jeff Boleng (Software Engineering Institute)

This is an introductory tutorial, so no assumptions are made about the audience's background. The tutorial introduces the topics of SOA, ESBs, and AWS. A modest background in software engineering, software development, and Ada programming will be beneficial to the audience.

This tutorial explains how to implement a Service-Oriented Architecture (SOA) for reliable systems using an Enterprise Service Bus (ESB) and the Ada Web Server (AWS). The first part of the tutorial describes the terminology of Service-Oriented Architecture, including service, service registry, service provider, service consumer, Simple Object Access Protocol (SOAP), and Web Service Description Language (WSDL). This tutorial also presents principles of SOA, including loose coupling, encapsulation, composability of web services, and statelessness of web services. The tutorial also covers the benefits of SOA and the organizations that support SOA infrastructure.

The second part of the tutorial covers the Enterprise Service Bus (ESB), including definitions, capabilities, benefits, and drawbacks. The tutorial discusses the difference between SOA and an ESB, as well as some of the commercially available ESB solutions. The Mule ESB is explored in more detail, and several examples are given.

In the third part, the tutorial covers the Ada Web Server (AWS), which is built using the Ada programming language. The tutorial covers the capabilities of AWS and explains how to build and install it. The tutorial then explains how to build an AWS server and include the server in an Ada application, and it demonstrates how to build a callback function in AWS and build a response to a SOAP message.
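A minimal sketch of this programming model follows (hypothetical code written from memory of the AWS API; exact unit and parameter names may differ between AWS releases):

   --  Sketch of an AWS server: Start registers a callback that is invoked
   --  for every incoming request and builds the corresponding response.
   with AWS.Response;
   with AWS.Server;
   with AWS.Status;

   procedure Hello_Server is

      function Handle (Request : AWS.Status.Data) return AWS.Response.Data is
      begin
         return AWS.Response.Build
           (Content_Type => "text/html",
            Message_Body =>
              "<p>Hello from AWS; you requested "
              & AWS.Status.URI (Request) & "</p>");
      end Handle;

      Web_Server : AWS.Server.HTTP;

   begin
      AWS.Server.Start
        (Web_Server, "hello", Callback => Handle'Unrestricted_Access);

      --  Serve requests until Q is pressed on the console, then shut down.
      AWS.Server.Wait (AWS.Server.Q_Key_Pressed);
      AWS.Server.Shutdown (Web_Server);
   end Hello_Server;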

Finally, the tutorial explains how to connect an AWS server to an ESB endpoint. AWS is a key component in building a SOA for a reliable system. This capability allows the developer to expose services from a high-integrity system written in the Ada and SPARK programming languages.

Detailed Outline

  1. Introduction and Background
  2. Part 1 - Service-Oriented Architecture
    1. SOA Background Terminology: XML, XML Document, XSD, XSLT
    2. SOA Terminology: Service, Registry, Provider, Consumer, WSDL, SOAP, SLA
    3. SOA Message Patterns: Request/Response, Publish/Subscribe
    4. SOA Orchestration and Web Service Security
    5. Principles of SOA
    6. Benefits of SOA
    7. SOA Organizations: W3C, OASIS
  3. Part 2 - Enterprise Service Bus (ESB)
    1. ESB description and terminology
    2. Commercial ESB options
    3. ESBs and SOA
    4. Mule ESB Case Study and examples
      1. Mule endpoints
      2. Mule configuration file
  4. Part 3 - Ada Web Server (AWS)
    1. AWS definitions and capabilities
    2. Building and Installing AWS
    3. Building an AWS web server
    4. Building a call-back function
    5. AWS server example
    6. AWS and WSDLs
  5. Connecting AWS to Mule
    1. Configuration file
    2. Building an Ada web service
    3. Exposing an Ada web service
    4. Example
  6. Conclusions

Monday Afternoon Tutorials (2:00pm - 5:30pm)

MP1: Multicore Programming using Divide-and-Conquer and Work Stealing
Tucker Taft (AdaCore)

This tutorial is aimed at engineers and students who are interested in learning more about parallel programming, particularly for systems with growing numbers of physical processors or cores.

Some familiarity with threads, mutual exclusion, synchronization, scheduling, etc. will be useful.

This tutorial will introduce the attendee to some of the issues of parallel programming for multicore systems. We will discuss some of the models used for creating, and then efficiently managing, large numbers of "picothreads." The tutorial will first cover the basic technique of "divide and conquer" as it applies to splitting up computations into large numbers of separate sub-computations. We will provide examples using Intel's Cilk+ language as well as ParaSail, a new parallel programming language. The tutorial will then go on to investigate the "work-stealing" scheduling mechanism used by the Cilk+ run-time, Intel's Threading Building Blocks library, and the ParaSail virtual machine. Work stealing is an efficient way to handle the large numbers of very small "picothreads" created in abundance by these parallel programming technologies. We will end with a short discussion of heterogeneous parallel programming, using auxiliary chips such as Graphics Processing Units (GPUs) as general-purpose processors (GPGPU).
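To make the divide-and-conquer structure concrete, the following hypothetical Ada sketch splits a summation in two and hands one half to a new task at each level; Cilk+ and ParaSail would instead create very light-weight picothreads and schedule them with work stealing:

   --  Hypothetical divide-and-conquer summation using Ada tasks.
   with Ada.Text_IO;

   procedure Parallel_Sum is

      type Int_Array is array (Positive range <>) of Integer;

      function Sum (A : Int_Array) return Integer is
      begin
         if A'Length <= 4 then
            --  Small problem: solve it sequentially.
            declare
               Total : Integer := 0;
            begin
               for I in A'Range loop
                  Total := Total + A (I);
               end loop;
               return Total;
            end;
         else
            --  Large problem: split in two and sum the halves concurrently.
            declare
               Mid   : constant Positive := (A'First + A'Last) / 2;
               Left  : Integer := 0;
               Right : Integer := 0;
            begin
               declare
                  task Left_Half;
                  task body Left_Half is
                  begin
                     Left := Sum (A (A'First .. Mid));
                  end Left_Half;
               begin
                  Right := Sum (A (Mid + 1 .. A'Last));
               end;  --  the inner block waits here for Left_Half to finish
               return Left + Right;
            end;
         end if;
      end Sum;

      Data : constant Int_Array (1 .. 16) := (others => 1);

   begin
      Ada.Text_IO.Put_Line ("Sum =" & Integer'Image (Sum (Data)));
   end Parallel_Sum;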

MP2: Understanding Dynamic Memory Management in Safety Critical Java
Kelvin Nilsen (Atego, Inc.)

Audience: Computer scientists who are familiar with programming language compiler type checking and static analysis techniques.

Participants should be familiar with stack memory management as implemented by compiler-generated code in common programming languages like Ada and C. Participants do not need to understand the implementation of tracing garbage collection. Participants should also be familiar with verification and validation considerations related to assurance of stack memory usage in high-integrity systems. Two considerations of particular interest in this tutorial are prevention of stack overflow and elimination of dangling pointers.

In spite of the high-level abstraction benefits of automatic tracing garbage collection, the current prevailing sentiment within the safety certification community is that a simpler memory model is required for the most rigorous levels of software safety certification. Thus, the draft JSR-302 specification for safety-critical Java relies on scope-based memory allocation rather than tracing garbage collection. The scoped memory model of JSR-302 is a simplification of the model in the Real-Time Specification for Java (RTSJ), from which JSR-302 was derived. JSR-302 enforces a strict hierarchy of scopes and distinguishes private scopes, which can be seen by only one thread, from mission scopes, which can be accessed by all the threads that comprise a mission, including threads running within inner-nested sub-missions. The hierarchical memory structure allows implementations to guarantee the absence of memory fragmentation for scope management, unlike the RTSJ.

Because Java lacks Ada's block structure, it is more difficult in Java than in Ada to safely manage references to stack-allocated objects. While the simplified hierarchical management of scoped memory that is part of JSR-302 addresses memory fragmentation concerns, it does not by itself guarantee the absence of dangling pointers. As with the Real-Time Specification for Java, JSR-302 requires a run-time check to enforce that no reference assignment allows an outer-nested object to point to an inner-nested object. This rule assures the absence of dangling pointers, but it introduces a different problem: every assignment to a reference field must be accompanied by a run-time check to validate the appropriate scope-nesting relationship, and that check throws a run-time exception if the assignment is deemed inappropriate.

The safety certification evidence for a given safety-critical Java program must therefore include an argument, for every reference assignment, that it will not cause the program to abort with a run-time exception. An optional annotation checker, described in an appendix to the draft JSR-302 specification, confirms that all reference assignments are safe. This annotation checker requires application developers to associate symbolic names with scopes and enforces a one-to-one correspondence between particular classes and the scopes within which they are allowed to reside. Objects of classes that are not annotated, including the standard library classes, are not allowed to be seen within any scope except the one within which they are allocated. While this annotation system is sound, it is both restrictive and burdensome: many scope-safe assignments are prohibited by the conservative nature of the annotation checking, and the rules that are enforced require replication of code and copying of objects that would otherwise not be necessary.

An alternative annotation system that is implemented in the Perc Pico product is also discussed. This annotation system is based on a design that was explored within an Open Group forum as a precursor to formation of the JSR-302 expert group. This annotation system differs from the JSR-302 annotation system in that it treats scopes as implicit rather than first-class objects. Also, this approach does not name scopes or require that particular classes be instantiated only within particular scopes. Instead, the annotations assert relative nesting relationships between method arguments, local variables, and reference fields, and these assertions are enforced by the Perc Pico annotation checker. This annotation system has been evaluated in several experimental development efforts during the last several years, and has evolved slightly in response to feedback from those studies. A convention has been designed to allow mixing of the Perc Pico annotations with the optional JSR-302 annotations.

This tutorial describes the JSR-302 stack-memory allocation model, the JSR-302 optional annotation checker, the Perc Pico annotation checker, and the synergy mechanism that allows the two annotation systems to be applied to different parts of a large safety-critical Java application.

