
Public Deliverables

This page summarizes all public deliverables of the SALTY project (documents and software). Note that these deliverables concern WPs 1 and 4: WP 2 and WP 3 provide, respectively, the design and the implementation of the different parts of the SALTY framework, all put together incrementally in the WP 1 outputs.

WP 1: Requirements and Architecture

WP 4: Use Cases and Validation



Rapid Prototyping of Feedback Control Systems

Companion web page for the paper submitted to ICAC'2012


To illustrate our approach, we take an example scenario from a High-Throughput Computing (HTC) environment. We consider a Condor infrastructure for executing large scientific workflows. With too many concurrently running workflow enacting engines (DAGMan) submitting lots of jobs, the Condor scheduler can become a bottleneck and eventually overload, degrading the overall system throughput.

Our objective is therefore to quickly graft a control system that ensures a high throughput while preventing overload of the infrastructure.

The following figure shows the architecture model developed for this scenario:

The trigger periodically (every t seconds) observes the state of the system using these three sensors:

  - serviceRate (a CondorServiceRate), which measures the rate at which the Condor scheduler services jobs,
  - processCounter (a ProcessCounter), which counts the running condor_dagman processes (N),
  - queueStat (a CondorQueueStat), which measures the length of the Condor queue.

The queueStat and serviceRate outputs are stabilized by the moving averages queueStatAvg and serviceRateAvg before they are passed, together with the process count N, to the submissionRateController. This controller is responsible for computing the delay that will be imposed system-wide on all running DAGMans.
In our example, the delayer simply writes the delay into a file, from which DAGMan reads it the next time it tries to submit a job.
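The loop described above can be sketched in plain Python. This is only a hypothetical illustration of the technique, not the project's implementation: the window size, the control law in compute_delay, the target queue length and the file name are all assumptions made here for the example.

```python
from collections import deque


class MovingAverage:
    """Stabilizes a noisy sensor signal over a sliding window
    (the role of the serviceRateAvg / queueStatAvg filters)."""

    def __init__(self, window=5):
        self.values = deque(maxlen=window)

    def push(self, value):
        self.values.append(value)
        return sum(self.values) / len(self.values)


def compute_delay(queue_len_avg, service_rate_avg, n_dagmans,
                  target_queue_len=1000):
    """Toy control law for the submissionRateController: when the averaged
    queue length exceeds a target, impose a per-submission delay (seconds)
    that spreads the drain time of the excess over the running DAGMans."""
    if service_rate_avg <= 0 or n_dagmans <= 0:
        return 0
    excess = queue_len_avg - target_queue_len
    if excess <= 0:
        return 0
    return int(excess / service_rate_avg / n_dagmans)


def write_delay(delay, path="dagman.delay"):
    """The delayer effector: writes the delay into a file that DAGMan
    consults before its next submission (file name is an assumption)."""
    with open(path, "w") as f:
        f.write(str(delay))
```

Note that the moving-average window and the control law would in practice be tuned to the workload; the point here is only the shape of the sensor-filter-controller-effector chain.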

Next, we extend the PeriodicTrigger type to provide an effector setPeriod that allows us to change the initial trigger period (t, the initialPeriod property), and the CondorQueueStat sensor type to provide information about how long the last execution of the condor_q command took. The last step is to connect them to a new controller triggerRateController : TriggerRateController that will be responsible for adjusting the trigger rate (t) based on the execution time of condor_q.
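This second loop can also be sketched in Python. Again a hypothetical illustration: timing condor_q is straightforward, but the adjust_period law and its headroom and min_period parameters are assumptions made for this sketch, not the project's actual controller.

```python
import subprocess
import time


def timed_condor_q(cmd=("condor_q",)):
    """Runs condor_q and measures how long it takes; the measured time is
    what the execTime sensor of CondorQueueStat would expose (feeding the
    execTimeAvg moving average)."""
    start = time.monotonic()
    subprocess.run(cmd, capture_output=True)
    return time.monotonic() - start


def adjust_period(exec_time_avg, headroom=10.0, min_period=1.0):
    """Toy control law for the triggerRateController: keep the trigger
    period t at least `headroom` times the averaged condor_q execution
    time, so that observing the queue never itself overloads the
    scheduler, while never dropping below a minimal period."""
    return max(min_period, headroom * exec_time_avg)
```

The design intuition is that monitoring must stay cheap relative to the monitored system: as condor_q gets slower under load, the observation period stretches proportionally.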

Note: this example is not meant to reflect a real-world Condor deployment. Like the presented adaptation model, its main purpose is to illustrate the capabilities of the proposed architecture modeling and supporting tools as an overall approach to the engineering of such systems.

Architecture Model

system {

    // data types in the system
    typedef float
    typedef int32
    typedef string

    typedef struct SubmissionRateInput {
        serviceRateAvg : float
        processCounter : int32
        queueStatsAvg : int32
    }

    // default data and control link types
    // in this scenario we do not have any requirements on a specific link behavior
    dlink DL
    clink CL

    <typename T> filter<<T>> MovingAverage {
        // note: at the type level we can omit the link mode if the component
        // is agnostic to whether the data are pushed to it or it has to pull
        // them; the extra pull code will be added based on the actual mode
        // setting at the instance level
        required dlink<<T>> input : DL
    }

    <typename T> active sensor<<T>> PropertyGetter

    <typename T> effector<(<T>)> PropertySetter

    <typename T> active filter<<T>> PeriodicTrigger {
        property<float> initialPeriod

        required observing dlink<<T>> input : DL

        provided sensor period : PropertyGetter<T=float>
        provided effector setPeriod : PropertySetter<T=float>
    }

    sensor<int32> ProcessCounter {
        required property<string> processName
    }

    filter<SubmissionRateInput> Synchronizer {
        required observing dlink<float> serviceRate : DL
        required observing dlink<int32> processCounter : DL
        required observing dlink<int32> queueStat : DL
    }

    sensor<float> CondorServiceRate {
        required property<string> condorConfigPath
    }

    sensor<int32> CondorQueueStat {
        required property<string> condorConfigPath

        provided sensor execTime : PropertyGetter<T=float>
    }

    effector<(int32)> CondorDAGManDelay

    controller SubmissionRateController {
        required notifying dlink<SubmissionRateInput> input : DL
        required clink<(int32)> delay : CL
    }

    controller TriggerRateController {
        required notifying dlink<float> input : DL
        required clink<(float)> period : CL
    }

    main composite Main {
        required property<string> condorConfigPath = string("condor_config")

        // the main sensors
        // at this point we need to specify all required properties
        feature serviceRate : CondorServiceRate (condorConfigPath(:condorConfigPath))
        feature processCounter : ProcessCounter (processName(string("condor_dagman")))
        feature queueStat : CondorQueueStat (condorConfigPath(:condorConfigPath))

        // averages
        // at this point we need to specify the mode of the link that has been left
        // unspecified in the type declaration
        // also the appropriate type parameters need to be specified
        feature serviceRateAvg : MovingAverage<T=float> (input(observing))
        feature queueStatAvg : MovingAverage<T=int32> (input(observing))
        feature execTimeAvg : MovingAverage<T=float> (input(notifying))

        // a latch for synchronizing the input
        feature sync : Synchronizer

        // the period trigger
        feature trigger : PeriodicTrigger<T=SubmissionRateInput>

        // controllers
        feature submissionRateController : SubmissionRateController
        feature triggerRateController : TriggerRateController

        // delayer
        feature delayer : CondorDAGManDelay 

        // bindings - 1. loop
        dbind serviceRate to serviceRateAvg.input as b1
        dbind queueStat to queueStatAvg.input as b3
        dbind processCounter to sync.processCounter as b2
        dbind serviceRateAvg to sync.serviceRate as b4
        dbind queueStatAvg to sync.queueStat as b5
        dbind sync to trigger.input as b6
        dbind trigger to submissionRateController.input as b7
        cbind delayer to submissionRateController.delay as b8

        // bindings 2. loop
        dbind queueStat.execTime to execTimeAvg.input as b9
        dbind execTimeAvg to triggerRateController.input as b10
        cbind trigger.setPeriod to triggerRateController.period as b11
    }
}

SPIN Verification Support

The architecture model can be translated into a Promela model:

The SPIN verifier can then be used to validate some LTL formulas:
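As an illustration of the kind of property such a translation enables (a hypothetical example: the predicate names queueOverloaded and delayImposed are assumptions, not taken from the actual Promela model), one could check that an overloaded queue always eventually triggers a submission delay:

```latex
% liveness: whenever the queue is overloaded,
% a submission delay is eventually imposed
\Box \, (\mathit{queueOverloaded} \rightarrow \Diamond \, \mathit{delayImposed})
```

In SPIN's concrete syntax this would be written with the [] (always) and <> (eventually) operators over Boolean expressions on the model's state.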


SALTY (Self-Adaptive very Large disTributed sYstems) is an ANR funded research project (Agence Nationale de la Recherche - ANR-09-SEGI-012). It aims at providing an innovative self-managing software framework at run-time for Very-Large Scale Distributed Systems (VLSDS).

In a few years, the software industry has adopted the service architecture paradigm to manage complexity, heterogeneity, adaptability and costs. The growing demand leads to the deployment of ever-larger scale systems, whose reliability or performance is impaired by hardly predictable events (software faults, hardware failures, mobility, etc.). Consequently, a lot of work on software and hardware self-adaptation has been carried out and deployed in the field (redundancy, resource reservation, scheduling, etc.), but none of it is designed to address run-time self-adaptation of VLSDSs that consistently federate local platforms into a global distributed system to support collaborating applications.

SALTY addresses this challenge by considering the complementarities between two major trends in computer science and computer system engineering: Service Oriented Architecture (SOA) and Autonomic Computing. The scientific breakthroughs that have to be achieved in order to fill this gap are:

  1. Making run-time self-adaptation capabilities a first class concern into VLSDSs,
  2. Making self-adaptation capabilities an effective tool in the hands of software engineers.

The project has to go beyond the state of the art in the domains of MDE for very large scale systems, service and component infrastructures, workflows, autonomic computing and self-adaptation, decision-making processes, service-level agreements and contracting, and large-scale deployment. Versatile SOA usages are considered through a general Service Component Architecture (SCA) basis, compliant with up-to-date SOA standards. This basis will be augmented with mechanisms enabling adaptation to unexpected events and to new missions required by system users, such as deployment of large-scale distributed SOA-based systems, dynamic reconfiguration and contract management.

The adaptation process will be driven by a decision-making framework for distributed systems able to autonomously decide for local adaptations or more global adaptations, taking into account tradeoffs between cost, performance and availability.

The key achievement of SALTY towards the next generation of VLSDSs is the provision of a software framework, covering both design time and runtime, that supports scalable self-adaptation capabilities. The framework will make use of standard distributed reflective middleware to support the implementation of an adaptable architecture (the PETALS Enterprise Service Bus and FraSCAti, its SCA support).

Two use cases covering wide application domains are used to validate the proposed approach and tooling: path-tracking of very large truck fleets via multi-means geo-positioning, and the study of Alzheimer's disease through huge image-database analysis pipelines over a production GRID. They are emblematic (1) of the tackled issues and of the technological evolutions that created them: respectively, run-time adaptation and evolution, and the increasing complexity of computer-based infrastructure design; and (2) of the economic context in which they lie: the continuous optimization of the tradeoffs between costs related to end-user usage, the Total Cost of Ownership and the Quality-of-Service of the infrastructures.

The SALTY consortium is well balanced: 4 academic partners, 3 small-to-medium enterprises, and one large company.

The project also has the approval of two competitiveness poles, SCS and Systematic.


Project Leader: Philippe Collet (UNS), Philippe dot Collet @ unice dot fr

Publications in the SALTY project







M9 (01/08/2010)

Requirements on the use cases' domains and on technical aspects are to be determined: done! See Deliverables

M15 (01/02/2011)

First models composing the autonomic framework are to be specified and put together so that the overall functioning can be assessed: done!

M18 (01/05/2011)

First implementations of both use cases are to be provided: done! See Deliverables

M24 (01/11/2011)

First implementation of the framework will be available.

M36 (01/11/2012)

The complete framework is available, and is demonstrated on use cases.