Monday, March 7, 2011

Self-adaptive deployment with Disnix

In an earlier blog post, I described Disnix, a Nix-based distributed service deployment system. Disnix uses declarative specifications of the services, the infrastructure and the distribution of services to machines to automatically, reliably and efficiently deploy a service-oriented system, using the purely functional deployment properties of Nix.
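
As a refresher, the sketch below shows a minimal services model with two hypothetical services. The exact function header and the customPkgs package set are illustrative and may differ slightly between Disnix versions; the service names are made up for this post:

{ distribution, system, pkgs }:

let
  customPkgs = import ./pkgs { inherit pkgs system; };  # hypothetical package set providing the components
in
rec {
  staff = {
    name = "staff";
    pkg = customPkgs.staff;
    type = "mysql-database";
  };

  StaffTracker = {
    name = "StaffTracker";
    pkg = customPkgs.StaffTracker;
    dependsOn = { inherit staff; };   # inter-dependency on the staff database
    type = "tomcat-webapplication";
  };
}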

Although Disnix offers useful deployment features, the networks in which service-oriented systems are deployed are often dynamic. Various events may occur, such as a machine crashing and disappearing from the network, a new machine with new capabilities being added, or a capability changing, e.g. an increase in memory. Such events may partially or completely break a system, or render a deployment scenario suboptimal.

When such an event occurs, a redeployment may be required to fix the system or to adapt the deployment so that all desired non-functional constraints are satisfied again, which takes some effort. The infrastructure model must be updated to reflect the new configuration of machines. Moreover, a new distribution model must be written that maps services to the right machines in the network. Manually updating these models every time an event occurs is often a complex and time-consuming process.


To deal with such events, we have developed a self-adaptive framework on top of Disnix, shown in the figure above, which automatically redeploys a system so that the desired non-functional constraints remain satisfied when an event occurs.

At the top, an infrastructure generator is shown. This tool generates a Disnix infrastructure model using a discovery service, capturing the machines that are present in the network together with their capabilities and properties.
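
To give an impression, a generated infrastructure model is an ordinary Nix attribute set describing each discovered machine and its published properties. The fragment below is a hypothetical sketch; the machine names and most properties are made up, although supportedTypes and cost return later in this post:

{
  test1 = {
    hostname = "test1.example.net";   # how the machine can be reached
    system = "x86_64-linux";          # Nix system architecture of the target
    supportedTypes = [ "tomcat-webapplication" "mysql-database" "process" ];
    cost = 3;                         # property used by the QoS policy shown later
  };

  test2 = {
    hostname = "test2.example.net";
    system = "x86_64-linux";
    supportedTypes = [ "mysql-database" "process" ];
    cost = 2;
  };
}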

The generated infrastructure model, however, may not contain all properties required for deployment, such as authentication credentials and other privacy-sensitive information. The infrastructure augmenter adds these additional properties to the discovered infrastructure model.
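
What the augmentation boils down to is merging extra attributes into each discovered target. The sketch below illustrates the idea as a plain Nix function over the discovered model; the exact interface of the infrastructure augmenter may differ, and the credential names are made up:

{ infrastructure, lib }:

lib.mapAttrs (targetName: target:
  target // {
    # Privacy-sensitive properties that a discovery service should not publish
    mysqlUsername = "admin";
    mysqlPassword = "secret";
  }
) infrastructure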

Then the distribution generator is invoked with the services model and the generated infrastructure model. This tool generates a Disnix distribution model, using a policy described in a QoS model to dynamically map services to machines based on non-functional properties defined in the services and infrastructure models. Various algorithms can be invoked from the QoS model to generate a suitable mapping.
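
The end result of this step has the shape of an ordinary Disnix distribution model: a function taking the infrastructure model and mapping each service to a list of target machines. A generated model could, hypothetically, look like the fragment below; the actual mapping depends entirely on the policy described in the QoS model:

{ infrastructure }:

{
  StaffTracker = [ infrastructure.test1 ];
  staff        = [ infrastructure.test2 ];
}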

Finally, the generated infrastructure and distribution models, along with the services model, are passed to disnix-env, which performs the actual (re)deployment of the system as reliably and efficiently as possible.
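
In other words, the final step is just a regular Disnix deployment. Assuming the generated models have been written to files next to the services model (the file names below are placeholders), the invocation looks something like:

$ disnix-env -s services.nix -i infrastructure-generated.nix -d distribution-generated.nix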


The current implementation uses Avahi to capture the machines in the network and their properties, but it is not limited to a particular protocol. One can easily replace this tool with a different lookup service using SSDP or a custom protocol.

The distribution generator takes a very generic and extensible approach. In a QoS model, a developer can pick from a range of distribution filters and combine them to achieve a desired result. New algorithms can easily be integrated into this architecture.

{ services, infrastructure
, initialDistribution, previousDistribution
, filters
}:

# Select the cheapest set of machines that covers all services, where each
# machine has a fixed cost regardless of how many services it hosts
filters.minsetcover {
  inherit services infrastructure;
  targetProperty = "cost";

  # Candidate distribution: only map services to machines that support their type
  distribution = filters.mapAttrOnList {
    inherit services infrastructure;
    serviceProperty = "type";
    targetPropertyList = "supportedTypes";
  };
}

The code fragment above shows an example of a QoS model. This model is a function taking several arguments: the services model (provided by the user), the generated infrastructure model, an initial distribution model (which maps all services to all targets), a previous distribution model (which can be used to reason about upgrade effort) and a set of filter functions.

In the body of the expression, various filter functions are used. Each filter function takes a candidate distribution and returns a filtered distribution. These functions are composed to achieve a desirable mapping. In the example above, we first map every service having a specific type attribute to the machines that support that particular type (i.e. whose supportedTypes attribute in the infrastructure model contains it in its list of possibilities). Using this filter function we can, for example, map Apache Tomcat web applications to servers running Apache Tomcat, and MySQL databases to servers running a MySQL DBMS. This is a crucial feature, as we don't want to deploy a service to a machine that is not capable of running it.
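
To make the type matching concrete, consider the hypothetical services and infrastructure sketches from earlier in this post: only test1 advertises tomcat-webapplication among its supportedTypes, while both machines advertise mysql-database. Shown in distribution-model notation, the candidate distribution produced by the type filter would then be:

{
  StaffTracker = [ infrastructure.test1 ];                      # only test1 supports tomcat-webapplication
  staff        = [ infrastructure.test1 infrastructure.test2 ]; # both machines support mysql-database
}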

In the outer call, we apply a minimum set cover approximation method to the result of the previous filter. This function can be used to find a minimum-cost deployment, given that every machine has a fixed cost, no matter how many services it hosts.
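
As a small worked example, take the hypothetical costs from the earlier infrastructure sketch: test1 (cost 3) covers every service type in the system, while test2 (cost 2) only covers MySQL databases and generic processes. Covering all services with test1 alone costs 3, whereas using both machines costs 3 + 2 = 5, so the set cover approximation discards test2 and concentrates the deployment on test1:

{
  StaffTracker = [ infrastructure.test1 ];
  staff        = [ infrastructure.test1 ];
}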


We have applied the dynamic Disnix framework to a number of use cases, such as several toy systems, ViewVC and an industrial case study from Philips Research.

The dynamic Disnix framework is still under heavy development and is going to be part of Disnix 0.3. It supports a number of algorithms and division strategies. For more information, have a look at the paper 'A Self-Adaptive Deployment Framework for Service-Oriented Systems', available from my publications page. I will present this paper at SEAMS 2011 (co-located with ICSE 2011) in Waikiki, Honolulu, Hawaii. Once I have prepared them, you can find the slides on my talks page. See you in Hawaii!

1 comment:

  1. The new link for the 'A Self-Adaptive Deployment Framework for Service-Oriented Systems' paper:
    https://dl.acm.org/doi/10.1145/1988008.1988039
    (from the list of publications at http://sandervanderburg.nl/index.php/publications)
