Quantitative Assessments of Distributed Systems: Methodologies and Techniques
Ebook · 642 pages · 7 hours

About this ebook

Distributed systems employed in critical infrastructures must fulfill dependability, timeliness, and performance specifications. Since these systems most often operate in an unpredictable environment, their design and maintenance require quantitative evaluation of deterministic and probabilistic timed models. This need gave birth to an abundant literature devoted to formal modeling languages combined with analytical and simulative solution techniques.

The aim of the book is to provide an overview of techniques and methodologies dealing with such specific issues in the context of distributed systems and covering aspects such as performance evaluation, reliability/availability, energy efficiency, scalability, and sustainability. Specifically, techniques for checking and verifying if and how a distributed system satisfies the requirements, as well as how to properly evaluate non-functional aspects, or how to optimize the overall behavior of the system, are all discussed in the book. The scope has been selected to provide a thorough coverage of issues, models, and techniques relating to validation, evaluation and optimization of distributed systems. The key objective of this book is to help bridge the gaps between modeling theory and practice in distributed systems through specific examples.

Language: English
Publisher: Wiley
Release date: Apr 8, 2015
ISBN: 9781119131137



    Quantitative Assessments of Distributed Systems - Dario Bruneo

    Preface

    Modern technology has to provide services and systems able to meet ever-increasing quality standards while minimizing costs. A way to pursue such a goal is through distributed systems, which implement multiple, complex operations to manage user demand, thereby ensuring adequate quality levels. A distributed system consists of a collection of interconnected (autonomous) entities, subsystems or systems, properly managed and coordinated to achieve a common goal, so that it is perceived as a whole, single, integrated facility.

    Distributed systems are usually a melting pot of heterogeneous technologies and processes (computing, networking, manufacturing, marketing, mechanical, economical, biological, etc.) involving complex interactions (dependencies, influences, interferences, etc.). In order to achieve an adequate quality level, not only do basic functionalities have to be provided through adequate mechanisms, but also advanced ones implementing specific quality-driven policies. That way, both functional and non-functional aspects and properties become key issues to address during the whole system/product/process lifecycle, at design time and run time as well as at maintenance/testing stages, which call for adequate methodologies and techniques for their evaluation.

    Indeed, distributed systems, in particular those which are part of critical infrastructures, have to meet tight dependability, timeliness, and performance requirements and specifications. Since these systems most often operate in an unpredictable environment, their design and maintenance require quantitative evaluation of deterministic and probabilistic timed models. Specifically required are techniques for checking and verifying if and how a distributed system satisfies the requirements (verification), as well as properly evaluating its nonfunctional aspects (evaluation) or optimizing the overall behavior of the system (optimization). Verification is a process of system quality management by which a product, service or system is checked, inspected and/or tested to verify that the requirements are satisfied. It is mainly applied at early design stages to check the system properties through specific logic statements. Evaluation refers to the act of evaluating the system’s nonfunctional properties such as performance, reliability, and availability. Optimization is instead related to the identification and selection of the best configuration available for the distributed system according to some given (usually multiple) parameters, in order to meet high-level requirements such as overall costs and sustainability.

    The boundaries among verification, evaluation, and optimization techniques and methodologies are blurred, i.e., verification techniques often include evaluation and/or optimization ones and vice versa. In particular, evaluation and optimization often overlap, but a difference between them could consist of the number of properties considered: the former usually investigates a single nonfunctional aspect of the system, while optimization problems usually evaluate the system taking into account multiple, complex, and/or composed properties such as dependability, performability, and sustainability, often also including costs. In any case, all of them rely on models to provide their useful insights. All such considerations and needs have given birth to an abundance of literature devoted to formal modeling languages combined with analytical and simulation solution techniques. The aim of this book is to provide an overview of techniques and methodologies dealing with such specific issues in the context of distributed systems and to cover aspects such as correctness, validity, performance, reliability, availability, energy efficiency, and sustainability.

    Following this path, the book has been organized in three parts dealing with verification (Part 1), evaluation (Part 2), and optimization and sustainability (Part 3) problems and issues, providing and discussing related models and techniques for investigating nonfunctional properties of distributed systems. The chapters have been selected to provide a good, although not exhaustive, coverage of issues, models and techniques related to validation, evaluation and optimization of distributed systems, in the hope that this will be useful in guiding students, researchers, and practitioners when approaching the quantitative assessment of distributed systems. Indeed, a key objective of this book is to help bridge the gaps between modeling theory and practice in the context of distributed systems through specific examples.

    Specifically, included in Part 1 are three contributions on verification and model-checking models and techniques for distributed systems. Chapter 1 by Marco Beccuti, Giuliana Franceschinis, and Jeremy Sproston addresses the problem of decision making on distributed systems through a high-level probabilistic model checking formalism based on Markovian models, applied to a case study on a peer-to-peer illegal botnet. Chapter 2 by Rocco De Nicola, Diego Latella, and Michele Loreti focuses on the verification of nonfunctional properties in distributed systems through statistical model-checking techniques implemented in the StoKlaim tool, and adopted in the analysis of three election algorithms. Chapter 3 by Elvio G. Amparore and Susanna Donatelli proposes a stochastic model-checking technique to investigate stochastic path properties of distributed systems, which has been applied to the evaluation of a flexible manufacturing system.

    Part 2 focuses on the evaluation of nonfunctional properties of distributed systems and is composed of five chapters. Chapter 4 by Vitali Volovoi and Shahnewaz Siddique deals with reliability and failure propagation issues through two different strength/load interaction models adopted in the modeling of scale-free phenomena and self-organized criticality. Chapter 5 by Andras Horvath, Marco Paolieri, and Enrico Vicario addresses the problem of fitting statistical data through matrix exponential distributions, proposing a new approach based on Bernstein expolynomials applied to the representation of some well-known distributions and to the evaluation of a whole distributed system example. Chapter 6 by Anne Bouillard and Giovanni Stea is related to the performance evaluation of tandem queueing systems through network calculus, proposing a solution technique based on integer programming that is applied to a tandem scenario network. Chapter 7 by Massimo Ficco, Massimiliano Rak, Salvatore Venticinque, Luca Tasquier, and Giuseppe Aversano deals with techniques for benchmarking and monitoring different metrics in Cloud computing, comparing several available solutions. Chapter 8 by Enrico Barbierato, Marco Gribaudo, and Mauro Iacono proposes multi-formalism approaches for evaluating complex phenomena and multiple quantities in distributed systems, providing several examples in computing contexts such as service-oriented architecture, distributed software, and Big Data.

    Part 3 deals with optimization of distributed systems considering multiple metrics, proposing different techniques in five chapters. Chapter 9 by Salvatore Cavalieri, Ferdinando Chiacchio, Gabriele Manno, and Peter Popov deals with performability and dependability evaluation of networks through Stochastic Activity Networks and Adaptive Transition Systems used in the evaluation of two case studies on telecommunication and power grid contexts. Chapter 10 by Almir P. Guimaraes, Paulo Maciel, and Rivalino Matias Jr. focuses on the design of IT infrastructure, proposing a quasi-optimal design strategy for data centers implementing a trade-off between technical and business aspects based on Petri nets and reliability block diagrams. It has been adapted to different data center configurations, comparing them through several performance/dependability-oriented and business-oriented metrics. Chapter 11 by Javier Alonso and Kishor S. Trivedi deals with software degradation due to aging phenomena, and also discusses several software rejuvenation techniques through examples on distributed computing systems. Chapter 12 by Diego Rughetti, Pierangelo Di Sanzo, Francesco Quaglia, and Bruno Ciciani proposes machine learning techniques for dealing with data management in distributed infrastructures, considering both quality of service requirements and costs, which are then applied to a real case study on the Amazon Elastic Compute Cloud infrastructure. Chapter 13 by Aris Leivadeas, Chrysa Papagianni, and Symeon Papavassiliou focuses on energy efficiency, sustainability, performance, and costs of networked Cloud computing, proposing a specific framework and simulation technique for the analysis of related infrastructures, which are then applied to a datacenter evaluation.

    The chapters have been written by more than 40 leading experts in distributed systems, modeling formalisms, and evaluation techniques, from both academia and industry. We wish to thank all of them for their contributions and cooperation. Special thanks go to the Scrivener staff, and in particular to Martin Scrivener, who patiently supported us, and also to Krishna B. Misra and John Andrews for their valuable advice. We hope that practitioners will find this book useful when looking for solutions to practical problems, and that researchers can consider it as a first-aid reference when dealing with distributed systems from a quantitative perspective.

    Dario Bruneo and Salvatore Distefano

    Messina, Italy, January 2015

    PART I

    VERIFICATION

    CHAPTER 1

    MODELING AND VERIFICATION OF DISTRIBUTED SYSTEMS USING MARKOV DECISION PROCESSES

    MARCO BECCUTI¹, GIULIANA FRANCESCHINIS² AND JEREMY SPROSTON¹

    ¹Dipartimento di Informatica, Università di Torino, Italy.

    {beccuti,sproston}@di.unito.it

    ²DiSIT, Istituto di Informatica, Università del Piemonte Orientale, Italy.

    [email protected]

    Abstract.

    The Markov Decision Process (MDP) formalism is a well-known mathematical formalism for studying systems with unknown scheduling mechanisms or with transitions whose next-state probability distribution is not known with precision. Analysis methods for MDPs are generally based on the identification of the strategies that maximize (or minimize) a target function based on the MDP’s rewards (or costs). Alternatively, formal languages can be defined to express quantitative properties that we want to be ensured by an MDP, including those which extend classical temporal logics with probabilistic operators.

    The MDP formalism is low level: to facilitate the representation of complex real-life distributed systems, higher-level languages have been proposed. In this chapter we consider Markov Decision Well-formed Nets (MDWN), which are probabilistic extensions of Petri nets that allow one to describe complex nondeterministic (probabilistic) behavior as a composition of simpler nondeterministic (probabilistic) steps, and which inherit the efficient analysis algorithms originally devised for well-formed Petri nets. The features of the formalism and the type of properties that can be studied are illustrated by an example of a peer-to-peer illegal botnet.

    Keywords. Markov decision processes, modeling and verification.

    1.1 Introduction

    The mathematical formalism of Markov Decision Processes (MDPs) was introduced in the 1950s by Bellman and Howard [17, 7] in the context of operations research and dynamic programming, and has been used in a wide range of disciplines including economics, manufacturing, robotics, automated control and communication systems. An MDP can be regarded as a Markov chain extended with nondeterministic choice over actions, and is typically equipped with rewards (or costs) associated with transitions from state to state.

    A key notion for MDPs is that of strategy, which defines the choice of action to be taken after any possible time step of the MDP. Analysis methods for MDPs are based on the identification of the strategies which maximize (or minimize) a target function either based on the MDP’s rewards (or costs), or based on properties satisfied by the MDP’s execution paths. For example, in a distributed system, there may be different recovery and preventive maintenance policies (modeled by different actions in the MDP); we can model the system using an MDP in order to identify the optimal strategy with respect to reliability, e.g., the optimal recovery and preventive maintenance policy that maximizes system availability. Reward-based performance indices rely on standard methods for MDPs, whereas path-based properties rely on probabilistic model checking methods [8, 3].

    It is important to observe that the formalism of MDPs is low level, and it could be difficult to represent directly at this level a complex real-life distributed system. To cope with this problem, a number of higher-level formalisms have been proposed in the literature (e.g., stochastic transition systems [13], dynamic decision networks [14], probabilistic extensions of reactive modules [1], Markov decision Petri nets and Markov decision well-formed nets [5], etc.).

    In this chapter we introduce the MDP formalism in the context of distributed systems and discuss how to express and compute (quantitative) properties which should be ensured by an MDP model (Sec. 1.2). Markov decision well-formed nets (MDWNs) are then presented, highlighting how they can be a good choice for modeling multi-component distributed systems (Sec. 1.3) such as an illegal botnet example. Standard MDP analysis and probabilistic model checking techniques are used to compute a number of performance indices on the illegal botnet example (Sec. 1.4).

    An application example: peer-to-peer botnet. The application example presented in this chapter is inspired by the peer-to-peer illegal botnet model presented in [23]. Illegal botnets are networks of compromised machines under the remote control of an attacker, who is able to use the computing power of these compromised machines for different malicious purposes (e.g., e-mail spam, distributed denial-of-service attacks, spyware, scareware, etc.). Typically, infection begins by exploiting web browser vulnerabilities or by using a specific malware (a Trojan horse) to install malicious code on a target machine. Then the injected malicious code begins its bootstrap process and attempts to join the botnet. When a machine is connected to the botnet it is called a bot, and can be used for a malicious purpose (we say that it becomes a working bot) or specifically to infect new machines (it becomes a propagation bot). This choice is a crucial aspect for the success of the malicious activity, meaning that the trade-off between the number of working bots and the number of propagation bots should be carefully investigated. To reduce the probability of being detected, the working and propagation bots are inactive most of the time. A machine can only be recovered if anti-malware software discovers the infection, or if the computer is physically disconnected from the network.

    Our MDP model is similar to that of [23], apart from the fact that we let the choice of the type of malicious activity, working or propagating, be nondeterministic, rather than a fixed probabilistic choice. In this way, we represent all possible choices of assignment of activity to an infected machine, including dynamic strategies that adapt their behavior to the current global state of the botnet. We consider performance indices such as the average number of working or propagation bots at time t, and the probability that the number of working machines exceeds some threshold within time t. The performance indices obtained from our model are often significantly different from those obtained from a purely probabilistic version of the model in which the choices of activity of a newly infected machine have equal probability.

    1.2 Markov Decision Processes

    Since the aim of this chapter is to describe how dynamic distributed systems can be effectively studied using MDPs, in this section we introduce the MDP formalism, while in the next section we consider a higher-level formalism for the description of systems, based on MDPs (more precisely, MDPs provide the underlying semantics of the high-level formalism).

    An MDP comprises a set of states, which for the purposes of this chapter we can consider as being finite, together with a description of the possible transitions among the states. In MDPs the choice as to which transition to take from a state s is made in two phases: the first phase comprises a nondeterministic choice among a number of actions available in the state s, whereas the second phase comprises a probabilistic choice between the possible target states of the transition. The probability distribution used to choose the next state of the model in the second phase is determined by the choice of action made in the first phase.

    The possibility to combine both nondeterministic and probabilistic choice in MDPs is useful in a number of different contexts. In the context of the formal modeling of systems, nondeterministic choice can be used to represent such factors as interleaving between concurrent processes, unknown implementation details, and (automatic or manual) abstraction.

    In the following, we use a set of atomic propositions denoted by AP, which will be used to label the states of an MDP. For example, states corresponding to a system error will be labeled with a certain atomic proposition to distinguish them from non-error states.

    A discrete probability distribution over a finite set Q is a function μ : Q → [0, 1] such that ∑q∈Q μ(q) = 1. We use Dist(Q) to denote the set of distributions over Q.

    We now formally present the definition of MDPs. An MDP M = (S, A, p, r, l) comprises: 1) a set S of states; 2) a set A of actions; 3) a partial transition function p : S × A → Dist(S); 4) a partial reward function r : S × A → ℚ; 5) a labeling function l : S → 2^AP.

    The transition function, when defined for a state s ∈ S and action a ∈ A, maps to a distribution p(s, a) over states. For each state s ∈ S, we let As denote the set of actions a ∈ A for which p(s, a) is defined. We assume that As is non-empty for each state s ∈ S. Intuitively, when in the state s, the choice of action a is made nondeterministically from As, and subsequently the next state will be s′ with probability p(s, a)(s′) (see Fig. 1.1). The partial reward function is defined for all states s ∈ S and actions a ∈ As, and maps to a rational value r(s, a). The labeling function associates with each state s ∈ S a set of atomic propositions l(s), which represents a set of observable events that are satisfied in s. The labeling function is used for analyses based on probabilistic model checking, which will be considered in later sections.

    Figure 1.1 A portion of a generic MDP
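    To make the two-phase definition concrete, here is a minimal sketch in Python of the tuple (S, A, p, r, l); the dictionaries encode the partial functions p and r, and the toy states and actions ("ok", "err", "wait", "risk") are hypothetical, not taken from the chapter.

```python
from dataclasses import dataclass

# A minimal sketch of a finite MDP M = (S, A, p, r, l). The transition and
# reward functions are partial: they are defined only for the (state, action)
# pairs that appear as keys of the dictionaries.
@dataclass
class MDP:
    states: set          # S
    actions: set         # A
    p: dict              # (s, a) -> {s': probability}, a distribution in Dist(S)
    r: dict              # (s, a) -> reward (a rational value)
    label: dict          # s -> set of atomic propositions from AP

    def enabled(self, s):
        """A_s: the set of actions a for which p(s, a) is defined."""
        return {a for (s2, a) in self.p if s2 == s}

# A two-state toy model: in 'ok' we may 'wait' (stay put) or 'risk'
# (gain reward 1, but reach the error state with probability 0.1);
# 'err' is absorbing.
m = MDP(
    states={"ok", "err"},
    actions={"wait", "risk"},
    p={("ok", "wait"): {"ok": 1.0},
       ("ok", "risk"): {"ok": 0.9, "err": 0.1},
       ("err", "wait"): {"err": 1.0}},
    r={("ok", "wait"): 0.0, ("ok", "risk"): 1.0, ("err", "wait"): 0.0},
    label={"ok": set(), "err": {"error"}},
)
```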

    There is no unique probability measure associated with an MDP: instead there is a (generally countably infinite) number of probability measures, each of which corresponds to a different way of resolving the nondeterministic choice during the execution of the system. We use the notion of strategy (also called a scheduler or policy in the literature) to represent a particular selection of all of the possible nondeterministic choices to be made throughout the MDP’s execution. A strategy is a function mapping each possible execution history, representing the behavior of the MDP up to a certain point, to the next action to take. Each strategy defines a Markov chain, on which a probability measure over system executions can be defined in the standard way. We are generally interested in extremal strategies with respect to some criterion of performance or correctness: for example, we may wish to consider (optimal) strategies which result in the maximal discounted expected reward in steady state, or those which result in the minimum probability of system error.

    Computing optimal rewards and optimal strategies. The problem of computing a maximal (or minimal) reward and its associated optimal strategy for an MDP can be formulated as a linear program and solved in polynomial time (see, for example, [21]). However, linear programming is not practical for large MDPs, and therefore alternative solution techniques based on iterative methods have been proposed, such as value iteration and policy iteration. Value iteration [7] consists in the successive approximation of the required values. At each iteration, a new value for a state is obtained by taking the maximum (or minimum) of the values associated with the state’s outgoing actions. The value of an action is derived as a weighted sum over the values, computed during the previous iteration, of the possible next states, where the weights are obtained from the probability distribution associated with the action.

    Each iteration can be performed in time O(|S|² · |A|).

    In contrast, the policy iteration algorithm proposed by Howard [17] alternates between a value determination phase, in which the current policy is evaluated, and a policy improvement phase, in which an attempt is made to improve the current computed policy. The policy improvement step can be performed in O(|S|² · |A|), while the value determination phase can be performed in O(|S|³) by solving a system of linear equations. Therefore, for both methods, the total running time is polynomial if and only if the number of iterations required to find an optimal policy is polynomial.
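    As an illustration of value iteration, the following sketch (reusing the hypothetical MDP structure above) successively approximates the maximal discounted expected reward and then extracts a memoryless optimal strategy; the discount factor and convergence threshold are assumptions, not values from the chapter.

```python
def value_iteration(m, gamma=0.9, eps=1e-8):
    """Successive approximation of the maximal discounted expected reward:
    at each iteration, the new value of a state is the max over its enabled
    actions of r(s, a) + gamma * sum_{s'} p(s, a)(s') * V(s')."""
    v = {s: 0.0 for s in m.states}
    while True:
        v_new = {s: max(m.r[(s, a)]
                        + gamma * sum(pr * v[t] for t, pr in m.p[(s, a)].items())
                        for a in m.enabled(s))
                 for s in m.states}
        if max(abs(v_new[s] - v[s]) for s in m.states) < eps:
            return v_new
        v = v_new

def greedy_strategy(m, v, gamma=0.9):
    """Extract a memoryless optimal strategy from the converged values."""
    return {s: max(m.enabled(s),
                   key=lambda a: m.r[(s, a)]
                   + gamma * sum(pr * v[t] for t, pr in m.p[(s, a)].items()))
            for s in m.states}

v = value_iteration(m)   # in the toy model, always taking 'risk' from 'ok' is optimal
assert greedy_strategy(m, v)["ok"] == "risk"
```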

    Model checking for MDPs. Probabilistic model checking is an extension of model checking, a formal, automatic verification technique which has been used for the analysis of a wide variety of systems [12, 2]. In addition to a formal model of the system, model checking also requires a formal model of the correctness property (or properties) that the system model should ideally satisfy. Correctness properties are typically represented in temporal logic [22, 11], and refer to sequences of system events, such as a goal state is reached within 100 execution steps or a response always follows a request. A model-checking algorithm is executed to establish automatically whether the system model satisfies the property. Model checking has been extended to the case of probabilistic systems, including MDPs [8, 3]. Properties are specified in probabilistic extensions of classical temporal logic, and refer to the maximum or minimum probability of temporal logic properties over execution sequences; for example, a system may be regarded as correct if the maximum probability of reaching a goal state within 100 steps is greater than 0.9. Model checking for MDPs relies on the combination of the aforementioned techniques for computing optimal rewards and strategies together with techniques from the field of (non-probabilistic) model checking. A comprehensive overview of probabilistic model checking for MDPs can be found in [15].
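    For the path-based side, a step-bounded property such as "the maximum probability of reaching a goal state within 100 steps" can be computed with a simple backward recursion; a minimal sketch over the same hypothetical MDP structure:

```python
def max_reach_prob(m, goal, steps):
    """P_max(reach goal within `steps` steps), via the backward recursion
    P_k(s) = 1 if s is a goal state, else max_a sum_{s'} p(s,a)(s') * P_{k-1}(s')."""
    prob = {s: (1.0 if s in goal else 0.0) for s in m.states}
    for _ in range(steps):
        prob = {s: 1.0 if s in goal else
                   max(sum(pr * prob[t] for t, pr in m.p[(s, a)].items())
                       for a in m.enabled(s))
                for s in m.states}
    return prob

# "The maximum probability of reaching an error-labeled state within 100 steps":
goal = {s for s in m.states if "error" in m.label[s]}
p_max = max_reach_prob(m, goal, 100)["ok"]   # = 1 - 0.9**100 in the toy model
```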

    1.3 Markov Decision Well-Formed Net formalism

    The MDP formalism is rather low level, since the system evolution must be expressed by explicitly describing all possible states, actions and corresponding probabilistic transitions. A higher-level (possibly domain-specific) description can ease the task of the modeler and reduce the risk of introducing errors. In this chapter, we concentrate on the MDWN formalism [5]. We introduce MDWNs in this section, assuming that the reader is familiar with the basic concepts of Well-Formed Nets (WNs, the details of which can be found in [10] or in Appendix A). The formalism is designed to ease the representation of multi-component (distributed) systems: a set C0 of component identifiers must always be defined in an MDWN model.

    An MDWN model is composed of two WN subnets, the probabilistic subnet Npr (enriched with a transition weight function) and the nondeterministic subnet Nnd: these submodels represent respectively the probabilistic and nondeterministic behavior of an underlying MDP. The two submodels share the same set of color classes C (including the special color class C0, comprising controllable and non-controllable components) and the same set of places P. The sets of transitions are instead disjoint (Tpr ∩ Tnd = ∅), and are partitioned into two subsets, run and stop. Each transition is associated with a subset of components; for the transitions in Tnd these must be controllable components.

    Transitions of WNs represent parametric events: the transition parameters are typed variables (where the possible types are the color classes in C) appearing on the arcs connected to the transition. A transition instance is characterized by a binding (assigning a color from the appropriate class to each variable). In MDWN models (some of) the transition variables represent parametric components (compvar); hence a transition instance corresponds to an event where one component or a set of components are involved. If C0 has two or more static subclasses, each component variable of each transition may be constrained to belong to one static subclass (or to the union of some static subclasses).

    The formal definition of MDWN now follows. A Markov Decision Well-Formed Net is a tuple MDWN = ⟨Npr, Nnd, compvar⟩, where:

    ■ Npr is a WN with weights associated with the transitions (weightpr(t) : cd(t) → ℝ+);

    ■ Nnd is a WN;

    ■ let t ∈ Tpr ∪ Tnd and cdT(t) = ⟨var1 : type1, …, varn : typen⟩; compvar(t) is a subset of the variables in cdT(t), all of type C0, used to specify which components are involved in each instance of t. If |compvar(t)| > 1, the transition guard must ensure that the same component cannot be assigned to more than one variable. If the component corresponding to a given variable should belong to a specific static subclass of C0, the transition guard must imply this constraint.

    An MDWN model must also have an associated reward function, used to compute the corresponding MDP reward to be optimized: it is defined as a 3-tuple ⟨rs, rt, rg⟩, where:

    ■ rs is a P-indexed function rs(p) : c̃d(p) → ℝ, where c̃d(p) represents the projection of colors in cd(p) on its static partition. In other words, if c ∈ cd(p) is a tuple of colors from cd(p), then c̃ is the tuple obtained by substituting its elements with the identifier of the static subclass they belong to.

    ■ rt is a Tnd-indexed function rt(t) : c̃d(t) → ℝ, defining a reward value for each occurrence of a given element of the static partition of cd(t);

    ■ rg : ℝ × ℝ → ℝ is the function used to combine the values from rs and rt in a unique global reward value: for any given state s and action σ, the reward is defined as rg(rs(s), rt(σ)).

    The dynamics of an MDWN model is defined as follows: starting from the initial state s0, the nondeterministic subnet evolves until exactly one stop transition has fired for each controllable component; this kind of trace is called a maximal nondeterministic transition sequence (MNDTS), corresponding to a (composite) action a, and leading the system to an intermediate marking ms0,a where the probabilistic subnet must evolve. From such a state a maximal probabilistic transition sequence (MPTS) is fired, containing exactly one stop transition for each component in C0. A probability can be computed for any MPTS based on the function weightpr. Then the two steps are repeated, starting from the state reached by the MPTS. This behavior can be obtained by proper composition of the two subnets (see [5] for the details), and application of a standard Reachability Graph (RG) construction algorithm to the complete model. The MDP can be obtained from the RG as follows: the set of states S comprises the initial state and all the states in the RG reached by some MPTS. The actions associated with state s are As = {t_count(σ) : σ is an MNDTS that can be fired in s}, where t_count(σ) is a Tnd-indexed vector of natural numbers, counting how many times each transition instance fires in σ. For each a ∈ As the transition probability distribution corresponding to ms,a (the marking obtained by firing all the transitions of the MNDTS represented by a from state s) is obtained as follows: p(ms,a)(s′) = ∑σ′ Pr(σ′), where the sum ranges over the MPTSs σ′ that can fire from ms,a reaching s′, and Pr(σ′) is obtained as the product of the probabilities of each transition instance appearing in σ′, computed according to the function weightpr. The set of possible actions in each state s may be reduced in case there exist two (or more) different actions a and a′ such that ms,a = ms,a′: in this case only the action with optimal reward (maximal or minimal, depending on whether the reward should be maximized or minimized) is kept, assuming that the rg() function is monotone with respect to its second argument.
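    A minimal sketch of the probability computation just described, under the assumption (not spelled out in the chapter) that each firing's probability is the fired instance's weight from weightpr normalized over the weights of all enabled probabilistic instances; the data layout and helper names are hypothetical:

```python
from math import prod

# One MPTS is described as a list of (fired_weight, enabled_weights) pairs:
# at each step, the weight of the fired instance and the weights of all the
# probabilistic instances enabled at that point.
def mpts_probability(steps):
    """Pr(sigma'): product over the steps of the fired instance's weight,
    normalized by the total weight of the enabled instances."""
    return prod(w / sum(enabled) for w, enabled in steps)

def transition_probability(mpts_set):
    """p(m_{s,a})(s'): sum of Pr(sigma') over all the MPTSs from the
    intermediate marking m_{s,a} that reach s'."""
    return sum(mpts_probability(steps) for steps in mpts_set)
```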

    Due to the symmetric properties of the MDWN arc functions, guards, weights and reward, the RG (and MDP) size can be considerably reduced by automatically aggregating equivalent states and transitions. As for WNs, the aggregate RG can be generated directly, without first building the whole RG. In the next section an MDWN model of the peer-to-peer botnet is presented and studied.

    We now consider briefly an alternative high-level formalism for MDPs. Models of the probabilistic model checking tool PRISM [20] consist of a number of modules, each of which corresponds to a number of transitions. Each transition is guarded by a condition on the model’s variables, and the transitions of a module can update local variables of the module. Multiple transitions may be simultaneously enabled, and the choice between them is nondeterministic; the chosen transition determines a probabilistic choice as to how the variables should be updated. Modules may communicate through synchronization on shared actions with other modules. PRISM does not directly support a multistep nondeterministic or probabilistic transition accounting for the evolution of all components in a given time unit: this can be explicitly modeled by using a variable for each component which records whether the component has taken a transition in this time unit.

    We briefly mention other formalisms for MDPs. The modeling language MODEST [9] incorporates aspects from process modeling languages and process algebras, and includes MDPs as one of the many formalisms which it can express. Stochastic transition systems [13] also subsume MDPs, but also permit both exponentially timed and immediate transitions. A number of process algebras featuring nondeterministic and probabilistic choice have been introduced; we refer to [18] for an overview of a number of these.

    1.4 Case study: Peer-to-Peer Botnets

    In this section the MDWN model of the peer-to-peer botnet example is described and its quantitative properties are assessed using different analysis techniques.

    The probabilistic and nondeterministic subnets of the botnet model are shown in Figs. 1.2(a) and (b) respectively. In this model C contains only one color class, Mach, representing the computers that can potentially be infected (initially all in place NoInfected). This is also the color class representing the components of the model, which are all controllable. All transitions, both in the probabilistic and in the nondeterministic net, are stop transitions (in this particular model there was no need to use run transitions, which are required only when the probabilistic state change of one component comprises several steps and is modeled through a sequence of run transitions followed by the firing of one stop transition).

    Figure 1.2 MDWN probabilistic net with transition weights (a) and nondeterministic net (b) for the peer-to-peer botnet model.

    All places have color domain Mach, so that the presence of a token of color ci ∈ Mach in a place represents the current state of machine ci. For all transitions compvar = {x}, where x is the variable appearing on all arcs of both subnets.

    In Fig. 1.2(a) place NoInfected contains all the uninfected machines, while place InitInf contains all the infected machines that are not yet connected to the botnet. An uninfected machine can be infected by an active propagation bot (i.e., transition BotInitInfection, where y represents such a generic active propagation bot)¹ or by a hacker (i.e., transition InitInfectbyHum) during the initialization of the botnet. The inhibitor arc between place InfectedMach and transition InitInfectbyHum ensures that the hacker infection happens only when no infected machines are present in the system (i.e., place InfectedMach is empty).

    The firing of transition BotConnection moves a token from place InitInf to place ConnectedBot, and models the connection of an infected machine to the botnet. Subsequently, a role is associated with the new bot by the firing of transition BecomePropBot or BecomeWorkBot: the former assigns the bot to the propagation activity, while the latter assigns it to the working activity.

    A bot, whether assigned to the propagation activity or to the working activity, can either be inactive or active. Switching between these two states is modeled by the transitions BecomeActProp and BecomeInacProp (resp., BecomeActWork and BecomeInacWork). Finally, the recovery of a machine is modeled by transitions DetectActPropBot, DetectInactPropBot, DetectActWorkBot and DetectInactWorkBot.

    All the probabilistic transitions have an integer priority used to avoid the confusion problem (i.e., transitions NormalWork and BotInitInfection have a higher priority than transitions BecomeActProp and BecomeInacProp) and to reduce possible interleavings. The weights and priorities associated with the transitions are shown in Fig. 1.2(a) (bottom left corner).

    The nondeterministic net Nnd in Fig. 1.2(b) models the nondeterministic choice to assign a role to a new bot. We recall from the introduction that the nondeterminism allows us to consider all the possible choices of malicious activities. Net Nnd shares the places ConnectedBot, DecProp and DecWork with the probabilistic net Npr. For each bot in place ConnectedBot a nondeterministic choice is taken by the firing of transition SelectProp or transition SelectWork, which place a colored token in places DecProp and DecWork respectively. Transition NoSel is needed for technical reasons: it is used to conclude the nondeterministic step for all the machines not involved in the nondeterministic choice (i.e., not in place ConnectedBot). Also in this case priorities are used to reduce the possible interleavings.

    All the MNDTSs in this model include |Mach| transition instances, corresponding to a decision as to whether a just-infected machine should be a propagating bot or a working bot (if a machine has not been infected yet, or was infected in some earlier step, no decision has to be taken and stop transition NoSel is included in the MNDTS for that machine). Also all MPTSs in this model include |Mach| transition instances, each representing the probabilistic state change of a specific machine; observe that there are several self-loop transitions in the probabilistic subnet, representing the case when a given component remains in the same state.

    Although this model has a single class of homogeneous components, one could easily refine it, e.g., to represent the fact that a subset Mach0 of machines is protected by a firewall while another subset Mach1 includes the machines in a DMZ. The model could then define different probabilities of infection for the machines in the DMZ and those protected by a firewall by properly defining weightpr as dependent on the color subclasses. It could also be interesting to add a non-controllable component representing a security software alternating between inactivity periods and security check periods (according to some probability distribution): during check periods it synchronizes with the machine components to detect the malware and, in case it is found, removes it (a number of run transitions would be needed to represent the detection and removal of malware on all of the machines). Given that this component is non-controllable, it would not be involved as a component in any transition appearing in an MNDTS.

    A sketch of the probabilistic subnet modeling this extension is shown in Fig. 1.3, where we have introduced two new places (NoActive and Active), encoding the security software status, and eight transitions. In detail, the stop transitions RemNoAct, RemAct, BecomeNoActive and BecomeActive are used to model the alternation between inactivity and security check periods. Instead, the run transitions RemoveInactPropBot, RemoveActPropBot, RemoveInactWorkBot and RemoveActWorkBot have a higher priority than all the other model transitions and represent the detection and removal of malware on all the infected machines.

    Figure 1.3 A sketch of the probabilistic subnet modeling security software

    An example of a reward function for the botnet model could simply be a function providing the number of infected machines in each state (if, e.g., we want to study the strategy that maximizes the number of infected machines at time T, the average number of working bots in steady state, or the maximum probability that, once the infection starts, all machines become infected within a given time T).

    Computing and analyzing an optimal strategy using GreatSPN. We now show how the described MDWN model can be analyzed by studying the optimal strategy which maximizes a defined reward function. We used the tool MDWNsolver included in the GreatSPN framework for this purpose [6]. Among all the possible MDWN reward functions we choose the one that allows us to maximize the number of working bots, so rg is defined as: rg = m(ActWorkBot) + m(InactWorkBot).
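    In code, this reward function is a one-liner over the marking; a sketch in which the marking is represented as a hypothetical place-name-to-token-count dict:

```python
# Reward to be maximized: the total number of working bots, active or inactive,
# read off the current marking m.
def rg(marking):
    return marking["ActWorkBot"] + marking["InactWorkBot"]
```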

    The RG of this MDWN model with 10 machines has 3.42 × 10¹² states. However, efficient MDWN solution techniques [5], which automatically exploit the model symmetries to derive a lumped RG, can be used to reduce the size of the state space to 1.49 × 10⁷. Then we generate the underlying (lumped) MDP, which has 7,722 states. The whole process requires ~30 min on an Intel Core i7 and ~1 GB of memory.

    We recall that this difference in the number of states between the lumped RG of the MDWN and the MDP is due to the fact that the MDWN gives a component-based view of probabilistic and nondeterministic behaviors: complex nondeterministic and probabilistic behaviors are expressed as a composition of simpler nondeterministic or probabilistic steps, which are reduced to a single step in the final MDP.

    The optimal strategy is not trivial even though for each bot the system can choose between only two actions (i.e., SelectProp and SelectWork), and computing it requires 10 min and 300 MB of RAM to reach a precision of 10⁻⁸ through policy iteration. The resulting policy provides for each state the best choice: in general it is extremely difficult to characterize in an abstract way the set of states that lead to a given action. Hence in practice some efficient representation of the table associating each state with its optimal action should be used to apply the policy.

    In order to study this optimal strategy we have computed the average number of working and propagation bots at time T by solving in transient the Markov chain obtained from the underlying MDP by fixing the action to take in every state according to the computed optimal strategy. The curves plotted in Figure 1.4 show these two performance measures for T from 0 to 500. The dashed lines represent the same measures computed when the choice between working and propagation activity is instead equiprobable. From this figure we observe, as expected, that the average number of working bots associated with the optimal strategy is greater than that computed according to an equiprobable choice (e.g., when T = 500 it is approximately 1.42 times larger).

    Figure 1.4 Average number of working and propagation bots using an optimal strategy or equiprobable choice (0 ≤ T ≤ 500, step 5)
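    The last computation step can be sketched as follows: fixing one action per state (the computed optimal strategy) turns the MDP into a discrete-time Markov chain, whose distribution after T steps yields the expected value of a per-state reward (e.g., the number of working bots in that state). This reuses the hypothetical MDP structure from Sec. 1.2 and is a sketch, not the GreatSPN solver's implementation.

```python
def expected_reward_at(m, strategy, reward, s0, T):
    """Fix one action per state via `strategy`, push the initial distribution
    forward T steps, and take the expectation of `reward` (a function on
    states) under the resulting transient distribution."""
    dist = {s: (1.0 if s == s0 else 0.0) for s in m.states}
    for _ in range(T):
        nxt = {s: 0.0 for s in m.states}
        for s, mass in dist.items():
            if mass > 0.0:
                for t, pr in m.p[(s, strategy[s])].items():
                    nxt[t] += mass * pr
        dist = nxt
    return sum(dist[s] * reward(s) for s in m.states)
```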

    Moreover, the histograms and curves shown in Figures 1.5(a) and 1.5(b) show the effect of increasing the probability for a propagating bot to be active on the average number of working bots in steady state and on the average number of propagation bots at time T, respectively. These measures are derived by solving the MDWN model for different weights associated with transitions RemActProp and BecomeInactProp (i.e., (0.4,0.55), (0.5,0.45), (0.6,0.35), (0.7,0.25), (0.8,0.15), and (0.85,0.05), where the first element of each pair refers to RemActProp, the second to BecomeInactProp; in the labels of the figures, we indicate only RemActProp). Then the Markov chains obtained from these underlying MDPs by fixing the action to take in every state according to the computed optimal strategies are solved in steady state and transient to compute the two measures. The
