Counterexample-guided Abstraction Refinement for Model Checking ATL
| Content Provider | Semantic Scholar |
|---|---|
| Copyright Year | 2007 |
| Abstract | Alternating-time temporal logic (ATL) [1] is designed to specify collaborative as well as adversarial interaction between different components of a distributed system. Checking the validity of an alternating-time property in an explicit model is cheap: linear in the size of the formula and the model. Alternating transition systems (ATS) are used to model reactive components and their interactions. Given an ATS A and an ATL specification φ, we can construct a weak two-player game G_A. (A weak game is an infinite game over a finite arena with a particularly simple acceptance mechanism.) The two players represent a ‘verifier’, who tries to demonstrate that A satisfies φ, and a ‘falsifier’, who tries to disprove this claim. Checking validity can be reduced to finding a winning strategy for either player in the model-checking game G_A. This strategy also serves as a witness for the correctness of the model (if the ‘verifier’ wins) or as a counterexample (if the ‘falsifier’ wins). The size of the arena of this game is bilinear in the size of the ATS and the specification, and solving weak games takes only linear time. However, this seemingly low complexity is misleading, since models are usually described symbolically rather than explicitly. (For the specification language used in MOCHA, for example, the ATL model-checking complexity is provably exponential in the size of the specification.) The simplest method to model-check a symbolic model S is to first transform it into an explicit representation A, and then construct the model-checking game G_A. This method is, however, intractable if the explicit state space becomes infinite, and inefficient in most other cases. Thus, methods for coping with large (and infinite) state spaces are crucial for bringing model checking into practice. The common approach to treating large and infinite state spaces in model checking is abstraction: we construct a “simpler” model for which it is cheaper to check the desired property. Counterexample-guided refinement is a technique for the automatic construction of an abstraction that is precise enough to prove (or disprove) the given property. One starts with a coarse abstraction; if the attempt to verify the abstract model fails, there is an abstract counterexample, which is either genuine (that is, it has a concrete counterpart) or spurious. In the latter case, the spurious counterexample is exploited to guide the refinement of the abstract model. (A schematic sketch of this refinement loop is given below the metadata table.) |
| File Format | PDF, HTM / HTML |
| Alternate Webpage(s) | http://react.cs.uni-sb.de/fileadmin/user_upload/react/stud-proj-ar-atl.pdf |
| Language | English |
| Access Restriction | Open |
| Content Type | Text |
| Resource Type | Article |
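
The abstract describes the generic counterexample-guided abstraction refinement (CEGAR) loop in prose. The following is a minimal sketch of that loop, not the paper's concrete algorithm for ATL/ATS: the helpers `initial_abstraction`, `model_check`, `is_spurious`, and `refine` are hypothetical placeholders standing in for the game-based machinery (abstract model-checking games, falsifier strategies as counterexamples) that the paper develops.

```python
def cegar(concrete_model, phi,
          initial_abstraction, model_check, is_spurious, refine):
    """Sketch of a generic CEGAR loop: verify property `phi` on
    `concrete_model` by iteratively refining an abstract model.
    All four helper callables are assumptions for illustration."""
    abstraction = initial_abstraction(concrete_model)
    while True:
        # Check the (smaller) abstract model. The checker either reports
        # success or returns an abstract counterexample, e.g. a winning
        # strategy for the 'falsifier' in the model-checking game.
        holds, counterexample = model_check(abstraction, phi)
        if holds:
            # Property verified on the abstraction (and, assuming the
            # abstraction is conservative, on the concrete model).
            return True, None
        if not is_spurious(counterexample, concrete_model):
            # Genuine counterexample: it has a concrete counterpart.
            return False, counterexample
        # Spurious counterexample: use it to refine the abstraction so
        # that the same spurious behaviour is excluded next iteration.
        abstraction = refine(abstraction, counterexample)
```

The loop terminates either with a verified abstraction or with a genuine counterexample; whether a positive abstract result transfers to the concrete model depends on the abstraction being sound for the property class in question, which the sketch simply assumes.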