CpSc 538G: Model Checking: Algorithmic Verification and Debugging

October 3, 2016
  0.  The big picture
    A.  Turing Award lecture by Emerson, Clarke, and Sifakis
      1.  Emerson: temporal logic
        a.  Reasoning about behaviours that involve changes over time
	b.  Description of the main forms of temporal logic: CTL, LTL,
	      and CTL*
      2.  Clarke: model-checking, how to scale to large problems
        a.  in the beginning -- explicit state model checking
        b.  OBDDs -> symbolic model checking
        c.  SAT -> bounded model checking
        d.  Abstraction and refinement
      3.  Sifakis: model checking in practice
        a.  Real-world specification languages
	b.  Executable models
	c.  Scalable verification methods
	d.  A guess at the future:
          *   compositional methods
          *   correct-by-construction design

    B.  Temporal properties
      1.  An example from logic puzzles
        a.  Mary, Tom, Joe, and Fred are among the smartest kids in second grade.
	      At recess, Mary shows Tom, Joe, and Fred her pet toad.  She offers
	      that one of them can take her toad home for the weekend, but they
	      have to solve her puzzle.  She shows them five hats: two are black
	      and three are white.  She says that they'll each have to close their
	      eyes, and she'll put a hat on each of their heads.  Then they can
	      open their eyes.  Each boy can see the hats on the other two boys'
	      heads but can't see their own.  The first person to correctly state
	      the color of the hat on their head gets the toad.  BUT, if you guess
	      wrong, you have to kiss Mary.
	        When Mary tells them to open their eyes, each sees the other two
	      wearing white hats.  After a long pause, one of the boys says "My
	      hat is white."  He gets the toad.  How did he know?
        b.  I don't see a way to set this up as just a simple SAT or SMT problem.
	c.  Key observations:
	  *   They are all very smart.  They can all make logical deductions, and
	        they all know that the others can as well.
	  *   Getting the toad for a weekend is a big prize that they all want.
	        If one of them can figure out with certainty the color of his
		hat, he'll say it so he can win.
	  *   Kissing Mary is a risk that has to be avoided no matter what -- they
	        are second graders.
	  *   I'll assume that the boys are not vindictive.  In particular, none of
	        them will forego the opportunity to have Mary's toad for a weekend
		just to make one of their friends kiss her.
	  *   The number of boys with black hats must be 0, 1, or 2.
	d.  Reasoning:
	  *   If two boys were wearing black hats, the third boy would have to be
	        wearing a white hat.  The boy who sees two black hats declares his
		hat to be white.
	  *   Now, we come to the "temporal" part of this.  If, after a short pause,
	        no one declares their hat to be white, then we can eliminate the
		possibility that there are two boys wearing black hats.  The number
		of boys with black hats must be 0 or 1.
	  *   After that pause, if Tom sees that Joe is wearing a black hat and
	        Fred is wearing a white hat, then Tom concludes that his own hat
		must be white.  This is because if Tom's hat were black, Fred
		would have seen two black hats and would have already declared
		his hat to be white.  Note that there is a race between Tom
		and Fred.  The same argument applies in any situation where
		exactly one boy is wearing a black hat.
	  *   After two short pauses, i.e. one "long pause", everyone can conclude
	        that no one saw a black hat on the head of anyone else.  Everyone
		must be wearing a white hat.
	e.  Notice how the system went through a sequence of states.
	  *  We also had to make the assumption that if someone could deduce the
	       color of their hat, then they would "eventually" do so and declare it.
	  *  We will see examples of such "eventually" properties as we go on.

      2.  Examples from computer science
        a.  Mutual exclusion protocols
	  *   A parallel program running on a shared-memory machine with N cores
	        has a "critical section" where at most one core can execute at a
		time.
	  *   How can we implement a "lock" so that only one process enters its
	        critical section at a time?
	  *   Safety property: at most one process is in the critical section at
	        any time.
	  *   Liveness property: any process that is trying to enter the critical
	        section will eventually succeed.
        b.  Distributed commit protocols
	  *   In a database, a large transaction may be performed by multiple
	        machines -- for example, an ATM dispensing cash and the amount
		of the withdrawal being deducted from your account.
	  *   In case of a failure, we want to reach a state where either all
	         of the updates have occurred, or none of them have occurred.
	    +  If the bank makes the deduction from your account, but the ATM
	         doesn't give you your cash, you're unhappy.
	    +  If the ATM gives you your cash but the bank hasn't deducted it
	         from your account balance, then the bank isn't happy.
        c.  Many more examples:
	  *  cache coherence protocols
	  *  network protocols
	  *  redundant storage for persistent objects
	  *  ...

    C.  Back to the paper:
      1.  How can we specify temporal properties?
      2.  How can we model the hardware or software that is supposed to have these
            properties?
      3.  How can we verify that the model satisfies the specification?
      4.  How does this work in practice?



  I.  Allen Emerson: temporal logics.
    A.  Main ideas:
      1.  start with propositional logic
      2.  add operators for "always" and "eventually"
        a.  Let p be a proposition.  The TL formula, p, means that p holds
	      in the current state.
        b.  "next p" means that p holds in the successor of the current state.
	  *    abbreviation: X p, where "X" is a letter in "neXt".
        c.  "always p" means that p holds in this state and in every
	         subsequent state.
	  *    abbreviation: G p, where "G" means "globally"
        d.  "eventually p" means that p holds in this state or some
	       subsequent state.
	  *    abbreviation: F p, where "F" is read "finally" (or "in the
	         future") -- we can't abbreviate "eventually" as "E" because we
		 are going to use "E" for "exists".
        e.  "p until q" means that p holds in this state and will continue
	       to hold until we reach a state where q holds.
      3.  Model the system as a finite state machine (FSM)
        a.  The system state is a state of the FSM
	b.  Transitions correspond to transitions of the machine
	c.  A state may have more than one successor
	  *  We may have a concurrent system such as a distributed or
	         parallel system.
	    +  We don't know which process (or other component) will take the
	         next action.
	  *  We may have a reactive system, such as an operating system.
	    +  We don't know what requests will arrive in what order from
	         other processes.
	    +  Other examples include on-line transaction processing, web
	         servers, many more.
	  *  We may have an incomplete description of our system
	    +  We want our property to hold for *any* implementation that
	         satisfies certain requirements.
	  *  Having multiple successors is called: "non-determinism".
	d.  These ideas can be extended in various ways to infinite state
	      systems, but we'll focus on the finite-state case to keep it
	      simple.
      4.  The focus tends to be on reactive systems:
        a.  The software is not expected to terminate
	  *   It keeps running, waiting for the next request or event.
        b.  This framework has worked very well for hardware
	  *   hardware can naturally be modeled as state machines
	  *   we get non-determinism from
	    +   inputs received from other modules
	    +   communication with other clock domains where we can't know
	          the exact ordering of events.
        c.  Technical note: this means that each state has at least one
	        successor
	  *   The system doesn't terminate (but see note below)
	  *   A state may have itself as a successor
	    +   "wait" until some condition holds
	    +   or stay here forever (might as well be termination)
	  *   (a tiny sketch of such a labeled state-machine model appears
	        just below)

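      Aside: a tiny Python sketch of this kind of model -- a finite set of
        states, a labeling of each state with atomic propositions, and a
        total (every state has at least one successor), possibly
        non-deterministic transition relation.  The names and the 3-state
        example are illustrative, not from the paper.

        from dataclasses import dataclass

        @dataclass
        class Kripke:
            states: set    # finite set of states
            init: set      # initial states
            trans: dict    # state -> set of successors (>1 successor = non-determinism)
            labels: dict   # state -> set of atomic propositions true in that state

            def is_total(self):
                # technical note above: every state needs at least one successor
                return all(self.trans.get(s) for s in self.states)

        # 'wait' may loop on itself or move on -- non-determinism in the model.
        m = Kripke(
            states={'wait', 'crit', 'done'},
            init={'wait'},
            trans={'wait': {'wait', 'crit'}, 'crit': {'done'}, 'done': {'done'}},
            labels={'wait': {'trying'}, 'crit': {'in_cs'}, 'done': set()},
        )
        assert m.is_total()
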
    B.  CTL -- computation tree logic
      1.  A model describes *all* computations that are allowed by the
             next state graph.
      2.  Temporal operators and path quantifiers
        a.  Temporal operators: G, F, and X as described above
	b.  path quantifiers
	  *   A -- holds on all paths from this state
	  *   E -- there exists a path for which the property holds
	c.  In CTL, these always come as pairs:
	  *   AX p -- p holds in all immediate successors of the current state
	  *   AG p -- p holds in this state and for all states along all paths
	                  from the current state.
	  *   AF p -- on every path from the current state, p eventually
	                  holds (possibly in the current state itself).
	  *   EX p -- there exists an immediate successor of the current state
	                for which p holds
	  *   EG p -- there exists a path from the current state for which
	                p holds in every state, including the current state.
	  *   EF p -- there exists a path from the current state along which
	                p eventually holds (possibly in the current state itself).
	d.  Note that p can be a temporal formula:
          *   AG (p implies EF q) means that for all paths from the current
	        state, if we reach a state where p holds, then there must be
		a path from that state that leads to another state where q
		holds.
	  *   AG (req implies EF ack) can be seen as a weakly stated response
	        property.  It means that if a request occurs, then there is
		an allowed execution that grants it.
      3.  CTL formulas are closed under typical boolean operations.
	a.  If p and q are CTL formulas, then so are (p AND q), (p OR q), and
	       (NOT p).
	b.  There are some handy versions of De Morgan's law:
	      NOT (AX p) <=> EX (NOT p)
	      NOT (AG p) <=> EF (NOT p)
	      NOT (AF p) <=> EG (NOT p)
      4.  A bit of philosophy
        a.  Properties such as AF p are in some sense "unobservable".
          *   The property says that p will "eventually" hold, but gives no
	        bound on how long you need to wait for "eventually".
          *   If you sell me a gadget that is supposed to satisfy AF p, and
	        I try to return it saying that I've been waiting 10 years,
		and it still hasn't reached a state that satisfies p, you
		can tell me that I just need to wait longer.
          *   In practice, AF can be very useful, but some caution is needed.
        b.  Properties such as EF p or EG p are even less observable.
	  *   Again, I can come back in 10 years and say I've never seen
	        such a trace and you can tell me that I just didn't wait
		long enough.

    C.  LTL -- linear-time logic
      1.  The ingredients
        a.  atomic propositions, p, just like CTL
        b.  boolean combinations of atomic propositions.
	c.  always (G or box) and eventually (F or diamond)
	d.  *no* negation of formulas with temporal operators
      2.  For a property to hold, it must hold for all executions.
      3.  Examples
        a.  G p -- p holds in all states
        b.  G (p => F q), from any state in which p holds, we must
	       eventually reach a state in which q holds.
	  *  p must be a boolean combination of atomic propositions
	    +  I.e. no temporal operators in p
	    +  That's because (p => F q) is the same as ((NOT p) OR F q)
	         and we can't negate an LTL formula with temporal operators.
        c.  F G (req => F ack), eventually, the system reaches a state from
	      which all subsequent requests are eventually granted.  The first
	      "eventually" allows for an initialization phase.
	      
    D.  CTL vs. LTL -- it's a religion
      1.  There are properties that can be stated in CTL that can't be
              stated in LTL
	a.  Anything with a non-negated "E"
	b.  Example EF g
      2.  There are properties that can be stated in LTL that can't be
              stated in CTL
	a.  Example F G p
	b.  Consider the model (an FSM)
	      state 1 is the initial state, it is labeled with p.
	        State 1 has transitions to states 1 and 2.
	      state 2 is labeled with (NOT p).
	        State 2 has a transition to state 3.
	      state 3 is labeled with p.
	        State 3 has a transition to state 3 (a self-loop).
	c.  If we let M be the model from (b) and f be the LTL formula from (a),
	        we can show M |= f.
	  *  Every trace takes the transition from state 1 to state 2 either zero
	       or one times.
	  *  If the trace takes the transition zero times, then p holds for the
	       entire trace, and the property holds.
	  *  If the trace takes the transition once, then the next transition
	       leads to state 3.  The model stays in state 3 from then on, and the
	       property holds.
	d.  We can't express this in CTL.
	  *   Example of a failed attempt AF AG p
	  *   Translation: "On all paths eventually reach a state from which p
	        holds forever".
	  *   The problem is "reach a state" -- we can't find such a state.
	    +   There is an execution that stays in state 1 forever.
	    +   But AG p does not hold in state 1: state 1 has a path through
	          state 2, where p does not hold.
	    +   So, along the execution that stays in state 1 forever, we never
	          reach a state satisfying AG p, and AF AG p fails.
      3.  Which is better?
        a.  CTL allows negation of temporal formulas -- this enables some nice
	      manipulation of CTL formulas.
        b.  LTL is simpler, and we can argue that non-negated EF, EG, or EX
	      formulas are unobservable.
        c.  For most practical problems, either can be used
	  *   Sometimes, one is clearly a better choice for a particular
	        problem.
	  *   Each camp has its vocal proponents
	  *   Emerson clearly prefers CTL.
      4.  CTL* -- temporal logic for those who can't make up their minds
            between CTL and LTL
        a.  Allows path quantifiers without a temporal operator
        b.  Example from the paper E (F p1 AND F p2) is CTL*
	  *   This says "There exists a path, for which p1 holds at some
		state, and p2 holds at some (possibly other) state."
	  *   The equivalent CTL is (EF (p1 AND EF p2)) OR (EF (p2 AND EF p1))
	  *   The CTL enumerates the two possible orderings for encountering
		a p1 state and a p2 state.
	  *   Emerson notes that CTL* can have formulas that are exponentially
		shorter than their CTL equivalents.
        c.  CTL* has "state formulas" and "path formulas"
	  *  state formulas hold in a specific state:
	       an atomic proposition,
	       boolean combinations of state formulas
	       E f or A f, where f is a path formula
	  *  path formulas hold for a path from the current state:
	       any state formula:
		 it just needs to hold in the first state of the path;
		 the rest of the path is unconstrained
	       boolean combinations of path formulas
	       G f, F f, or X f, where f is a path formula
        d.  CTL* is strictly more expressive than CTL and LTL
	  *  Our LTL friend F G p can be written as A (F G p)
	  *  This says:
	       A:  for all paths from the initial state,
	       F:  there exists a point on that path such that
	       G:  for that state and all subsequent states
	       p:  p holds



 II.  Edmund Clarke: model checking and the state-explosion problem
    A.  How to model-check a CTL formula, f
      0.  A process of labeling states.
      1.  Initially, label all states according to atomic propositions in f.
      2.  Labeling states for formulas of the form EX p or AX p is easy:
        a.  EX p: find states that have at least one successor that satisfies p.
        b.  AX p: find states for which every successor satisfies p.
      3.  Labeling states for formulas of the form EF p
        a.  Any state that satisfies p satisfies EF p.
        b.  If state s has a successor s' such that s' satisfies EF p,
	      then s satisfies EF p as well.
        c.  Repeat step b until no new states are found -- this is a
	      "fixpoint" computation.
      4.  Labeling states for formulas of the form AG p
        a.  First label states for EF (NOT p)
	b.  Every state that is not labeled EF (NOT p) should be labeled AG p.
      5.  Labeling states for formulas of the form AF p
        a.  Any state that satisfies p satisfies AF p.
        b.  If every successor of state s satisfies AF p, then s satisfies AF p.
        c.  Repeat step b until no new states are found.
      6.  Labeling states for formulas of the form EG p
        a.  First label states for AF (NOT p)
	b.  Every state that is not labeled AF (NOT p) should be labeled EG p.
	      (A small sketch of the EF and AF fixpoint computations appears
	        just below.)
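
      Aside: a minimal Python sketch of the EF and AF fixpoint computations
        described above, over an explicit next-state graph.  The names are
        illustrative: succ maps each state to its set of successors, and
        sat_p is the set of states already labeled with p.

        def label_EF(succ, sat_p):
            """States satisfying EF p: p-states plus anything that can reach one."""
            labeled = set(sat_p)
            changed = True
            while changed:                    # fixpoint: repeat until no new labels
                changed = False
                for s, succs in succ.items():
                    if s not in labeled and succs & labeled:
                        labeled.add(s)
                        changed = True
            return labeled

        def label_AF(succ, sat_p):
            """States satisfying AF p: add states all of whose successors are labeled."""
            labeled = set(sat_p)
            changed = True
            while changed:
                changed = False
                for s, succs in succ.items():
                    if s not in labeled and succs and succs <= labeled:
                        labeled.add(s)
                        changed = True
            return labeled

        # Usage on the 3-state model from the CTL-vs-LTL discussion above
        # (1 -> {1,2}, 2 -> {3}, 3 -> {3}; p holds in states 1 and 3):
        succ = {1: {1, 2}, 2: {3}, 3: {3}}
        ag_p = set(succ) - label_EF(succ, {2})   # AG p = NOT EF (NOT p)  -> {3}
        print(label_AF(succ, ag_p))              # AF AG p -> {2, 3}: the initial
                                                 # state 1 fails it, as argued above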

    B.  Clarke's initial model checker
      1.  Explicit state: construct the explicit state graph for the model
      2.  Apply the labeling operations described above.
      3.  Clarke and his students showed that verification problems for
	      distributed algorithms that required clever manual proofs
	      could be automatically verified by his model-checking algorithm
	      (for small, finite instances).
        a.  They found real errors in published algorithms.
        b.  Raised questions of "Is this really verification?"
	  *   Only verifies particular instances of the algorithm.
	  *   Is brute-force enumeration a "proof"?
      4.  Michael Browne added counter-example generation
        a.  If a property fails, the model-checker can generate an execution
	      trace that demonstrates the failure.
	b.  For example, if AG p fails, then the model-checker generates a
	      path from the initial state to a state where p does not hold.
	c.  For example, if AF p fails, then the model-checker generates a
	      path that ends in a cycle, where p holds nowhere on the cycle.

    C.  model checking using OBDDs (w. K. McMillan)
      1.  key properties of OBDDs
        a.  OBDDs represent boolean functions
        b.  symbolic -- can represent an exponentially large number of states
	      with a small graph
        c.  canonical -- two OBDDs for the same function are identical.
	  *   Even if the functions were written differently.
	  *   E.g. the OBDD for 'x OR (y AND z)' is identical to the OBDD
	        for '(x OR y) AND (x OR z)'
	  *   (A small sketch illustrating this canonicity appears just below.)
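
      Aside: a pure-Python sketch of a reduced, ordered BDD, just to make the
        canonicity claim concrete.  It builds the ROBDD by brute-force Shannon
        expansion over a fixed variable order; real OBDD packages instead use
        the "apply" algorithm and never enumerate all assignments.  All names
        are illustrative.

        def robdd(f, order):
            """Return (root id, node table) of the ROBDD of boolean function f."""
            unique = {}                    # (var, low, high) -> node id: enforces sharing
            nodes = {0: False, 1: True}    # terminal nodes

            def mk(var, low, high):
                if low == high:            # redundant test: skip the node
                    return low
                key = (var, low, high)
                if key not in unique:
                    unique[key] = len(nodes)
                    nodes[unique[key]] = key
                return unique[key]

            def build(env, i):
                if i == len(order):
                    return 1 if f(env) else 0
                var = order[i]
                low = build({**env, var: False}, i + 1)
                high = build({**env, var: True}, i + 1)
                return mk(var, low, high)

            return build({}, 0), nodes

        # Two syntactically different formulas for the same function give the
        # same ROBDD (same root, same node table).
        f1 = lambda e: e['x'] or (e['y'] and e['z'])
        f2 = lambda e: (e['x'] or e['y']) and (e['x'] or e['z'])
        assert robdd(f1, ['x', 'y', 'z']) == robdd(f2, ['x', 'y', 'z'])
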
      2.  model checking with OBDDs
        a.  Simple example
	  *  initial state satisfies Q0
	  *  next state relation, R(s1, s2), holds iff s2 is a successor of
	       s1.  In other words, we're encoding the graph as a boolean
	       function.
	  *  let's say we want to verify Q0 => (AG p), for some proposition p.
        b.  We'll do this by showing that the negation is unsatisfiable
	  *  The negation is
	            NOT (Q0 => (AG p))
		<=> Q0 AND EF (NOT p)
        c.  Basic idea, check
	      Q0(s) AND (NOT p(s)) -- any choice of s that satisfies this
	      				is a counterexample to Q0 => (AG p)
	      Then check
	        Exists s0, s1. Q0(s0) AND R(s0, s1) AND (NOT p(s1)) 
		  -- again, any satisfying choices for s0 and s1 are a
		     counterexample to Q0 => (AG p)
	      We could continue in this way
	        Exists s0, s1, s2. Q0(s0) AND R(s0, s1) AND R(s1, s2)
		                          AND (NOT p(s2))
	      and so on, but we need more and more variables
        d.  Better approach:
	      Q0(s) is the set of all initial states
	      Q0(s) OR Exists pre. Q0(pre) AND R(pre, s)
	        is the set of all states reachable in 0 or 1 steps.
		OBDDs provide us with universal and existential
		quantification; so, we can compute the function Q1
		given the function for Q0 and R.
	        Let Q1(s) = Q0(s) OR Exists pre. Q0(pre) AND R(pre, s)
	      Let Q2(s) = Q1(s) OR Exists pre. Q1(pre) AND R(pre, s)
	        This is the set of states reachable in 0, 1, or 2 steps.
	      Let Q3(s) = Q2(s) OR Exists pre. Q2(pre) AND R(pre, s)
	        This is the set of states reachable in 0, 1, 2, or 3 steps.
	      We stop when
	        Q_i = Q_{i+1}
	      OBDDs make it easy to determine that two functions are
	      the same.  So, we know when we're done.
	      Let Q* denote this fixpoint.
        e.  To check Q0 AND (EF (NOT p)), we compute Q* as described
	      above and then compute
	        Q* AND (NOT p)
	      If this function is satisfiable, then we have shown
	        Q0 AND (EF (NOT p))
	      If this function is not satisfiable, then we have shown
	        Q0 => (AG p)
        f.  Notes:
	  *   The iteration above adds one step per round.  A common
	        optimization is "iterative squaring" of the transition
		relation: each squaring doubles the path length covered,
		so K squaring rounds reach all states within 2^K steps.
	  *   Counterexample generation is fairly straightforward.
	  *   The algorithm can be optimized to stop as soon as a
	        counterexample is encountered.
	  *   (A small set-based sketch of this reachability fixpoint
	        appears just below.)
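
      Aside: a small Python sketch of the reachability fixpoint from (d) and
        (e), with Python sets of states standing in for the OBDDs that would
        represent Q_i and R symbolically.  The toy transition relation and
        all names are illustrative.

        def reachable(q0, R):
            """Least fixpoint of Q_{i+1}(s) = Q_i(s) OR Exists pre. Q_i(pre) AND R(pre, s)."""
            q = set(q0)
            while True:
                image = {s for (pre, s) in R if pre in q}  # Exists pre. Q_i(pre) AND R(pre, s)
                q_next = q | image
                if q_next == q:     # with OBDDs this equality test is essentially free
                    return q
                q = q_next

        def check_AG(q0, R, sat_p):
            """Q0 => AG p holds iff no reachable state violates p."""
            bad = reachable(q0, R) - sat_p     # Q* AND (NOT p)
            return len(bad) == 0, bad          # bad states witness Q0 AND EF (NOT p)

        # Toy 2-bit counter that saturates at 3; check AG (state != 2) from state 0.
        R = {(0, 1), (1, 2), (2, 3), (3, 3)}
        print(check_AG({0}, R, sat_p={0, 1, 3}))   # (False, {2}): state 2 is reachable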

    D.  where do models come from
      0.  Not in the paper but a reasonable question
      1.  Hardware
        a.  state is held in flip-flops
	  *   A natural labeling is just the bits of the flip-flops
        b.  edges are determined by the logic gates
        c.  next state depends on current state and inputs
	  *   A state can have different successors for different input values
	  *   This gives rise to non-determinism in the model
        d.  other sources of non-determinism
	  *   incomplete models of some modules:
	        The cache is connected to a CPU that can be executing an
		  arbitrary program, but the CPU observes a specified
		  protocol for reads and writes
	  *   communication between modules with different clocks
	  *   asynchronous design
      2.  Software
        a.  generally need some abstraction for the values of variables.
	  *   For example, "predicate abstraction": if 'i < j' appears in the
	        source code, introduce a new boolean variable for 'i < j'
	  *   Derive constraints between the boolean variables using SMT
	        (a small SMT-based sketch appears at the end of this subsection).
        b.  The program counter is another variable
        c.  edges reflect possible transitions
	  *   typically *overapproximate* the transition relation
	  *   if the abstracted variables don't rule out a transition,
	        the edge is included.
	  *   this allows us to verify safety properties such as AG p: if the
	        abstraction satisfies the property, then the original program
		does as well, because the abstraction allows everything that
		the real program can do, and more.
	  *   handling liveness properties is more complicated
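
      Aside: a minimal sketch of deriving the abstract transition relation
        with an SMT solver, assuming the Z3 Python bindings; the predicate
        'i < j' and the statement 'i := i + 1' are a hypothetical example.
        For each pair of abstract values (b, b') we ask whether some concrete
        state realizes that transition; infeasible pairs are pruned.

        from z3 import Ints, Solver, Not, sat

        i, j = Ints('i j')
        pre  = i < j         # meaning of the boolean variable b before the statement
        post = (i + 1) < j   # meaning of b after executing i := i + 1

        def feasible(b_before, b_after):
            s = Solver()
            s.add(pre if b_before else Not(pre))
            s.add(post if b_after else Not(post))
            return s.check() == sat

        for b0 in (True, False):
            for b1 in (True, False):
                verdict = 'keep edge' if feasible(b0, b1) else 'prune edge'
                print(f"b={b0} -> b'={b1}: {verdict}")
        # Only b=False -> b'=True is pruned (i >= j cannot become i + 1 < j);
        # the other three abstract edges are kept -- an over-approximation.
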
    E.  partial order reduction (and other tricks)
      0.  Many techniques have been developed to enable model checking of
            larger models.
      1.  Partial order reduction is motivated by software verification
        a.  If we have N concurrent processes, each with P possible program
	      counter values, there could be as many as P^N possible
	      configurations.
	b.  Some of these are unreachable -- for example, locks can be used
	      to prevent concurrent read and write of the same data structure.
	c.  Often, a step of one process only changes variables that are
	      local to that process.
      2.  Key idea
        a.  Perform multiple actions that don't interact as a single "step"
	b.  We still have to consider the various orderings of communication and
	      synchronization actions, but this can greatly reduce the complexity
	      of the state space.
	c.  Many methods have been proposed for identifying "actions that don't
	      interact".  See the citations in the paper.  (A back-of-the-envelope
	      illustration of the potential savings appears just below.)
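
      Aside: a back-of-the-envelope illustration of the savings, assuming two
        processes that each perform 3 purely local steps before synchronizing.
        The numbers are hypothetical.

        from math import comb

        local_steps = 3
        # Without reduction, we explore every interleaving of the local steps:
        # choose which 3 of the 6 positions belong to process 1.
        print(comb(2 * local_steps, local_steps))   # 20 interleavings
        # With partial order reduction, the local steps commute, so a single
        # representative order suffices: one grouped "step" per process.
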
    F.  bounded model checking with SAT
      1.  SAT solvers look for *a* satisfying assignment to a formula
        a.  Unlike OBDDs, we don't have a representation of functions
	b.  This makes it harder to derive new functions from existing ones
	c.  Detecting a fixpoint is harder.
      2.  Bounded model checking asks:
              Does the property hold for the next k steps?
	a.  For example, if we want to show AG p
	  *   We check that p holds in the initial state, and
	  *   We check that p holds after each of the first k steps, by
	        unrolling the transition relation k times.
	b.  If we find a counterexample, we've found an error
	c.  If we don't find a counterexample, we have greater confidence in
	      our design, but no proof.  (A minimal SAT/SMT-based sketch of this
	      unrolling appears at the end of this subsection.)
	d.  A variation is to run the program for a while, and occasionally
	        take a snapshot of the state and run bounded model checking
	        from there.
	  *   The intuition is that many bugs are small variations of existing
	        test cases.
	e.  In some cases, we can verify a property by bounded model checking
	  *   EF p -- find a path to a state that satisfies p
	  *   EG p -- find a path to a cycle where every state along the initial
	        path and the cycle satisfies p.
      3.  Reference [37] describes how to do full model checking using a
              SAT solver.
	a.  The key idea is "interpolation" -- we'll examine this later
	b.  Such interpolation based methods are now the predominant approach
	      to model checking and remain a very active area of research.
	c.  Interpolation plays a central role in the extension of formal
	      methods to software verification.
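
      Aside: a minimal bounded-model-checking sketch, assuming the Z3 Python
        bindings.  The system (a 3-bit counter that starts at 0 and wraps
        mod 8) and the property AG (x != 6) are a toy example; the transition
        relation is unrolled k times and each step is checked for a violation.

        from z3 import BitVec, Solver, sat

        K = 10
        xs = [BitVec(f'x_{t}', 3) for t in range(K + 1)]   # one state copy per step

        s = Solver()
        s.add(xs[0] == 0)                                  # initial states Q0
        for t in range(K):
            s.add(xs[t + 1] == xs[t] + 1)                  # transition relation R(x_t, x_{t+1})

        for t in range(K + 1):                             # look for a step violating p
            s.push()
            s.add(xs[t] == 6)
            if s.check() == sat:
                m = s.model()
                print(f"counterexample at step {t}:",
                      [m.evaluate(xs[u]) for u in range(t + 1)])
                break
            s.pop()
        else:
            print(f"no violation of AG (x != 6) within {K} steps (no proof beyond the bound)")
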
    G.  abstraction refinement and CEGAR



III.  Joseph Sifakis "The Quest For Correctness: Challenges and Perspectives"
    A.  Big picture:
      1.  look at model checking in practice, i.e. in industry
      2.  make some guesses about the future
    B.  Specification languages
      1.  Temporal logic is used for assertions in HW and SW development
      2.  PSL assertions in hardware description languages
        a.  Combine state and path formulas
	b.  Syntax and simplifications to make it accessible to real designers
      3.  Statecharts and their derivatives for high-level software design
    C.  Executable models
      1.  An essential trait of model checking is that models must be executable
      2.  For hardware, RTL (register-transfer-level) descriptions in
	    Verilog or VHDL provide such a model.
      3.  For C, Java, Python, or the greatest-programming-language-of-the-week,
	    abstraction is essential to get a finite model.
    D.  Scalable verification
    E.  Compositionality and correct-by-construction design methods

