
Introducing Pirouette 2: formal methods for smart contracts

1 July 2022 — by Alejandro Serrano Mena

Writing software is hard, and writing correct software is very hard; but like any hard thing, it becomes easier with the right tools. At Tweag we acknowledge this fact and keep making our tools sharper. In this blog post we introduce the second iteration of Pirouette, a tool for finding counterexamples to properties of Plutus smart contracts. For example, we can now mechanically produce a counterexample for the MinSwap vulnerability:

💸 COUNTEREXAMPLE FOUND
{ __resultTrue
  txMkTxInfo [P (MkTxOutRef (Id 42) 0) (MkTxOut (A 0) (V (KV [])))] [] (V (KV [])) (V (KV [P "currency" (KV [P "token" 1, P "" 0])])) (Id 0) }

Pirouette’s goals are larger than contracts, though. We’ve built it as a framework for transforming, evaluating, and proving specifications for any System F-based language — read “any powerful functional language” — of which Plutus is an example. Pirouette’s main analysis uses a recent technique, incorrectness logic, together with a symbolic evaluation engine, which allows us to reason in a necessity fashion, in addition to the more common sufficiency style. In this blog post we explain in a little more detail what this technique consists of and why it is useful for reasoning about Plutus smart contracts.

The Need for Tooling

At a very high level, a smart contract is a predicate that, given some input — which in this case may represent some assets — and contextual information, decides whether the contract can be executed or not.

contract :: Transaction -> Bool

Note that returning True in this case has very real consequences, as it allows the respective transaction to go through and its assets to be transferred accordingly. Most attacks on smart contracts can in fact be characterized as finding a corner case which makes the predicate return True for an unintended transaction. Furthermore, smart contracts are often immutable once deployed, so mistakes cannot be patched afterwards, which makes them notoriously difficult to get right.
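As a toy illustration of such a corner case (the Transaction type and its fields below are made up for this example, not Plutus’ actual types), consider a contract meant to release funds only when every signatory is the owner; because all is vacuously true on an empty list, a transaction with no signatories at all slips through:

-- A hypothetical sketch, not a real Plutus validator.
data Transaction = Transaction
  { txSignatories :: [String]  -- who signed the transaction
  , txAmount      :: Integer   -- assets being moved
  }

owner :: String
owner = "alice"

contract :: Transaction -> Bool
contract tx =
  -- BUG: `all` is True on an empty list, so a transaction signed by
  -- nobody at all is accepted.
  all (== owner) (txSignatories tx) && txAmount tx > 0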

Our job as auditors is to find those corner cases before a malicious party does. While manual audits can help increase our assurance that a smart contract is correct, they are always a best-effort activity. Every other week we see contracts being hacked in the Ethereum space, even audited ones, and that happens in spite of the existence of a number of tools and years of experience in the Ethereum community. This is by no means limited to Ethereum; if we keep using the same approaches, contracts running in the Cardano ecosystem will inevitably fall victim to the same fate.

We need to step up our game: audits are a best-effort activity, but they can become much more directed and successful with the right tools. This doesn’t give us a 100% guarantee — specifications can be wrong, the tools can have bugs, the platform itself may be exploitable — but machine-checked analysis is vastly better than manual inspection alone. The development of these tools must be an essential part of the development of high-assurance contracts.

Pirouette 2

The development of new tools for reasoning about code poses an important question at its very inception: which of the dozens of well-known techniques should we implement? And what should be the input language of the tool? Our past experience as formal methods engineers has shown us that in many cases the answer to the latter question is just a variation of another language, so it made sense to engineer Pirouette around a common core language which can be extended to fit each particular need. Currently, we need more machine-checked guarantees in our Plutus audits, so our first target was Plutus IR. Since Plutus IR is based on System F, choosing an extensible core syntax for System F seemed like a natural choice, and Pirouette can easily be extended to handle any other System F-based language.
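To give an idea of what we mean by an extensible System F core, here is a minimal sketch with hypothetical constructor names (this is not Pirouette’s actual AST): the term language is parameterized by the builtin constants of whatever concrete language is being targeted, Plutus being one possible instantiation.

-- A minimal sketch of an extensible System F core (hypothetical names).
data Ty
  = TyVar String
  | TyFun Ty Ty
  | TyForall String Ty          -- universal quantification (System F)
  deriving Show

data Term builtin
  = Var String
  | Lam String Ty (Term builtin)
  | App (Term builtin) (Term builtin)
  | TyLam String (Term builtin)           -- type abstraction
  | TyApp (Term builtin) Ty               -- type application
  | Const builtin                         -- language-specific constants
  deriving Show

-- A Plutus-flavoured instantiation supplies its own builtins.
data PlutusBuiltin = PlutusInteger Integer | PlutusByteString String
  deriving Show

type PlutusTerm = Term PlutusBuiltin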

Necessity reasoning

Now that we’ve introduced our general framework, let’s focus on one particular way in which we’ve used symbolic evaluation in Pirouette to better understand and check smart contracts. It is based on incorrectness logic, a technique introduced in 2020 by Peter O’Hearn.

Many techniques in the formal methods space use sufficiency reasoning, which follows this general pattern:

  1. Assume that some property of the inputs, or in general of the starting state, holds.
  2. Figure out what happens to the state and the result after executing the function.
  3. Check that the final state satisfies whatever property we expect.

Take the following increment function,

increment :: Integer -> Integer
increment x = x + 1

Following that outline we see that if x > 1, then result > 1. Those steps correspond to (1) assume that x > 1, (2) figure out that result == x + 1, and (3) check that x > 1 && result == x + 1 ==> result > 1. (The particularities of each step don’t matter so much here, although step 3 is usually outsourced to an external reasoning engine, like an SMT solver.) What we have proven here is that x > 1 is a sufficient condition for result > 1 to hold, but this doesn’t tell us anything about x == 0.
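As a rough illustration of what step 3 looks like when handed to an SMT solver, here is a sketch using the SBV Haskell bindings (this is only meant to show the shape of the query, not Pirouette’s actual interface):

import Data.SBV

-- Step 3 of the sufficiency argument for `increment`: for all x and
-- result, check that x > 1 && result == x + 1 implies result > 1.
sufficiencyCheck :: IO ThmResult
sufficiencyCheck = prove $ \x result ->
  ((x :: SInteger) .> 1 .&& (result :: SInteger) .== x + 1) .=> result .> 1

Running sufficiencyCheck should report that the implication holds for every input.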

In the space of smart contracts, sufficiency reasoning is not enough. Suppose we use some tool to prove that if the input conditions for the contract satisfy P, then the contract returns True; this can be useful for our understanding, but it doesn’t tell us whether our condition P is too strong, leaving some space that a malicious party can exploit. Going back to our example, are there other cases which also lead to result > 1 but shouldn’t?

The solution is to switch to necessity reasoning, which is the name we’ve given to our variation of incorrectness logic. We get one particular mode of necessity reasoning by simply taking the steps for sufficiency reasoning, but in the opposite direction:

  1. Assume that some property of the result, or in general of the final state, holds.
  2. Figure out what happens to the state and input while executing the function.
  3. Check that the inputs, and in general the initial state, satisfy whatever property we expect.

That mode of reasoning rules out x > 1 as a good specification: by (1) assuming result > 1, and (2) figuring out that result == x + 1, it does not follow that (3) x > 1; for instance, x == 1 already gives result == 2. In other words, necessity reasoning forces us to consider every possible input when writing a specification; this is extremely useful when describing the attack surface of a smart contract.
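Continuing the SBV sketch from above (again, just to show the shape of the query, not Pirouette’s interface), the necessity version of the check is falsifiable, and the solver hands back a counterexample such as x = 1:

import Data.SBV

-- Necessity: assuming result > 1 and result == x + 1, must x > 1 hold?
-- The solver answers "no" and reports a counterexample, e.g. x = 1.
necessityCheck :: IO ThmResult
necessityCheck = prove $ \x result ->
  ((result :: SInteger) .> 1 .&& result .== (x :: SInteger) + 1) .=> x .> 1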

We have previously mentioned the MinSwap LP tokens vulnerability, and how it stemmed from an incorrect check that allowed minting other tokens on the side. We rewrote the vulnerable minting policy in Pirouette’s example language and were able to find a counterexample:

💸 COUNTEREXAMPLE FOUND
{ __resultTrue
  txMkTxInfo [P (MkTxOutRef (Id 42) 0) (MkTxOut (A 0) (V (KV [])))] [] (V (KV [])) (V (KV [P "currency" (KV [P "token" 1, P "" 0])])) (Id 0) }

The tx value represents a transaction, with its inputs, outputs, fees, and minted assets. If we look carefully we can see that a “rogue” minting is performed,

[P "currency" (KV [P "token" 1, P "" 0])]

since we have values both for our own token and for another, unexpected one. A counterexample like this would not be expected by the auditors, and would have prompted a more thorough analysis of that section of the code.
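To give a feel for the shape of such a bug (the code below is a hypothetical sketch, not MinSwap’s actual minting policy), compare a check that only looks up its own token in the minted value with one that insists nothing else is minted alongside it:

import qualified Data.Map as Map

-- Simplified minted value: currency symbols map to token names map to
-- amounts (illustrative names only).
type Value = Map.Map String (Map.Map String Integer)

ownCurrency, ownToken :: String
ownCurrency = "currency"
ownToken    = "token"

-- Too permissive: checks that exactly one of our token is minted, but
-- silently ignores any other token present in the minted value.
permissiveCheck :: Value -> Bool
permissiveCheck minted =
  Map.lookup ownToken (Map.findWithDefault Map.empty ownCurrency minted)
    == Just 1

-- Stricter: the minted value must contain our token and nothing else.
strictCheck :: Value -> Bool
strictCheck minted =
  minted == Map.singleton ownCurrency (Map.singleton ownToken 1)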

Conclusion

We’re very happy to announce the new version of our Pirouette tool, which has been envisioned as a framework for transforming, analyzing, and verifying languages with a System F core. On top of that foundation we’ve implemented a recent technique — incorrectness logic.

Smart contracts provide a prime example of the need to consider all inputs to a function which lead to the desired outcome. Incorrectness logic and necessity reasoning provide us with the necessary logical foundation for that kind of specification.


This article is licensed under a Creative Commons Attribution 4.0 International license.
