
Linear Constraints: the problem with scopes

23 March 2023 — by Arnaud Spiwack

This is the second of two companion blog posts to the paper Linearly Qualified Types, published at ICFP 2021 (there is also a long version, with appendices). These blog posts will dive into some subjects that were touched, but not elaborated on, in the paper. For more introductory content, you may be interested in my talk at ICFP.

  1. The problem with O(1) freeze
  2. The problem with scopes

In the example API for pure mutable arrays, the original Linear Haskell paper (Arxiv version) featured the function:

newMArray :: Int -> (MArray a %1 -> Ur b) %1 -> Ur b

This function is essential to making the API work: mutable arrays can only be pure when they are not aliased. By taking a linear function as an argument, newMArray can make sure that the MArray is, in fact, not aliased. This is because linear types only restrict how a function can use its argument; they can't restrict how the context uses an argument or a returned value. Contrast this with uniqueness types (aka ownership types) in languages like Rust or Clean, which can demand that a value not be aliased. So whenever we use linear types in Haskell to implement an API with a uniqueness requirement, we end up with such a newX function that takes a linear-function argument.
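As a concrete (untested) sketch, assuming a hypothetical observer size :: MArray a %1 -> Ur Int, here is what a well-typed use of the scoped API looks like, and the kind of aliasing it rules out:

-- ok: arr is consumed exactly once by the (hypothetical) size
sizeOfNew :: Ur Int
sizeOfNew = newMArray 57 (\arr -> size arr)

-- rejected by the typechecker: arr would be used twice, i.e. aliased
-- badAlias = newMArray 57 (\arr -> case size arr of Ur _ -> size arr)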

There is no denying that this is not aesthetically pleasing, compared with a more direct version, such as what we can write in the ST monad:

newMArrayST :: Int -> ST s (MArray a)

To be fair, it’s not as bad as it may first appear. Linear types require a value to be used exactly once. The function argument of newMArray gives an answer to the question, “By what point should I have used this MArray?” It gives the array a scope. As such, I call this argument a scope function.

In fact, Rust has scopes for its values as well. Scopes are a central enough concept in Rust that they have their own syntax ({...}). Haskell doesn’t like ad hoc constructions, so representing scopes as functions is pretty reasonable.

In addition, we often need a known scope for linear values, because we may have to react to exceptions. Suppose we wanted to take advantage of linear types to keep the MArray outside of the GC. Linearity forces us to call free, but if an exception is raised (be it a division by 0, or an asynchronous exception) during the lifetime of the array, then we may never get to free. This would leak memory. So, somewhere, we need to catch exceptions, make sure that the resource is properly deallocated, and re-raise; this is a classic bracket. I've done exactly that in my experimental implementation of out-of-GC values.
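For reference, this is the classic bracket shape in ordinary, non-linear Haskell, illustrated here with plain foreign memory rather than MArray: the cleanup runs whether the body returns normally or throws, and the exception is then re-raised.

import Control.Exception (bracket)
import Foreign.Marshal.Alloc (mallocBytes, free)
import Foreign.Ptr (Ptr)

-- Allocate n bytes, hand them to `use`, and free them no matter how `use`
-- exits (normal return, synchronous or asynchronous exception).
withBuffer :: Int -> (Ptr a -> IO b) -> IO b
withBuffer n use = bracket (mallocBytes n) free use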

Sticky ends of scopes

But there is a more serious problem with these scopes. And to see why, we need to talk about Ur. Suppose we had:

newMArrayBad :: Int -> (MArray a %1 -> b) %1 -> b

Then we could write newMArrayBad 57 id :: MArray a. No more linear types in sight: we could freely alias our mutable array, with disastrous consequences. We need¹ this Ur b type at the end to prevent the mutable array from escaping its scope. The whole point of Ur is that the type constructor Ur has type a -> Ur a but not a %1 -> Ur a, so that newMArray 57 Ur is not well-typed.
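For reference, Ur ("unrestricted") is essentially the following GADT in linear-base; because its field is not linear, the constructor gets the unrestricted type a -> Ur a:

-- Pattern-matching on Ur yields a value free of linearity restrictions;
-- conversely, wrapping a value in Ur requires the right to use it unrestrictedly.
data Ur a where
  Ur :: a -> Ur a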

So Ur helps delimit our scopes. But Ur is also a problem: it makes ends of scopes “sticky”. Let me explain. Let’s give ourselves a little bit of extra API:

read :: MArray a %1 -> Int -> (MArray a, Ur a)
write :: MArray a %1 -> (Int, a) -> MArray a
sum :: MArray Int %1 -> Ur Int

The sum function is a little arbitrary, and that it consumes the array is even more arbitrary, but it’s going to be convenient to illustrate my point.

Let’s say we need to sum the first two elements of an array as part of some computation. Our code might look something like:

newMArray 57 $ \arr ->
   ... -- some initialisation/computation involving arr
   let !(Ur (arr_summed, s)) = newMArray 2 $ \prefix ->
     let !(arr1, Ur a0) = read arr 0 in
     let !(arr2, Ur a1) = read arr 1 in
     let prefix1 = write prefix (0, a0) in
     let prefix2 = write prefix1 (1, a1) in
     let !(Ur s) = sum prefix2 in
     Ur (arr2, s) -- Uh Oh!
   in
   ... -- more computation involving arr_summed

Ah, but we’ve had to put arr2, which is linear, under an Ur. This is not permissible. Now, we’ve found our problem: no linear value can escape from the newMArray scope. Not even an array that originates in an encompassing scope. So we have to extend the scope of the inner newMArray until the end of the computation. That’s the shortcoming of this scoped newMArray function: all the nested newMArray scopes want to end at the same place. They stick together.

It isn't always possible to let scopes stick together like this: in particular, it prevents you from defining an array local to an array-manipulating function. This led linear-base to introduce functions such as:

newMArrayBeside :: Int -> MArray b %1 -> (MArray a, MArray b)

which lets us piggyback on an existing array's scope. The newMArrayBeside function works because linearity is infectious: in newMArrayBeside 57 arr :: (MArray a, MArray b), the fact that arr is linear means that the returned pair can only be used linearly as well.
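To see how this unsticks the scopes, here is the earlier summing fragment rewritten as a self-contained function (an untested sketch against the API above): the prefix array now lives in arr's scope, so the updated array can be returned directly, with no Ur boundary in the way.

sumPrefix :: MArray Int %1 -> (MArray Int, Ur Int)
sumPrefix arr =
  let !(prefix, arr0) = newMArrayBeside 2 arr in
  let !(arr1, Ur a0) = read arr0 0 in
  let !(arr2, Ur a1) = read arr1 1 in
  let prefix1 = write prefix (0, a0) in
  let prefix2 = write prefix1 (1, a1) in
  let !(Ur s) = sum prefix2 in
  (arr2, Ur s)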

The Linearly constraint

With newMArray and newMArrayBeside we have two functions to create new arrays. This isn’t satisfactory, but it gets worse. If I have another type with uniqueness constraints, say a ring buffer, then I’ll eventually need the ability to allow a ring buffer to piggyback on an array’s scope, and an array to piggyback on a ring buffer’s scope. So I’d have to add:

newMArrayBesideRingBuffer :: Int -> RingBuffer %1 -> (MArray a, RingBuffer)

More generally, with n types that can only be used linearly, I'll need n² new functions. This is simply not sustainable.
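Spelling this out for just two such types already gives the following zoo (the ring-buffer variants are made-up names for illustration), and every new linearly-used type multiplies it further:

newMArrayBeside           :: Int -> MArray b   %1 -> (MArray a, MArray b)
newMArrayBesideRingBuffer :: Int -> RingBuffer %1 -> (MArray a, RingBuffer)
newRingBufferBeside       :: Int -> RingBuffer %1 -> (RingBuffer, RingBuffer)
newRingBufferBesideMArray :: Int -> MArray a   %1 -> (RingBuffer, MArray a)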

Finding a systematic approach to shared scope for uniqueness has been one of the longest standing issues in linear-base.

The initial idea was to create a single token type, Linearly, that is always linear, and use it to constrain other types. So in this world, we have a single newMArray function:

newMArray :: Linearly %1 -> Int -> MArray a

And a single scope-creating function in which we will typically call newMArray several times:

linearly :: (Linearly %1 -> Ur b) %1 -> Ur b

Now, to be able to call newMArray several times, we also need to be able to split the Linearly token. So we’ll make Linearly a comonoid (which is emphatically not the same as being unrestricted):

dup2 :: Linearly %1 -> (Linearly, Linearly)
consume :: Linearly %1 -> ()

Now, no more sticky scope issues: open a single linearly scope in which all your arrays can live. A function that creates a local array may have a Linearly argument.
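As a sketch of what working with the token looks like (untested, against the token API above): allocating two arrays already takes an explicit dup2, and every extra allocation adds another split to thread by hand.

twoArrays :: Linearly %1 -> (MArray Int, MArray Bool)
twoArrays lin =
  let !(lin1, lin2) = dup2 lin in
  (newMArray lin1 10, newMArray lin2 20)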

But managing Linearly tokens promises to be very, very tiresome (which is part of why I've never gotten around to implementing these tokens in linear-base).

Serendipitously, while working on linear constraints, we came to realise that we could treat Linearly as a constraint. This way, there's no need to manually manage all the calls to dup2 and consume. We'd have:

newMArray :: Linearly %1 => Int -> MArray a
linearly :: (Linearly %1 => Ur b) %1 -> Ur b

Requiring a Linearly constraint means: “I’m in a scope where I can produce linear values.”
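With the constraint version (hypothetical syntax, since, as noted in the conclusion, this is not implemented in GHC yet), the same two-array function loses all the token plumbing; the constraint solver does the dup2/consume bookkeeping for us:

twoArrays :: Linearly %1 => (MArray Int, MArray Bool)
twoArrays = (newMArray 10, newMArray 20)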

As a quick aside on how research is produced: it is absolutely not clear who had this idea. We got on a call, all five co-authors, with no preconception that we wanted to solve this problem, and at some point the idea popped out of thin air.

It did require us to change the technical content of the paper to be able to handle constraints that support dup2 and consume. In the paper, it is materialised as a distinguished subset 𝒟 of such constraints (see in particular Section 5.1). Our initial design didn't support such constraints, but it turned out to be a modest change to implement; it took me a couple of hours to go through and adapt the entirety of the technical development.

Conclusion

The ugly truth about scope functions delimited by Ur is that they don’t really work very well. What I’d like you to take away from this post is that the problem isn’t so much the fact that they are arguably a little ugly, but that the scope endings are sticky.

The Linearly constraint seems to solve this issue completely; at least in principle, as it has yet to be implemented. It is syntactically lightweight, which is quite exciting to me. And I must confess that I do derive quite a bit of satisfaction from how simple it was to work out the mathematics behind it all.


  1. There are other ways to limit the scope of values, such as runST’s rank-2 quantification trick. They have their own, well-documented failings.

About the author

Arnaud Spiwack

Arnaud is Tweag's head of R&D. He describes himself as a multi-classed Software Engineer/Constructive Mathematician. He can regularly be seen in the Paris office, but he doesn't live in Paris, as he much prefers the calm and fresh air of his suburban town.


This article is licensed under a Creative Commons Attribution 4.0 International license.
