
Dependent types in Haskell: Progress Report

It was drawn to my attention that there is an active Reddit thread about the future of dependent types in Haskell. (Thanks for the heads up, @thomie!) Instead of writing a long response inline in Reddit, it seems best to address the (very knowledgeable, respectful, and all around heartening) debate here.

When can we expect dependent types in GHC?

The short answer: GHC 8.4 (2018) at the very earliest. More likely 8.6 or 8.8 (2019-20).

Here is the schedule as I see it:

  • GHC 8.2: Clean up some of the lingering issues with -XTypeInType. As I will be starting a new job (Asst. Prof. at Bryn Mawr College) this fall, I simply won’t have the opportunity to do more than some cleanup work for the next release. Also, quite frankly, I need a release cycle off from the challenge of putting in a large new feature. Polishing up -XTypeInType for release took probably an extra full month of unexpected work time! I don’t regret this in the slightest, but I could use a cycle off.
  • GHC 8.4: This depends on what other research opportunities come up in the next year and how much more attention -XTypeInType needs. If all goes well this fall and I can get a few publications out in the next year (certainly possible – I have several in the works), then I could conceivably start primary implementation of -XDependentTypes next summer. The odds that it will be in a state to merge for 8.4 are slim, however.
  • GHC 8.6: We’re now talking about a real possibility here. Assuming I start the implementation next summer, this would give me a year and a half to complete it. I desperately wish to avoid merging late in the cycle (which is what made -XTypeInType so stressful) so perhaps merging soon after GHC 8.4 comes out is a good goal. If this slips, GHC 8.8 seems like quite a likely candidate.

Regardless of the schedule, I have every intention of actually doing this work.

One major oversight in the schedule above: I have completely discounted the possibility of collaborators in coding this up. Do you wish to help make this a reality? If so, let’s talk. I’ll admit that there may be a bit of a hill for an outside collaborator to climb before I really start collaborating in earnest, so be prepared to show (or create!) evidence that you’re good at getting your hands dirty in the greasy wheels of GHC’s typechecker without very much oversight. I’ve encouraged several of you on Trac/Phab – you know who you are, I hope. For others who have not yet contributed to GHC: you’re likely best served starting smaller than implementing dependent types! But let’s talk anyway, if you’re interested.

What is the design of dependent types in Haskell?

I expect to hand in my dissertation in the next two weeks. While I am developing it in the public eye (and warmly welcome issues to be posted), there is no publicly available PDF build. Building instructions are in the README. I will post a built PDF when I hand in my draft to my thesis committee. I will be defending on September 1 and will likely make some revisions (as suggested by my committee) afterward.

Readers here will likely be most interested in Chapters 3 and 4. Chapter 3 will contain (I’m still writing it!) numerous examples of dependent types in Haskell and how they might be useful. Chapter 4 presents the design of dependent types in Haskell as a diff over current Haskell. This design is not yet up to the standard of the Haskell Reports, mostly because it is unimplemented and seems not worth the effort of formalization until we know it is implementable. For the overly curious, Chapter 5 contains a new intermediate language (to replace GHC Core) and Chapter 6 contains the type inference algorithm that will deal with dependent types. Happy summer reading!

Responses to particular comments on the Reddit thread

  • @elucidatum: How radical of a change and how much of a challenge will it be to refactor existing codebases?

    No change is necessary. All code that currently compiles with -XTypeInType will compile with -XDependentTypes. (The only reason I have to set -XTypeInType as my baseline is because of the parsing annoyance around *, which must be imported to be used with -XTypeInType.) Any refactoring will be around how much you, as the Haskell author, wish to take advantage of dependent types.

  • @jlimperg: Pervasive laziness in Haskell means that we like to use a lot of coinductive types. But this is really annoying for proof purposes because coinduction, despite the nice duality with induction, is a lot less pleasant to work with.

    Yes, this is true. My approach is, essentially, to pretend that types are inductive, not coinductive. One consequence of this decision is that proofs of type equality will have to be run. This means that including dependent types will slow down your running program. Sounds horrible, doesn’t it? It doesn’t have to, though.

    Suppose you have a function proof :: ... -> a :~: b. The function proof provides a proof that type a equals type b. If this function terminates at all, it will always produce Refl. Thus, if we knew that the function terminated, then we wouldn’t have to run it. We can’t know whether it terminates. But you, the programmer, can assert that it terminates, like this: {-# RULES "proof" proof ... = unsafeCoerce Refl #-} Now, GHC will replace any use of proof with Refl directly. Note that GHC still type-checks your proof. All this RULE does is assert termination.

    “But,” you say, “I don’t want to have to merely assert termination! If I wanted to assert correctness instead of prove it, I wouldn’t use dependent types.” My response: “Touché.” Haskell indeed is less powerful than those other dependently typed languages in this regard. Nevertheless, by using dependent types in Haskell, you still get a whole lot more compile-time guarantees than you get without dependent types. You just don’t get termination. You thus have a choice: “prove” termination at runtime by actually running your proofs, or assert it at compile time and keep your dependently typed program efficient, and still with lots of guarantees. Recall that we Haskellers have never proved termination of anything, ever, so not proving termination of a function named proof shouldn’t be all that alarming.

    Note: If RULES such as the one above become pervasive, perhaps a compiler flag can make them automatic, effectively (and concisely) assuming termination for all proofs in a module. (A concrete sketch of this proof-plus-RULES pattern appears just before the conclusion below.)

  • @jlimperg: Proving is hard

    Yes, it is. Typechecker plugins may help here. It is easy enough, however, to implement dependent types without this advanced support. As we work through the consequences of dependent types in Haskell, I’m confident the community (including me) will come up with ways to make it easier.

  • @jlimperg: As is well-known, a dependent type system is inconsistent (i.e. everything can be proven) unless all computations terminate, which is obviously not the case for general Haskell.

    Yes, this is true. Dependent types in Haskell will be inconsistent, when we view Haskell as a logic. Dependent types will still remain type-safe however. (This is because we run the proofs!) I just wish to point out here that @jlimperg suggests a syntactic check for termination: this, sadly, will not work. Haskell has too many ways for a computation to diverge, and a syntactic check can rule out only general recursion and partial pattern matches. Haskell also has infinite data types, non-strictly-positive datatypes, TypeRep (which can be abused to cause a loop), Type :: Type, and likely more back doors. I’d love to see a termination checker for Haskell, but I’m not holding my breath.

  • @jlimperg: This point is very speculative, but I strongly dislike the current flavour of ‘almost-dependent’ type-level programming in Haskell.

    So do I. Singletons are an abomination. I hate them. These are gone with my design for dependent types, and I think the resulting language has the niceties we all want in a dependently typed language (modulo termination checking).

  • @dogodel: I think it will take a while for the change to percolate into better design. … I think the most immediate benefit will be for (no surprise) expressive DSL, which are actually quite common in any modest codebase, but maybe not the “core” of the project.

    Agreed completely. It will take time for us to figure out the best way to use dependent types.

  • @sinyesdo: The best summary I can offer is that “totality” is really really important for DT languages

    This is true for those other dependently typed languages, but not for Haskell. See my response to the first of @jlimperg’s points quoted here.

  • @ccasin: The main reasons to care that your programming language is total are to make the type system consistent as a logic and to make type checking decidable. But we’ve already given up both these things in Haskell, and it’s possible to have dependent types without them.

    Agreed in full, and with @ccasin’s full post.

  • @jmite: If you want fully Dependent-Type types, why not use Idris, Agda, Coq, or F*?

    I’m in broad agreement with the responses to this point on Reddit: basically, that none of these languages have Haskell’s ecosystem or optimizing compiler. See also Section 3.3 of my dissertation.

  • @tactics: What does “fully dependent types” even mean for something like Haskell? You would have to rework the core language entirely.

    Yes, precisely. This is Chapter 5 of my dissertation.
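
Before wrapping up, here is the promised sketch of the proof-plus-RULES pattern from the @jlimperg exchange above. It uses today’s Data.Type.Equality rather than anything that awaits -XDependentTypes, and the names (plusZero and its rule) are invented just for illustration:

{-# LANGUAGE DataKinds, TypeFamilies, TypeOperators, GADTs #-}
module ProofSketch where

-- A standalone sketch; the names are illustrative only.
import Data.Type.Equality ((:~:)(..))
import Unsafe.Coerce (unsafeCoerce)

data Nat = Zero | Succ Nat

data SNat (n :: Nat) where
  SZero :: SNat 'Zero
  SSucc :: SNat n -> SNat ('Succ n)

type family (n :: Nat) + (m :: Nat) :: Nat where
  'Zero   + m = m
  'Succ n + m = 'Succ (n + m)

-- If this proof terminates, it can only ever produce Refl.
plusZero :: SNat n -> (n + 'Zero) :~: n
plusZero SZero     = Refl
plusZero (SSucc n) = case plusZero n of Refl -> Refl
{-# NOINLINE plusZero #-}

-- Assert termination: skip running the proof at runtime. GHC still
-- type-checks plusZero; the RULE only claims the call would have
-- finished (with Refl).
{-# RULES "plusZero" forall n. plusZero n = unsafeCoerce Refl #-}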

Conclusion

I hope this information helps. Do keep an eye out for my finished dissertation sometime in the next few months. And, as always, this is all a work in progress and no one should take anything as set in stone. If there’s a design decision you disagree with, please speak up!


What are type families?

I just returned from yet another fantastic ICFP week. I always have a blast at the conference. It would be easy to take the conversations I had there and turn them into a full-blown research program that would easily take me to next year’s ICFP… and then rinse and repeat!

But in this post, I want to focus on one particular thorn in my side that started bothering me more than it had before: type families. I’ve realized, in the course of many conversations, that I don’t really understand them. This post will be more questions than answers as I try to write down what ails me.

This post is Literate Haskell.

> {-# LANGUAGE TypeFamilies, DataKinds, UndecidableInstances, GADTs,
>              MultiParamTypeClasses, FunctionalDependencies,
>              FlexibleInstances, FlexibleContexts #-}
> module WhatAreTypeFamilies where

Total type families are OK

A total type family feels quite comfortable. By total here, I mean a type family whose equations cover all the possible cases and which is guaranteed to terminate. That is, a total type family is properly a function on types. Those other dependently typed languages have functions on types, and they seem to work nicely. I am completely unbothered by total type families.
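
For concreteness, a total closed type family can be as small as Boolean negation: it covers every constructor of Bool and plainly terminates.

> type family Not (b :: Bool) :: Bool where
>   Not 'True  = 'False
>   Not 'False = 'True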

Non-covering type families are strange

A non-covering type family is a type family whose patterns do not cover the whole space. Let’s consider closed and open families separately, because the issues that come up are different.

Closed type families

For example:

> type family F1 a where
>   F1 Int = Bool

Note that F1 is closed – we won’t ever be adding more equations. F1 is clearly non-covering. Yet, the type F1 Char is perfectly well formed.

> sillyId :: F1 Char -> F1 Char
> sillyId x = x

But sillyId can’t ever be called on a value. This doesn’t stop us from doing chicanery with F1 though.

> data SillyGadt a where
>   MkSillyGadt1 :: Int   -> SillyGadt (F1 Char)
>   MkSillyGadt2 :: Float -> SillyGadt (F1 Double)

Now, even though F1 Char is nonsense, SillyGadt (F1 Char) is inhabited. And GHC distinguishes between F1 Char and F1 Double.

In GHC, a type family that can’t reduce is considered “stuck”. Such a type is effectively a new type, equal only to itself, like empty datatypes a programmer might declare. The only thing different about something like F1 Char from something like Maybe Char is that F1 Char cannot be used in a type pattern (that is, the left-hand side of a type family equation or data family instance). That’s the only difference. So, when we declare F1, we create an infinite family of unique types containing F1 applied to all types of kind *, except for Int, because F1 Int is identically Bool.

I find this treatment of non-covering closed type families rather bizarre. Compare against term-level behavior. When a term-level function is non-covering, passing an argument for which there are no equations immediately results in a call to error. Yet the same thing in types behaves very differently.
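
To see the contrast concretely, here is the term-level analogue of F1:

> termF1 :: Int -> Bool
> termF1 0 = True
> -- termF1 1 typechecks, but evaluating it fails at runtime with a
> -- "Non-exhaustive patterns" error. F1 Char, by contrast, never
> -- errors; it is simply stuck.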

I’m not the only one who finds this strange: Lennart Augustsson posted GHC bug #9636 about this very issue. Indeed, my current views on the subject are strongly influenced by Lennart’s comments.

But, sometimes you want stuckness

Sadly, sometimes authors want stuck type families. I know of two examples.

  1. The bowels of GHC define this:
    type family Any :: k where {}

    Thus, Any is a closed type family, available at any kind, with no equations. GHC makes convenient use of Any. Consider the expression length []. This expression has some pointless polymorphism in it: length can be applied to a list of any type, and [] can be a list of any type. Yet, GHC must choose precisely which type these polymorphic definitions are specialized to. Not to play favorites, GHC chooses Any :: *. Such pointless polymorphism can indeed happen at any kind, so Any is polymorphic in its return kind. (Before kind polymorphism, Any existed as a special form. It’s not so special any more.)

    It’s quite important that Any is a type family, not a datatype. If it were a datatype (and we forget about the weirdness of a datatype polymorphic in its return kind), then it could be matched in type patterns, making this strangeness possible:

    type family ThreeBools (b :: Bool) :: Nat where
      ThreeBools 'True  = 1
      ThreeBools 'False = 2
      ThreeBools Any    = 3

    Bizarre! And yet this was possible in GHC 7.6 and below (modulo the closed type family, which didn’t exist yet). And it caused problems.

    Of course, with Any, we really want it to be stuck. We don’t want Any to reduce to some other type, and we don’t want uses of Any to be errors.

  2. In his work implementing a units-of-measure library via type-checker plugins, Adam Gundry uses closed type families to define type functions whose behavior is specified only via a plugin, not by GHC’s built-in type family reduction rules. He wanted empty closed type families specifically, and so implemented them within GHC. The idea is that units-of-measure with their integer powers form an abelian group, which can be solved via custom machinery but not by GHC’s rather limited left-to-right rewriting used in type family reduction.

    Once again, we want a closed type family that is stuck but is not an error. The existence of multiple such things got me ruminating on all of this some time ago. I attempted to figure this all out in a post on a GHC ticket, but that didn’t get me very far. And I won’t claim to have made any real progress since that post.
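
To make Adam’s use concrete, an empty closed type family in that spirit might look like the following sketch. The kind and the family name here are invented for illustration; they are not his actual API:

-- Some kind of units of measure, purely for illustration:
data Unit = Metre | Second | Kilogram

-- An operation on units with no equations that GHC can use to reduce
-- it; all reasoning about it would come from a type-checker plugin.
type family MulUnit (u :: Unit) (v :: Unit) :: Unit where {}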

Open type families

Having a stuck open type family makes moderately more sense than a stuck closed type family. Suppose we have this:

> type family F2 a
> type instance F2 Int = Bool

Because new equations can be added to open type families at any time, talking about F2 Char isn’t utterly bogus. But it might be nice to have to assert that an equation for F2 Char is in scope before we use it in a type.

Injective type families

Injective type families are a new feature that just got merged into GHC HEAD. Jan Stolarek, Simon Peyton Jones, and I had a paper at Haskell Symposium about them.

The idea is that you can put an injectivity annotation on a type family declaration, which is rather like a functional dependency. (The injective type family code in this post is not runnable, as it works only with GHC HEAD. The feature will be available with GHC 8.0, due out early 2016.)

type family F3 a = r | r -> a where
  F3 Int  = Bool
  F3 Bool = Int
  F3 a    = a

Note that if you know the value of F3 a, then you can discover the value of a: that’s what injectivity is all about.

Now consider this:

type family F4 a = r | r -> a where
  F4 (Maybe a) = a

Looks injective, right? Wrong. The problem is that, if this were injective, we could take a proof that F4 a ~ F4 b and get a proof of a ~ b. Now, consider F4 (Maybe (F4 Int)). This reduces to F4 Int. Thus, we have F4 (Maybe (F4 Int)) ~ F4 Int. And now, by injectivity, we get Maybe (F4 Int) ~ Int. That is, a Maybe <blah> equals an Int. Terrible!

So, in our implementation of injectivity, we exclude type families like F4. But it’s really unexpected that F4 should be rejected (at least to me). The problem, at its core, is about partiality. Note that we needed F4 Int – an un-covered case – to make the bad equality proof.

Looping type families are strange

At first blush, it seems that looping type families cause chiefly one problem: a looping GHC. But I’m very unconcerned about a looping GHC. After all, if a user has written a non-terminating type family, that amounts to a non-terminating program to be run at compile time. No one should be very surprised when a program that contains a loop does, in fact, loop.

The problem is that looping type families can actually create effectively infinite types. GHC has various protections against actually running off to infinity, but a clever programmer can defeat these protections. And infinite types can cause unsoundness. Indeed, while writing the closed type family paper, we uncovered one such unsoundness, closing bug #8162.

However, even with #8162 gone, we still have a lingering problem. Consider

> type family F5 a b where
>   F5 a a = 'True
>   F5 a b = 'False

This is a straightforward equality test, and it works in many cases. But if we ask F5 a (Maybe a) (for some unknown a), the type family doesn’t reduce. It really should! There is no way a finite a can be the same as Maybe a. But we can have an infinite a, instantiated to MaybeInf:

> type family MaybeInf where MaybeInf = Maybe MaybeInf

MaybeInf here is a perfectly well-formed nullary (0-argument) type family. If the a in F5 a (Maybe a) were instantiated with MaybeInf, then reducing to 'True would be correct. And thus GHC can’t reduce F5 a (Maybe a) to 'False, as one might like.

And this isn’t purely an academic concern. I don’t have links I can share, but this has bitten real programmers doing real work. Once again, the problem boils down to partiality: GHC must be conservative about reducing closed type families (such as F5) because of the possibility that someone, somewhere will write a non-terminating type family. If F5 a (Maybe a) were to reduce to 'False, I’m confident that a clever enough person could use this behavior to implement unsafeCoerce. Similar concerns prevent certain type families from being considered injective – see the paper for the example. If we could somehow be sure that there were no non-terminating type families, these concerns would go away.

Contrast with functional dependencies

Simply, functional dependencies don’t suffer from these problems. Let’s look at the issue around injectivity, as it’s representative. First, recast in terms of functional dependencies:

> class F4' a r | a -> r, r -> a
> instance F4' (Maybe a) a

Note the bidirectional functional dependency. The left-to-right dependency is the built-in dependency inherent in type families. The right-to-left dependency arises from the injectivity annotation.

Our problem case above was F4 (Maybe (F4 Int)). This case maps to

fundep :: (F4' Int f4_int, F4' (Maybe f4_int) result) => ()
fundep = ()

GHC helpfully tells us

Couldn't match type ‘Maybe f4_int’ with ‘Int’
arising from a functional dependency between:
  constraint ‘F4' Int f4_int’
    arising from the type signature for
                   fundep :: (F4' Int f4_int, F4' (Maybe f4_int) result) => Int
  instance ‘F4' (Maybe a) a’

So GHC notices that there is a problem. But even if it didn’t, we’d be OK. That’s because the F4' Int f4_int constraint essentially asserts that F4' maps Int to some type. Of course, no instance provides such a mapping, and so fundep would never be callable. Indeed, we can’t write any more instances of F4', as anything we could write would violate the dependencies.

The bottom line is that there is no way to get into trouble here, because the constraints necessary to use bogus functional dependencies can never be satisfied.

A radical proposal

So, what should we do about all of this? I’m inspired by the non-problems with functional dependencies. I thus propose the following. (Actually, I don’t. This is too radical. But I’m trying to provoke discussion. So take this all with a heap of salt.)

  1. Remove unassociated open type families. They’re quite awkward anyway – either they should be closed and define some sort of concrete function, or they will need a class constraint nearby to be useful.
  2. Require class constraints to use associated type families. That is, if F a is associated with a class C a, writing F Int without a C Int constraint in scope is an error. This means that the use of an associated type family in the right-hand side of another associated type family will require the class constraint in the instance context.
  3. Require closed type families to be total, with an improved termination checker. Naturally, closed type families would be unable to call partial type families.
  4. Devise a mechanism to associate a closed type family with a class. This would allow for partial closed type families, which indeed are useful.

The bottom line to all of this is that any partial type family should be associated with a class constraint. If the constraint can be satisfied, this means that the type family is defined and well-behaved at the arguments provided. With this, I believe all the problems above go away.
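
To illustrate point (2), here is a sketch of what that rule would mean in practice; the class and the names in it are invented for this example:

class Container c where
  type Elem c
  insert :: Elem c -> c -> c

-- Fine under point (2): the Container c constraint licenses every use
-- of Elem c in this signature.
insertTwice :: Container c => Elem c -> c -> c
insertTwice x = insert x . insert x

-- Rejected under point (2): Elem Int is used with no Container Int
-- constraint in scope to justify it.
-- orphan :: Elem Int -> Elem Int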

Another way to see this proposal is that functional dependencies are the Right Way to do this, and type families (at least partial ones) are wonky. But type families have much better syntax, and I’d like to keep that.

So, where to go from here? I’m not sure. This proposal is pretty radical, but it could potentially be implemented with the requisite deprecation period, etc. But, mostly, I’m curious to see what conversation this all provokes.

Planned change to GHC: merging types and kinds

I’m proud to announce that I’m nearing completion on a major patch to GHC, merging types and kinds. This patch has been in development since late 2012 (!), with many interruptions in the meantime. But I really do think it will make it for 7.12, due out early 2016. This post is meant to generate discussion in the community about the proposed changes and to get feedback about any user-facing aspects which might be of interest.

Motivation

The real motivation for writing this is that it’s a key step toward dependent types, as described in the paper laying out the theory that underlies this patch. But other motivation is close to hand as well. This patch fixes GHC bug #7961, which concerns promotion of GADTs – after this patch is merged, all types can be used as kinds, because kinds are the same as types! This patch also contributes toward the solution of the problems outlined in the wiki page for the concurrent upgrade to Typeable, itself part of the Distributed Haskell plan.

Examples

Below are some fun examples that compile with my patch. As usual, this page is a literate Haskell file, and these examples really do compile! (I haven’t yet implemented checking for the proposed extension StarInStar, which this will require in the end.)

> {-# LANGUAGE DataKinds, PolyKinds, GADTs, TypeOperators, TypeFamilies #-}
> {-# OPTIONS_GHC -fwarn-unticked-promoted-constructors #-}
> 
> -- a Proxy type with an explicit kind
> data Proxy k (a :: k) = P
> prox :: Proxy * Bool
> prox = P
> 
> prox2 :: Proxy Bool 'True
> prox2 = P
> 
> -- implicit kinds still work
> data A
> data B :: A -> *
> data C :: B a -> *
> data D :: C b -> *
> data E :: D c -> *
> -- note that E :: forall (a :: A) (b :: B a) (c :: C b). D c -> *
> 
> -- a kind-indexed GADT
> data TypeRep (a :: k) where
>   TInt   :: TypeRep Int
>   TMaybe :: TypeRep Maybe
>   TApp   :: TypeRep a -> TypeRep b -> TypeRep (a b)
> 
> zero :: TypeRep a -> a
> zero TInt            = 0
> zero (TApp TMaybe _) = Nothing
> 
> data Nat = Zero | Succ Nat
> type family a + b where
>   'Zero     + b = b
>   ('Succ a) + b = 'Succ (a + b)
> 
> data Vec :: * -> Nat -> * where
>   Nil  :: Vec a 'Zero
>   (:>) :: a -> Vec a n -> Vec a ('Succ n)
> infixr 5 :>
> 
> -- promoted GADT, and using + as a "kind family":
> type family (x :: Vec a n) ++ (y :: Vec a m) :: Vec a (n + m) where
>   'Nil      ++ y = y
>   (h ':> t) ++ y = h ':> (t ++ y)
> 
> -- datatype that mentions *
> data U = Star *
>        | Bool Bool
> 
> -- kind synonym
> type Monadish = * -> *
> class MonadTrans (t :: Monadish -> Monadish) where
>   lift :: Monad m => m a -> t m a
> data Free :: Monadish where
>   Return :: a -> Free a
>   Bind   :: Free a -> (a -> Free b) -> Free b
> 
> -- yes, * really does have type *.
> type Star = (* :: (* :: (* :: *)))

Details

More details are in the wiki page for this redesign. As stated above, I’d love your feedback on all of this!

Coincident Overlap in Type Families

Haskell allows what I will call coincident overlap among type family instances. Coincident overlap occurs when two (or more) type family instances might be applicable to a given type family usage site, but they would evaluate to the same right-hand side. This post, inspired by Andy Adams-Moran’s comment to an earlier blog post, explores coincident overlap and how to extend it (or not!) to branched instances.

This post is a literate Haskell file (though there really isn’t that much code). Paste it into a .lhs file and load it into GHCi. Because the post uses branched instances, you will need the HEAD version of GHC. (Branched instances will be included in GHC 7.8, but not before.)

> {-# LANGUAGE TypeFamilies, DataKinds, GADTs, TypeOperators #-}

Our examples will be over Bools, and we need some way to get GHC to evaluate our type families. The easiest way is to use the following singleton GADT:

> data SBool :: Bool -> * where
>   SFalse :: SBool False
>   STrue  :: SBool True

Conflict checking with type family instances

When compiling type family instances, GHC checks the instances for conflicts. To know if two instances conflict (i.e., could both match the same usage site), GHC unifies the two left-hand sides. For example, the following code is bad and is rejected:

type family F x
type instance F Int = Bool
type instance F a   = Double

Compiling the above instances gives the following error message:

Conflicting family instance declarations:
  F Int -- Defined at ...
  F a -- Defined at ...

This check is a good thing, because otherwise it would be possible to equate two incompatible types, such as Int and Bool.

Coincident overlap among unbranched instances

Here is a nice example of how coincident overlap is useful:

> type family (x :: Bool) && (y :: Bool) :: Bool
> type instance False && a     = False   -- 1
> type instance True  && b     = b       -- 2
> type instance c     && False = False   -- 3
> type instance d     && True  = d       -- 4

Although the first two equations fully define the && operation, the last two instances allow GHC to reduce uses of && that would not otherwise reduce. For example:

> and1 :: SBool a -> SBool (True && a)
> and1 x = x
> 
> and2 :: SBool a -> SBool (a && True)
> and2 x = x

and1 uses the second instance of &&, but and2 requires the fourth instance. If we comment that instance out, and2 fails to compile, because GHC cannot figure out that a && True must be a for all values of a. For various good reasons, perhaps to be explored in another post, GHC does not do case analysis on types during type inference.

How does GHC know that overlap is coincident? During the conflict check, GHC looks for a substitution that unifies two potentially-conflicting instances. In our case, the fourth and first instances would conflict under the substitution {a |-> True, d |-> False}. However, after finding the unifying substitution, GHC checks the right-hand sides under that same substitution. If they are the same, then GHC considers the overlap to be coincident and allows the instance pair. In our case, GHC applies the substitution {a |-> True, d |-> False} to False and d and discovers that both are False, and so the instances are allowed.

Coincident overlap within branched instances

When thinking about branched instances and coincident overlap, there are two possibilities to consider: coincident overlap within a branched instance and coincident overlap among two separate branched instances. Let’s consider the first case here.

Imagine we define || analogously to &&, but using one branched instance:

> type family (x :: Bool) || (y :: Bool) :: Bool
> type instance where
>   False || a     = a     -- 1
>   True  || b     = True  -- 2
>   c     || False = c     -- 3
>   d     || True  = True  -- 4

Now, let’s consider simplifying the type e || False. The first two branches don’t match, but the third does. Now, following the rule for branched instance simplification (as stated in the Haskell wiki), we check to see if any previous branches might be applicable to e || False, for any possible instantiation of e. The first branch certainly might apply, and so e || False fails to simplify. This is surely counterintuitive, because the third branch matches e || False exactly!

Just to prove this behavior, I tried running this code through GHC:

bar :: SBool a -> SBool (a || False)
bar x = x

Here is the error:

Couldn't match type ‛a’ with ‛a || 'False’

At first blush, it seems I’ve missed something important here in the implementation — allowing coincident overlap within a branched instance. But, there is a major problem with such overlap in this setting. Let’s think about how coincident overlap would work in this setting. After selecting the third branch to simplify e || False (with the substitution {c |-> e}), GHC checks to see if any previous branch could be applicable to e || False. The first branch, False || a, unifies with e || False, so it might be applicable later on. The unifying substitution is {a |-> False, e |-> False}. Now, if we wanted to check for coincident overlap, we would apply both substitutions ({c |-> e} and {a |-> False, e |-> False}) to the right-hand sides. In this case, we would see that both right-hand sides would become False, and it seems we should allow the simplification of e || False to e.

Let’s try a harder case. What if we want to simplify (G f) || False, for some type family G? The third branch matches, with the substitution {c |-> G f}. Now, we check earlier branches for applicability. The first branch is potentially applicable, if G f simplifies to False. But we can’t produce a substitution over type variables that witnesses this potential match, which is what we would need in order to check the right-hand sides. In this case, it wouldn’t be hard to imagine a substitution like {(G f) |-> False}, but that’s a slippery slope to slide down. What if f appears multiple times in the type, perhaps under different type family applications? How do we deal with this? There may well be an answer, but it would be subtle and likely quite fragile from the programmer’s point of view. So, we decided to ignore the possibility of coincident overlap within a branched instance. We were unable to come up with a compelling example of why anyone would want this feature, it seemed hard to get right, and we can just write || using separate instances, anyway.

Coincident overlap between branched instances

Consider the following (contrived) example:

type family F (x :: Bool) (y :: Bool) :: *
type instance where
  F False True = Int
  F a     a    = Int
  F b     c    = Bool
type instance F d True = Int

Is this set of instances allowable? Is that your final answer?

I believe that this set of instances wouldn’t produce any conflicts. Anything that matches F d True would have to match one of the first two branches of the branched instance, meaning that the right-hand sides coincide. However, it is difficult to reason about such cases, for human and GHC alike. So, for the sake of simplicity, we also forbid coincident overlap whenever even one instance is branched. This means that

type instance F Int = Bool

and

type instance where F Int = Bool

are very subtly different.

Andy Adams-Moran’s example

This post was partially inspired by Andy Adams-Moran’s comment in which Andy poses this example, paraphrased slightly:

> data T a
> type family Equiv x y :: Bool
> type instance where
>    Equiv a      a     = True        -- 1
>    Equiv (T b)  (T c) = True        -- 2
>    Equiv (t d)  (t e) = Equiv d e   -- 3
>    Equiv f      g     = False       -- 4

Alas, this does not work as well as one would like. The problem is that we cannot simplify, say, Equiv (T a) (T b) to True, because this matches the second branch but might later match the first branch. Simplifying this use would require coincident overlap checking within branched instances. We could move the first branch to the third position, and that would help, but not solve the problem. With that change, Equiv x x would not simplify, until the head of x were known.

So, is this the “compelling example” we were looking for? Perhaps. Andy, why do you want this? Can your use case be achieved with other means? Do you (anyone out there) have a suggestion for how to deal with coincident overlap within branched instances in a simple, easy-to-explain manner?

Ordered overlapping type family instances

I am pleased to announce that I have pushed my implementation of ordered overlapping type family instances to GHC HEAD.

This blog post is a literate Haskell file. Copy and paste into a .lhs file to try out this code. This file will only compile with GHC HEAD, however.

We need some header formalities:

> {-# LANGUAGE TypeFamilies, DataKinds, PolyKinds, TypeOperators #-}
> import Prelude hiding (zipWith)

The Problem

When writing term-level functions, it is natural to write a series of equations, each using a sequence of patterns to select which equation should be triggered when calling the function. Critically for this discussion, the first matching equation is used. Let’s use a particular favorite function of mine as an example:

> import Prelude hiding (zipWith)
> 
> zipWith :: (a -> b -> c) -> [a] -> [b] -> [c]
> zipWith f (a:as) (b:bs) = (f a b):(zipWith f as bs)
> zipWith _ _      _      = []

Let’s try to naively write this function at the type level on promoted lists:

type family ZipWith (f :: a -> b -> c) (as :: [a]) (bs :: [b]) :: [c]
type instance ZipWith f (a ': as) (b ': bs) = (f a b) ': (ZipWith f as bs)
type instance ZipWith f as        bs        = '[]

Urk. We get the following error:

Conflicting family instance declarations:
  ZipWith k k k f ((':) k a as) ((':) k b bs)
  ZipWith k k k f as bs

(The repetition of the variable k is incorrect, and has been reported as GHC bug #7524. This is not the issue we are pursuing here, however.)

The problem is really that type instance declarations are essentially unordered. The order in which they appear in a file is irrelevant to GHC. Relatedly, a programmer can define instances of the same type family in multiple modules. With separate compilation, the lack of ordering and the overlap check are necessary for type soundness. This is quite different from term-level function definition equations. All equations defining the same function not only have to be in the same module, but they must be one right after another.

The particular example here has an easy solution. Because we are matching over a closed kind ([a] at the kind level), we could simply expand out the different cases we wish to match against. However, this solution is not possible when matching over an open kind, such as *. We’ll see a useful example of overlap involving * shortly.

The Solution

GHC HEAD now contains an implementation for ordered overlapping type family instances. The example above can be written thus:

> type family ZipWith (f :: a -> b -> c) (as :: [a]) (bs :: [b]) :: [c]
> type instance where
>   ZipWith f (a ': as) (b ': bs) = (f a b) ': (ZipWith f as bs)
>   ZipWith f as        bs        = '[]

More interestingly, we can now define this:

> type family Equals (a :: k) (b :: k) :: Bool
> type instance where
>   Equals a a = True
>   Equals a b = False

Ordered overlapping type family instances allow us to define a general, write-once use-everywhere Boolean equality at the type level. Yay!

This new form of type family instance also seems to close the biggest known gap between the expressivity of functional dependencies and type families: functional dependencies have long supported overlap (through OverlappingInstances or IncoherentInstances) that type families could not. Although functional dependencies’ overlap requires ordering based on specificity and type families’ overlap is based on an explicit ordering, it would seem that any code that took advantage of functional dependencies and overlap can now be translated to use overlapping type families.
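
For comparison, here is a sketch of that classic functional-dependency encoding of type-level equality. It is not part of this file, and it needs a pile of extra extensions (MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances, UndecidableInstances, OverlappingInstances) on top of the ones enabled above, but it computes the same thing as Equals, resolved by specificity rather than by textual order:

class TypeEq a b (r :: Bool) | a b -> r
instance                 TypeEq a a True
instance (r ~ False) => TypeEq a b r

-- TypeEq Int Int r  resolves r to True  (the more specific instance);
-- TypeEq Int Bool r resolves r to False (via the catch-all instance).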

Details

  • type instance where does not work with associated types. Class instances can be sprinkled across modules, and having this form of overlap appear across modules would not be type safe in the presence of separate compilation.
  • Even within a single class instance, type instance where does not work with associated types. There is no great reason for this restriction, but it seemed unimportant. Yell if this matters to you.
  • Choosing which equation in a group to use is somewhat delicate. For example, consider the Equals type family. What if we want to simplify Equals a Int? Well, we can’t. That’s because a might (sometimes) be instantiated to Int, and if we simplified Equals a Int to False, we would have a type soundness issue. So, perhaps counterintuitively, we can’t simplify even Equals a b to False until a and b are known.

This GHC wiki page gives an outline of how to get GHC compiling on your machine so you can play with this feature in HEAD. I don’t imagine it will be in 7.6.2, but I believe it will be in 7.8.1, whenever that is released. Enjoy, and please post any feedback!

Acknowledgments

Many thanks to Simon Peyton Jones, Dimitrios Vytiniotis, and Stephanie Weirich for getting me started and helping me think through some of the finer points.

A formalization of GHC’s core language

There have been a handful of papers about System FC, the internal language used within GHC ([1] [2] [3] [4], etc.). Each of these papers uses a different characterization of FC, with variations among the definitions and judgments. Unsurprisingly, each of these formalisms differs also from the actual implementation of FC within GHC. At ICFP, Simon PJ asked me to look at GHC’s implementation of System FC, and write it up using formal notation. The result of this work (developed using Ott) now lives in the GHC repo, in docs/core-spec. There are comments in various places reminding GHC contributors to update the formalization as they update the implementation.

Do you have a stake in this work? Are you planning on reasoning about GHC’s core language? Check out the formalism here. Any feedback is welcome!

Decidable Propositional Equality in Haskell

A recent discussion on the cvs-ghc mailing list touched on decidable propositional equality in Haskell. This post will explain propositional equality and consider different encodings of this idea in Haskell.

Preface

This blog post is a literate Haskell file, compatible with GHC 7.6.1. As usual, we need some initial declarations to get off the ground.

> {-# LANGUAGE DataKinds, PolyKinds, GADTs, TypeFamilies,
>              LambdaCase #-}
> {-# OPTIONS_GHC -fwarn-incomplete-patterns #-}
> 
> data Nat = Zero | Succ Nat

Throughout the post, I will be talking about equality on Nats, but the ideas here extend to any types that would admit a good instance of Eq.

We will need length-indexed vectors to make interesting use of the ideas here:

> data Vec :: * -> Nat -> * where
>   VNil  :: Vec a Zero
>   VCons :: a -> Vec a n -> Vec a (Succ n)
> 
> safeHead :: Vec a (Succ n) -> a
> safeHead (VCons h _) = h
> 
> safeTail :: Vec a (Succ n) -> Vec a n
> safeTail (VCons _ t) = t

Note that there is no need for another clause for safeHead or safeTail because the types forbid, for example, safeHead VNil.
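
For example, GHC rejects this at compile time:

headOfEmpty = safeHead VNil
-- rejected: VNil has type Vec a Zero, and safeHead demands Vec a (Succ n)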

Propositional Equality

We are all (hopefully?) familiar with Boolean equality:

> boolEq :: Nat -> Nat -> Bool
> boolEq Zero     Zero     = True
> boolEq (Succ a) (Succ b) = boolEq a b
> boolEq _        _        = False

We can even lift this idea to the type level:

> type family BoolEq (a :: Nat) (b :: Nat) :: Bool
> type instance BoolEq Zero Zero = True
> type instance BoolEq (Succ a) (Succ b) = BoolEq a b
> type instance BoolEq Zero (Succ x) = False
> type instance BoolEq (Succ x) Zero = False

Let’s try to write a function to retrieve the second element from a vector:

cadr :: Vec a n -> a
cadr v = safeHead (safeTail v)

We get an error:

Couldn't match type `n' with 'Succ ('Succ n0)

Naturally, GHC can’t confidently apply safeHead and safeTail because we don’t know that v has at least 2 elements.

Let’s try again:

cadr :: (BoolEq n (Succ (Succ n')) ~ True) => Vec a n -> a
cadr v = safeHead (safeTail v)

Still doesn’t work:

Could not deduce (n ~ 'Succ ('Succ n0))
from the context (BoolEq n ('Succ ('Succ n')) ~ 'True)

The problem is that GHC doesn’t know that our Boolean equality function really shows the equality of two types.

This is a contrived example, though. Instead, let’s consider a program passing around explicit evidence of whether or not a list has at least two elements. If the list doesn’t, the function should return a supplied default.

To pass around evidence of a type-level Boolean quantity, we need the singleton type for Booleans:

> data SBool :: Bool -> * where
>   STrue  :: SBool True
>   SFalse :: SBool False

(If you’ve never seen singleton types before, my singletons paper on the subject contains helpful information.)

cadr :: SBool (BoolEq n (Succ (Succ n'))) -> a -> Vec a n -> a
cadr evidence deflt v = case evidence of
  STrue -> safeHead (safeTail v)
  SFalse -> deflt

Still, no go:

Could not deduce (n ~ 'Succ ('Succ n0))
from the context (BoolEq n ('Succ ('Succ n')) ~ 'True)
  bound by a pattern with constructor
             STrue :: SBool 'True,

In the end, this last example is the same as the previous. Pattern-matching on the SBool just brings the equality (BoolEq n (Succ (Succ n'))) ~ True into the context.

We need to convert Boolean equality to propositional equality, which is denoted by ~. Propositional equality is an equality among types that GHC can make use of in type checking code. To work with propositional equality, we need to make it first class, instead of just a constraint.

> data PropEq :: k -> k -> * where
>   Refl :: PropEq x x

Let’s now try to write a conversion function from Boolean equality to propositional equality:

boolToProp :: (BoolEq a b ~ True) => PropEq a b
boolToProp = Refl

Same old problem:

Could not deduce (a ~ b)
from the context (BoolEq a b ~ 'True)

What we need to do is to build up the propositional equality from pieces that GHC can easily verify are indeed equal. We need an inductive proof that our definition of Boolean equality is correct for any natural number. To write such a proof, we will need to do case analysis on a and b. To do that, in turn, we will need a singleton over natural numbers.

> data SNat :: Nat -> * where
>   SZero :: SNat Zero
>   SSucc :: SNat n -> SNat (Succ n)

Now, let’s write the inductive proof:

boolToProp :: (BoolEq a b ~ True) => SNat a -> SNat b -> PropEq a b
boolToProp SZero SZero = Refl
boolToProp (SSucc x') (SSucc y') = boolToProp x' y'

Oops:

Could not deduce (n ~ 'Succ n)
...
Could not deduce (n1 ~ 'Succ n1)

The problem is that we are returning the result of boolToProp x' y' directly from boolToProp, even though x' and y' have different types than the original arguments SSucc x' and SSucc y'. The solution is to use a pattern match on the result from the recursive call. Let’s call the type index associated with x' a' and the one associated with y' b'. Then, the recursive call gives us (Refl :: PropEq a' b'). If we pattern match on this, we get the propositional equality a' ~ b' into the context. This can be used to show Succ a' ~ Succ b' (which is really just a ~ b), so we can now use Refl once again, though at a different type:

> boolToProp :: (BoolEq a b ~ True) => SNat a -> SNat b -> PropEq a b
> boolToProp SZero SZero = Refl
> boolToProp (SSucc a') (SSucc b') =
>   case boolToProp a' b' of
>     Refl -> Refl

Great. Except now we get this warning:

Pattern match(es) are non-exhaustive
In an equation for `boolToProp':
    Patterns not matched:
        SZero (SSucc _)
        (SSucc _) SZero

The problem is that there is no possible way out in these cases, so we’ll just have to put undefined:

boolToProp SZero (SSucc _) = undefined
boolToProp (SSucc _) SZero = undefined

Wait. Now there’s a new problem:

Couldn't match type 'False with 'True
Inaccessible code in ...
Couldn't match type 'False with 'True
Inaccessible code in ...

GHC rightly determines that these cases are impossible. Why are they impossible? Because we know that BoolEq a b ~ True. In these cases, that wouldn’t be the case, so GHC can’t match False with True.

But now we are in a quandary. Without the extra matches, we get a warning (due to -fwarn-incomplete-patterns, which you should never leave home without). With the matches, we get an error. That’s silly. And others agree that it’s silly, filing bug report #3927. According to the commentary on the bug report, Simon PJ says, “The more complaints the more likely I am to put off other things to do this one!” So, at the risk of pestering dear Simon, if you are annoyed by this, please complain! The best way to complain is simply to add yourself to the Cc list of the bug report. If enough people do this, the bug will get fixed sooner. Or, even better, try to fix it yourself!

So, where does this leave us? I can’t stand a warning in my code, so we’ll suppress it with this:

> boolToProp _ _ = error "bug 3927"

Let’s try to write cadr one last time, this time armed with boolToProp:

> cadr :: SBool (BoolEq n (Succ (Succ n'))) -> SNat n
>      -> SNat n' -> a -> Vec a n -> a
> cadr evidence n n' deflt v = case evidence of
>   STrue -> case boolToProp n (SSucc (SSucc n')) of
>     Refl -> safeHead (safeTail v)
>   SFalse -> deflt

It works! Hurrah!
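
For concreteness, here is a quick usage example; cadrExample evaluates to 2, the second element of the vector:

> cadrExample :: Int
> cadrExample = cadr STrue (SSucc (SSucc SZero)) SZero 0
>                    (VCons 1 (VCons 2 VNil))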

The sad part here is that, to make it work, we needed to pass around two SNats and perform an O(n) operation (at runtime – the boolToProp “proof” runs!) to prove to GHC that the operation is valid. Can we do better?

Decidable Propositional Equality

Yes, we can.

The problem lies in the fact that we branch on a witness of Boolean equality. There is an alternative: decidable propositional equality. The idea is that instead of just type-level Booleans, decidable equality stores either evidence that two types are equal or evidence that they are not. We know how to write evidence that two types a and b are equal: PropEq a b. What’s the opposite of PropEq a b? It’s the statement that PropEq a b implies falsehood. In Haskell, we can represent falsehood with an empty type.

> data Falsehood
> type Not a = a -> Falsehood

Now, we can define decidable equality in Haskell:

> type DecidableEquality (a :: k) (b :: k) = Either (PropEq a b) (Not (PropEq a b))

We can even write a function to decide equality over Nats. Because this function produces runtime evidence, it uses singletons to work with types.

> decEq :: SNat a -> SNat b -> DecidableEquality a b
> decEq SZero SZero = Left Refl
> decEq (SSucc x') (SSucc y') = case decEq x' y' of
>   Left Refl -> Left Refl
>   Right contra -> Right (\case Refl -> contra Refl)

There’s a little magic going on here, so let’s pause and reflect. The first equation is straightforward. In the second, we recur on x' and y'. If those in fact are equal, we still unpack the Left Refl result and create a new Left Refl at a different type. We’ve seen this before, so it’s OK.

But what’s going on with Right? Once again, let’s call the type index associated with x' a', and likewise b' for y'. The return type of decEq x' y' is DecidableEquality a' b'. Because we’ve matched with Right, we know that contra must have type Not (PropEq a' b'), synonymous with PropEq a' b' -> Falsehood. We must produce something of type PropEq a b -> Falsehood. So, we write a lambda-case pattern match on the PropEq a b to get the equality a ~ b. Because a is Succ a' and b is Succ b', GHC can use a ~ b to derive a' ~ b', and thus we can call contra with Refl :: PropEq a' b'. Whew.

Now, we deal with the failure cases. If we know that, say, a is Zero and b is Succ b', then GHC rightly figures out that PropEq a b (which expands to PropEq Zero (Succ b')) is an empty type.

decEq SZero (SSucc _) = Right (\case {})
decEq (SSucc _) SZero = Right (\case {})

No go:

parse error on input `}'

GHC does not support empty pattern matches. (UPDATE [Jan 4, 2013]: Simon PJ has implemented empty pattern matches in HEAD. Yay!) A feature request to support these was submitted as bug report #2431. Drat. Explicitly pattern matching on Refl gives us an inaccessible code error (correctly), so we are forced to do this dreadful workaround:

> decEq SZero (SSucc _) = Right (\_ -> error "bug 2431")
> decEq (SSucc _) SZero = Right (\_ -> error "bug 2431")

(Incidentally, this bug seems much easier to fix than #3927, so if you have a little time, go for it!)

So, now that we’ve defined decEq, how can we use it? Let’s write a wrapper around safeHead with evidence. We’ll first need a way to eliminate Falsehood. From logic, we learn that falsehood implies anything, neatly expressed in this type:

> exFalsoQuodlibet :: Falsehood -> a

Unfortunately, even with a manifestly empty type, we can’t use an empty pattern match. So, we do this:

> exFalsoQuodlibet = \case _ -> error "bug 2431"

Here is the type for our new head operation:

> safeHead' :: DecidableEquality n Zero -> SNat n -> a -> Vec a n -> a

In this example, as opposed to above, we reason about whether n is Zero. Because we set the example up this way, the evidence when n is not Zero will be necessary for a complete definition of the safeHead' operation.

> safeHead' (Left Refl) _ deflt _ = deflt
> safeHead' (Right contra) n _ v = case n of
>   SZero -> exFalsoQuodlibet (contra Refl)
>   SSucc _ -> safeHead v

Note that this definition is complete: we can use the primitives that we have built up to eliminate the impossible case match. Of course, those primitives had to avoid empty pattern matches. However, it is easy to imagine a future where bugs #2431 and #3927 are gone and we can define this without any partial features of Haskell.

I should also note that we can’t use exFalsoQuodlibet (contra Refl) in the SSucc _ case. GHC rightly complains Couldn't match type 'Succ n1 with 'Zero in the use of Refl.
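
Finally, the evidence can be computed on the fly with decEq. For example, this small wrapper chooses between the default and the real head:

> headOrDefault :: SNat n -> a -> Vec a n -> a
> headOrDefault n deflt v = safeHead' (decEq n SZero) n deflt v

headOrDefault SZero 'x' VNil returns 'x', while headOrDefault (SSucc SZero) 'x' (VCons 'y' VNil) returns 'y'.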

Fun stuff, no?