Wrapping My Head Around Cubical Path Types

Last weekend, annoyed yet again at a lack of proper quotient types, I decided to dig into cubical type theory (content warning: nLab). A weekend's reading and pondering does not an expert make, but I wanted to highlight some surprising structural ways in which these theories differ from previous type theories, which none of my sources noted[1].

My main sources for this are Cubical Type Theory: a constructive interpretation of the univalence axiom by Cohen, Coquand, Huber, and Mörtberg, and Computational higher-dimensional type theory (and the accompanying preprint) by Angiuli, Harper, and Wilson. Cohen et al. present a cubical type theory where terms, types, and their judgments (including equality) are defined simultaneously (à la Coq), while Angiuli et al. present one with untyped terms and the operational semantics of their reduction first, followed by type and term equality judgments on top of those untyped programs (à la Nuprl). I assume familiarity with dependent type theories and how they're typically presented, but no prior knowledge of cubical types or HoTT (or homotopy theory, category theory, cubical sets…).

What is cubical type theory, anyway?

Cubical type theory is a computational interpretation of Voevodsky's homotopy type theory based on a cubical set model.

Simple, right?

OK, a very very condensed background that misses all the actually important points and probably gets things wrong: relatively recent developments in mathematical foundations have led to extending intuitionistic type theory (ITT) with concepts from homotopy theory to yield homotopy type theory (HoTT). For our purposes, it doesn't matter what homotopy theory is or what motivates this extension from a foundational perspective; what matters to us are these key points:

  1. Proofs of equality a = b are reinterpreted as paths from a to b, so a type has not only elements but also paths between those elements.

  2. The univalence axiom identifies equivalences between types with paths between those types.

  3. Higher inductive types (HITs) are defined by giving both point and path constructors, and they give us, among other things, the proper quotient types I was missing.

Cubical type theories (CTTs) arise from a desire to implement these concepts from HoTT in a computational context, i.e. to give the typical reduction, type formation, type ascription, etc. judgments for path types in a way that yields the appropriate definitional equalities for eliminations of paths and HITs. I ignore important notions like Kan composition and coercion here as they're not needed for the points I want to raise.

Dimensions

The first innovation is a notion of dimension: we have as terms 0, 1, and dimension variables i, j, etc. Roughly, the intuition is that dimension terms represent points on some abstract interval between 0 and 1; given the way they're used in paths I prefer to think of the endpoints as "start" and "end", but I'll stick with the typical notation here.

Different variants have different operations on dimensions (e.g. in Cohen et al. dimensions form a free De Morgan lattice generated by dimension names, so we can directly talk about ¬r, r ∨ s, etc.) and different ways of relating dimension names and restrictions to the traditional context Γ, but we won't need to worry about those. In any case we have the typical notion of substitution for dimension variables.
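To make this concrete, here is a minimal sketch in Haskell of dimension terms with the De Morgan operations from Cohen et al.; all the names here (Dim, substDim, etc.) are my own invention, not anything from the papers.

    -- Dimension terms: the endpoints, dimension names, and the De Morgan
    -- operations of Cohen et al. (other variants omit some of these).
    data Dim
      = D0             -- the endpoint 0
      | D1             -- the endpoint 1
      | DVar String    -- a dimension name i, j, ...
      | DNeg Dim       -- ¬r, which swaps the endpoints
      | DMin Dim Dim   -- r ∧ s
      | DMax Dim Dim   -- r ∨ s
      deriving (Eq, Show)

    -- Smart constructors that apply the De Morgan equations eagerly.
    dneg :: Dim -> Dim
    dneg D0 = D1
    dneg D1 = D0
    dneg e  = DNeg e

    dmin :: Dim -> Dim -> Dim
    dmin D0 _ = D0
    dmin _ D0 = D0
    dmin D1 e = e
    dmin e D1 = e
    dmin e f  = DMin e f

    dmax :: Dim -> Dim -> Dim
    dmax D1 _ = D1
    dmax _ D1 = D1
    dmax D0 e = e
    dmax e D0 = e
    dmax e f  = DMax e f

    -- Substitution at the dimension level: replace the name i with r.
    substDim :: String -> Dim -> Dim -> Dim
    substDim i r d = case d of
      D0       -> D0
      D1       -> D1
      DVar j   -> if i == j then r else DVar j
      DNeg e   -> dneg (substDim i r e)
      DMin e f -> dmin (substDim i r e) (substDim i r f)
      DMax e f -> dmax (substDim i r e) (substDim i r f)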

Values with n free dimension variables can be thought of as n-dimensional cubes: if we sweep each of n orthogonal dimensions from 0 to 1, we cover an n-cube (including its interior). The top of page 6 in Cohen et al. has a decent figure illustrating this, but don't worry if it doesn't click for you; this is just meant as a geometric intuition for those who find that helpful.

Path abstractions

Much like λ abstractions allow introducing a new variable and extending the context of the body of the abstraction, CTTs add a notion of path abstraction to introduce new dimension names. The typing rules follow the same structure as for Π types: if in some context A is a type depending[2] on dimension variable i and t : A is a value of that type depending on i, then Path_i A t(i/0) t(i/1)[3] is a type and ⟨i⟩ t : Path_i A t(i/0) t(i/1) is a value of that type in a context without i. Path application (sometimes denoted p @ r) follows the usual β reduction rules when the function is an abstraction, but we also have equalities when the argument is 0 or 1: if p : Path_i A a b then p @ 0 = a and p @ 1 = b. You can convince yourself that these rules are confluent when they overlap.
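Continuing the Haskell sketch (again with invented names), the path fragment of the term language and its untyped reduction might look like the following. Note that step can fire the β-like rule (⟨i⟩ t) @ r = t(i/r), but has nothing to say about p @ 0 when p is a stuck variable; that wrinkle is exactly what the rest of this post is about.

    -- The path fragment of the term language; ordinary λ-calculus
    -- constructors are elided.
    data Term
      = Var String
      | PLam String Term   -- ⟨i⟩ t
      | PApp Term Dim      -- p @ r
      deriving (Eq, Show)

    -- t(i/r) at the term level (capture-avoiding renaming omitted).
    substTermDim :: String -> Dim -> Term -> Term
    substTermDim i r tm = case tm of
      Var x    -> Var x
      PLam j t -> if i == j then PLam j t else PLam j (substTermDim i r t)
      PApp t s -> PApp (substTermDim i r t) (substDim i r s)

    -- One step of untyped reduction: only the β-like rule.
    -- There is no untyped rule for p @ 0 or p @ 1 when p is stuck.
    step :: Term -> Maybe Term
    step (PApp (PLam i t) r) = Just (substTermDim i r t)
    step _                   = Nothing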

We're not in Kansas anymore

There are a lot of new concepts here, but I'm still digesting some fundamental changes to the nature of judgments themselves in the new theory.

In ITTs, typing judgments can depend on type equalities but never directly on value equalities; e.g. an application is well-typed when the argument type of the function is equal to the type of the argument. Of course, types can depend on values and so type equalities reflect value equalities, but a typechecker never has to compute the values of the terms being checked directly. This is no longer true with the path abstraction typing rule! The type of ⟨i⟩ t depends on the value of t at i = 0 and i = 1, which means not only must we compute, but we must compute with terms (0 and 1) that are completely absent from the term we're checking!
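Here is roughly what that forces a checker to do, continuing the sketch above (Type, check, and normalize are hypothetical pieces of a checker, and I use plain syntactic equality where a real checker would need a proper conversion check):

    -- Path_i A a b as a type former; all other types elided.
    data Type = PathT String Type Term Term

    -- Normalize by repeatedly stepping (head reduction only; a sketch).
    normalize :: Term -> Term
    normalize t = maybe t normalize (step t)

    -- To check ⟨i⟩ t against Path_i A a b we must evaluate t at
    -- i = 0 and i = 1 (terms nowhere present in ⟨i⟩ t itself!) and
    -- compare the results against the stated endpoints.
    check :: Term -> Type -> Bool
    check (PLam i t) (PathT _ _ lhs rhs) =
      normalize (substTermDim i D0 t) == normalize lhs
        && normalize (substTermDim i D1 t) == normalize rhs
        -- (checking t against A under i is elided)
    check _ _ = False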

But path applications are even weirder. To see why, it's helpful to look at the operational semantics for paths (they call them "identifications") on page 6 of the preprint of Angiuli et al. You'll note there's no reduction rule for application when the argument is 0 or 1. And how could there be? The operational semantics are on untyped terms, so we have no way to know what to reduce M @ 1 to! But if you look at the judgments for paths in figure 4 of Angiuli et al., you will see equalities M @ 0 = P0 and M @ 1 = P1 when M : Path_t A P0 P1. So our term equality judgment is looser than the operational semantics of the underlying untyped terms! We have values that we call "equal" which do not reduce to the same normal form.
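In the sketch above this gap is very concrete: step (PApp (Var "p") D0) is Nothing, yet if we know p : Path_t A P0 P1 the equality judgment hands us an answer anyway. A type-directed helper (hypothetical, like everything here) makes that visible:

    -- Given the *type* of p, the endpoint equations become available
    -- even where untyped reduction is stuck on p @ r.
    pathEndpoint :: Type -> Dim -> Maybe Term
    pathEndpoint (PathT _ _ lhs _) D0 = Just lhs
    pathEndpoint (PathT _ _ _ rhs) D1 = Just rhs
    pathEndpoint _ _                  = Nothing

    -- With p a bare variable of type PathT "i" a lhs rhs:
    --   step (PApp (Var "p") D0)               ==> Nothing (stuck)
    --   pathEndpoint (PathT "i" a lhs rhs) D0  ==> Just lhs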

I've thought of three options for addressing this:

  1. Do as Angiuli et al. do and decouple reduction from term equality. Closed terms still compute in the expected ways, but this does seem like a sad step back from the notion of computation we have in ITT.

  2. Do as Cohen et al. do and only define reduction over typed terms. While ITT with universes does have types at the value level, this is a new regime of actually computing with types; in ITT the typed-reduction approach doesn't actually reference the types when computing the values being marked equal, just when stating their judgments.

  3. Change our (untyped) notion of path variables and/or path application to carry more information. Basically, we want the expression p @ r, where p is a variable, to hold within it the two sides of the path, either because variables of path types have to carry around the two sides as part of the term, or because application is actually just shorthand for some other fundamental term pathapp p r a b, or some combination of these, and then have the typing rules (and, if relevant, substitution rules) adapt accordingly. (A rough sketch of this follows below.)
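Here's a minimal sketch of the pathapp version of option 3, under the assumption (which the typing rules would have to enforce) that the stored endpoints agree with what β reduction would produce, so that the overlapping rules below remain confluent:

    -- Option 3: path application carries the two endpoints inside the
    -- term itself, so the endpoint equations become honest untyped
    -- reduction rules.
    data Term'
      = Var' String
      | PLam' String Term'
      | PathApp Term' Dim Term' Term'   -- pathapp p r a b
      deriving (Eq, Show)

    step' :: Term' -> Maybe Term'
    step' (PathApp _ D0 a _)          = Just a   -- fires even when p is stuck
    step' (PathApp _ D1 _ b)          = Just b
    step' (PathApp (PLam' i t) r _ _) = Just (substTermDim' i r t)
    step' _                           = Nothing

    -- Substitution extended to the new formers (renaming again omitted).
    substTermDim' :: String -> Dim -> Term' -> Term'
    substTermDim' i r tm = case tm of
      Var' x    -> Var' x
      PLam' j t -> if i == j then PLam' j t else PLam' j (substTermDim' i r t)
      PathApp p s a b ->
        PathApp (substTermDim' i r p) (substDim i r s)
                (substTermDim' i r a) (substTermDim' i r b)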

I'd like to explore option 3 further, which will require actually implementing a CTT for me to play around with.


[1] Perhaps because it's obvious to the experts.

[2] Note that this is a difference from the normal intensional equality type: in a = b, a and b must have the same type, whereas here their types need only be connected by i.

[3] t(i/r) is just normal substitution: replace all occurrences of i in t with r, with appropriate renaming.
