Welcome to Agda’s documentation!
Overview
Note
The Agda User Manual is a work-in-progress and is still incomplete. Contributions, additions and corrections to the Agda manual are greatly appreciated. To do so, please open a pull request or issue on the GitHub Agda page.
This is the manual for the Agda programming language, its type-checking, compilation and editing system, and related resources/tools. The latest PDF version of this manual can be downloaded from the GitHub Actions page (instructions on how to find them).
You can find many useful resources on the Agda Wiki, such as tutorials, introductions, publications and books. If you’re new to Agda, you should make use of the resources on the Agda Wiki and the chapter Getting Started instead of the chapter Language Reference.
A description of the Agda language is given in chapter Language Reference. Guidance on how the Agda editing and compilation system can be used can be found in chapter Tools.
Getting Started
What is Agda?
Agda is a dependently typed programming language. It is an extension of Martin-Löf’s type theory and is the latest in the tradition of languages developed in the programming logic group at Chalmers. Other languages in this tradition are Alf, Alfa, Agda 1, and Cayenne. Some other loosely related languages are Coq, Epigram, and Idris.
Because of strong typing and dependent types, Agda can be used as a proof assistant, allowing one to prove mathematical theorems (in a constructive setting) and to run such proofs as algorithms.
Dependent types
Typing for programmers
Type theory is concerned both with programming and logic. We see the type system as a way to express syntactic correctness: a type-correct program has a meaning. Lisp is a totally untyped programming language, and so are its derivatives like Scheme. In such languages, if f is a function, one can apply it to anything, including itself. This makes it easy to write programs (almost all programs are well formed), but it also makes it easy to write erroneous programs. Programs will raise exceptions or loop forever, and it is very difficult to analyze where the problems are.
Haskell and ML and their derivatives, like Standard ML and Caml, are typed languages, where functions come with a type expressing what type of arguments the program expects and what the result type is.
Between these two families of languages lie languages which may or may not have a typing discipline. Most imperative languages do not come with a rich type system. For example, C is typed, but very loosely (almost everything is an integer or a variant thereof). Moreover, its type system does not allow the definition of trees or graphs without using pointers.
All these languages are examples of partial languages, i.e., the result of computing the value of an expression e of type T is one of the following:

- the program terminates with a value in the type T
- the program e does not terminate
- the program raises an exception caused by an incomplete definition – for instance, a function is only defined for positive integers but is applied to a negative integer
Agda and other languages based on type theory are total languages in the sense that a program e of type T will always terminate with a value in T. No runtime error can occur, and no non-terminating programs can be written (unless explicitly requested by the programmer).
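To make the contrast concrete, here is a small sketch in Agda (the name pred and the module name are our own, not from the text): a predecessor function that is total because it also covers zero.

```agda
module TotalPred where

open import Agda.Builtin.Nat using (Nat; zero; suc)

-- Total: both constructors of Nat are covered, so evaluation always
-- terminates with a value; there is no "undefined for zero" error.
pred : Nat → Nat
pred zero    = zero
pred (suc n) = n
```

In a partial language the zero case could simply be left out, and the function would raise an exception at runtime; Agda’s coverage checker rejects such a definition.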
Dependent types
Dependent types are introduced by having families of types indexed by objects in another type. For instance, we can define the type Vec n of vectors of length n. This is a family of types indexed by objects in Nat (a type parameterized by natural numbers).
Having dependent types, we must generalize the type of functions and the type of pairs.

The dependent function space (a : A) -> (B a) is the type of the functions taking an argument a in a type A and returning a result in B a. Here, A is a type, and B is a family of types indexed by elements in A.
For example, we could define the type of n x m matrices as a type indexed by two natural numbers. Call this type Mat n m. The function identity, which takes a natural number n as an argument and produces the n x n identity matrix, is then a function of type identity : (n : Nat) -> (Mat n n).

Remark: We could, of course, just specify the identity function with the type Nat -> Nat -> Mat, where Mat is the type of matrices, but this is not as precise as the dependent version.
The advantage of using dependent types is that it makes it possible to express properties of programs in the typing system. We saw above that it is possible to express the type of square matrices of length n. It is also possible to define the type of operations on matrices so that the lengths are correct. For instance, the type of matrix multiplication is

∀ {i j k} → (Mat i j) -> (Mat j k) -> (Mat i k)

and the type system can check that a program for matrix multiplication really takes arguments of the correct size. It can also check that matrix multiplication is only applied to matrices where the number of columns of the first argument is the same as the number of rows of the second argument.
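The matrix types discussed above can be sketched in Agda as follows. This is only an illustration: Mat, identity and mult are the hypothetical names from the text, left abstract as postulates rather than given a concrete representation.

```agda
module MatSketch where

open import Agda.Builtin.Nat using (Nat)

postulate
  -- Mat n m: the type of n × m matrices, a family of types
  -- indexed by two natural numbers (representation left abstract).
  Mat : Nat → Nat → Set

  -- The dependent type guarantees the result is a square matrix.
  identity : (n : Nat) → Mat n n

  -- Multiplication is only accepted when the column count of the
  -- first matrix matches the row count of the second.
  mult : ∀ {i j k} → Mat i j → Mat j k → Mat i k
```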
Dependent types and logic
Thanks to the Curry-Howard correspondence, one can express a logical specification using dependent types. For example, using only typing it is possible to define:
- equality on natural numbers
- properties of arithmetical operations
- the type (n : Nat) -> (PrimRoot n) consisting of functions computing primitive roots in modular arithmetic.
Of course, a program of the above type will be more difficult to write than the corresponding program of type Nat -> Nat, which produces a natural number which is a primitive root. However, the difficulty can be compensated by the fact that the program is guaranteed to work: it cannot produce something which is not a primitive root.
On a more mathematical level, we can express formulas and prove them using an algorithm. For example, a function of type (n : Nat) -> (PrimRoot n) is also a proof that every natural number has a primitive root.
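The statement above can be sketched in Agda like this (PrimRoot is the hypothetical type family named in the text, postulated here rather than defined):

```agda
module PrimRootSketch where

open import Agda.Builtin.Nat using (Nat)

postulate
  -- Hypothetical: PrimRoot n is intended to be the type of
  -- primitive roots in arithmetic modulo n.
  PrimRoot : Nat → Set

  -- An inhabitant of this type is at once an algorithm computing
  -- primitive roots and a proof that each n has one.
  primRoot : (n : Nat) → PrimRoot n
```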
Installation
There are several ways to install Agda:
- Using a released source package from Hackage
- Using a binary package prepared for your platform
- Using the development version from the Git repository
Agda can be installed using different flags (see Installation Flags).
Hint
If you want a sneak peek of Agda without installing it, try the Agda Pad.
Installation from source
Prerequisites
You need recent versions of the following programs to compile Agda:
- GHC: https://www.haskell.org/ghc/
  - Agda has been tested with GHC 8.0.2, 8.2.2, 8.4.4, 8.6.5, 8.8.4, 8.10.7, 9.0.1 and 9.2.1.
- cabal-install: https://www.haskell.org/cabal/
- Alex: https://www.haskell.org/alex/
- Happy: https://www.haskell.org/happy/
- GNU Emacs: http://www.gnu.org/software/emacs/
You should also make sure that programs installed by cabal-install are on your shell’s search path.
Non-Windows users need to ensure that the development files for the C libraries zlib and ncurses are installed (see http://zlib.net and http://www.gnu.org/software/ncurses/). Your package manager may be able to install these files for you. For instance, on Debian or Ubuntu it should suffice to run

apt-get install zlib1g-dev libncurses5-dev
as root to get the correct files installed.
Optionally one can also install the ICU library, which is used to implement the count-clusters flag. Under Debian or Ubuntu it may suffice to install libicu-dev. Once the ICU library is installed one can hopefully enable the count-clusters flag by giving the enable-cluster-counting flag to cabal install.
Installing the agda and the agda-mode programs
After installing the prerequisites you can install the latest released version of Agda from Hackage.
Using cabal
For installing the agda and the agda-mode programs using cabal run the following commands:

cabal update
cabal install Agda
If you use Nix-style Local Builds, by using Cabal ≥ 3.0 or by running cabal v2-install, you’ll get the following error when compiling with the GHC backend:

Compilation error:

MAlonzo/RTE.hs:13:1: error:
    Failed to load interface for ‘Numeric.IEEE’
    Use -v to see a list of the files searched for.
This is because packages are sandboxed in $HOME/.cabal/store and you have to explicitly register required packages in a GHC environment. This can be done by running the following command:

cabal v2-install --lib Agda ieee754
This will register ieee754 in the GHC default environment.
You may want to keep the default environment clean, e.g. to avoid conflicts with other installed packages. In this case you can create a separate Agda environment by running:

cabal v2-install --package-env agda --lib Agda ieee754
You then have to set the GHC_ENVIRONMENT environment variable when you invoke Agda:

GHC_ENVIRONMENT=agda agda -c hello-world.agda
Note
Actually it is not necessary to register the Agda library, but doing so forces Cabal to install the same version of ieee754 as used by Agda.
Warning
If you are installing Agda using Cabal on Windows, depending on your system locale setting, cabal install Agda may fail with an error message:

hGetContents: invalid argument (invalid byte sequence)

If this happens, you can try changing the console code page to UTF-8 using the command:

CHCP 65001
Using stack
For installing the agda and the agda-mode programs using stack run the following commands:

cabal get Agda-X.Y.Z
cd Agda-X.Y.Z
stack --stack-yaml stack-a.b.c.yaml install

replacing X.Y.Z and a.b.c with the Agda version on Hackage and your GHC version, respectively.
Running the agda-mode program

Warning: Installing agda-mode via melpa is discouraged. It is strongly advised to install agda-mode for emacs as described below.
After installing the agda-mode program using cabal or stack, run the following command:

agda-mode setup
The above command tries to set up Emacs for use with Agda via the Emacs mode. As an alternative you can copy the following text to your .emacs file:
(load-file (let ((coding-system-for-read 'utf-8))
              (shell-command-to-string "agda-mode locate")))
It is also possible (but not necessary) to compile the Emacs mode’s files:

agda-mode compile
This can, in some cases, give a noticeable speedup.
Warning: If you reinstall the Agda mode without recompiling the Emacs Lisp files, then Emacs may continue using the old, compiled files.
Installing the standard library
Installing the standard library, should you choose to use it, is an additional step using a separate repository.
Prebuilt Packages and System-Specific Instructions
Arch Linux
The following prebuilt packages are available:
However, due to significant packaging bugs such as this, you might want to use alternative installation methods.
Debian / Ubuntu
Prebuilt packages are available for Debian and Ubuntu from Karmic onwards. To install:

apt-get install agda-mode

This should install Agda and the Emacs mode.

The standard library is available in Debian and Ubuntu from Lucid onwards. To install:

apt-get install agda-stdlib
More information:
Reporting bugs:
Please report any bugs to Debian, using:

reportbug -B debian agda
reportbug -B debian agda-stdlib
Fedora
Agda is packaged in Fedora (since before Fedora 18).

yum install Agda

will pull in emacs-agda-mode and ghc-Agda-devel.
FreeBSD
Packages are available from FreshPorts for Agda and the Agda standard library.
Nix or NixOS
Agda is part of the Nixpkgs collection that is used by https://nixos.org/nixos. There are two ways to install Agda from nix:
The new way: If you are tracking nixos-unstable or nixpkgs-unstable (the default on macOS), or you are using NixOS version 20.09 or above, then you should be able to install Agda (and the standard library) via:

nix-env -f "<nixpkgs>" -iE "nixpkgs: (nixpkgs {}).agda.withPackages (p: [ p.standard-library ])"
agda-mode setup
echo "standard-library" > ~/.agda/defaults

The second command tries to set up the Agda emacs mode. Skip this if you don’t want to set up the emacs mode. See Installation from source above for more details about agda-mode setup. The third command sets the standard-library as a default library so it is always available to Agda. If you don’t want to do this you can omit this step and control library imports on a per-project basis using an .agda-lib file in each project root.

If you don’t want to install the standard library via nix then you can just run:

nix-env -f "<nixpkgs>" -iA agda
agda-mode setup
For more information on the Agda infrastructure in nix, and how to manage and develop Agda libraries with nix, see https://nixos.org/manual/nixpkgs/unstable/#agda. In particular, the agda.withPackages function can install more libraries than just the standard library. Alternatively, see Library Management for how to manage libraries manually.

The old way (deprecated): As Agda is a Haskell package available from Hackage you can install it like any other Haskell package:

nix-env -f "<nixpkgs>" -iA haskellPackages.Agda
agda-mode setup

This approach does not provide any additional support for working with Agda libraries. See Library Management for how to manage libraries manually. It also suffers from this open issue which the ‘new way’ does not.
Nix is extremely flexible and we have only described how to install Agda globally using nix-env. One can also declare which packages to install globally in a configuration file, or pull in Agda and some relevant libraries for a particular project using nix-shell.
The Agda git repository is a Nix flake to allow using a development version with Nix. The flake has the following outputs:

- overlay: A nixpkgs overlay which makes haskellPackages.Agda (which the top-level agda package depends on) be the build of the relevant checkout.
- haskellOverlay: An overlay for haskellPackages which overrides the Agda attribute to point to the build of the relevant checkout. This can be used to make the development version available at a different attribute name, or to override Agda for an alternative haskell package set.
OS X
Homebrew is a free and open-source software package management system that provides prebuilt packages for OS X. Once it is installed in your system, you are ready to install Agda. Open the Terminal app and run the following commands:
brew install agda
agdamode setup
This process should take less than a minute, and it installs Agda together with
its Emacs mode and its standard library. For more information about the brew
command, please refer to the Homebrew documentation
and Homebrew FAQ.
By default, the standard library is installed in the folder /usr/local/lib/agda/. To use the standard library, it is convenient to add the location of the agda-lib file /usr/local/lib/agda/standard-library.agda-lib to the ~/.agda/libraries file, and write the line standard-library in the ~/.agda/defaults file. To do this, run the following commands:

mkdir -p ~/.agda
echo $(brew --prefix)/lib/agda/standard-library.agda-lib >>~/.agda/libraries
echo standard-library >>~/.agda/defaults
Please note that this configuration is not performed automatically. You can learn more about using the standard library or using a library in general.
It is also possible to install with the command-line option keyword HEAD. This requires building Agda from source.
To configure the way of editing Agda files, follow the section Emacs mode.
Note
If Emacs cannot find the agda-mode executable, it might help to install the exec-path-from-shell package by doing M-x package-install RET exec-path-from-shell RET and adding the line (exec-path-from-shell-initialize) to your .emacs file.
Windows
A precompiled version of Agda 2.6.0.1, bundled with Emacs 26.1 with the necessary mathematical fonts, is available at http://www.cs.uiowa.edu/~astump/agda.
Installation of the Development Version
After getting the development version from the Git repository:

- Install the prerequisites.
- In the top-level directory of the Agda source tree, run:

cabal update
make install
Note that on a Mac, because ICU is installed in a non-standard location, you need to specify this location on the command line:

make install CABAL_OPTS='--extra-lib-dirs=/usr/local/opt/icu4c/lib --extra-include-dirs=/usr/local/opt/icu4c/include'
You can also add the CABAL_OPTS variable to mk/config.mk (see HACKING.md) instead of passing it via the command line.

To install via stack instead of cabal, copy one of the stack-x.x.x.yaml files of your choice to a stack.yaml file before running make. For example:

cp stack-8.10.1.yaml stack.yaml
make install
Installation Flags
When installing Agda the following flags can be used:

- debug: Enable debugging features that may slow Agda down. Default: off.
- enable-cluster-counting: Enable the count-clusters flag. Note that if enable-cluster-counting is False, then the count-clusters flag triggers an error message. Default: off.
- optimise-heavily: Optimise Agda heavily. (In this case it might make sense to limit GHC’s memory usage.) Default: off.
Installing multiple versions of Agda
Multiple versions of Agda can be installed concurrently by using the --program-suffix flag. For example:

cabal install Agda-2.6.1 --program-suffix=-2.6.1

will install version 2.6.1 under the name agda-2.6.1. You can then switch to this version of Agda in Emacs via

C-c C-x C-s 2.6.1 RETURN

Switching back to the standard version of Agda is then done by:

C-c C-x C-s RETURN
‘Hello world’ in Agda
This section contains two minimal Agda programs that can be used to test if you have installed Agda correctly: one for using Agda interactively as a proof assistant, and one for compiling Agda programs to an executable binary. For a more in-depth introduction to using Agda, see A taste of Agda or the list of tutorials.
Hello, Agda!
Below is a small ‘hello world’ program in Agda (defined in a file hello.agda).

data Greeting : Set where
  hello : Greeting

greet : Greeting
greet = hello
This program defines a data type called Greeting with one constructor hello, and a function definition greet of type Greeting that returns hello.
To load the Agda file, open it in Emacs and load it by pressing C-c C-l (Ctrl+c followed by Ctrl+l). You should now see that the code is highlighted and there should be a message *All done*. If this is the case, congratulations! You have correctly installed Agda and the Agda mode for Emacs. If you also want to compile your Agda programs, continue with the next section.
Hello, World!
Below is a complete executable ‘hello world’ program in Agda (defined in a file hello-world.agda).

module hello-world where

open import Agda.Builtin.IO using (IO)
open import Agda.Builtin.Unit using (⊤)
open import Agda.Builtin.String using (String)

postulate putStrLn : String → IO ⊤
{-# FOREIGN GHC import qualified Data.Text as T #-}
{-# COMPILE GHC putStrLn = putStrLn . T.unpack #-}

main : IO ⊤
main = putStrLn "Hello world!"
This code is self-contained and has several declarations:

- Imports of the IO, ⊤ and String types from the Agda Builtin library.
- A postulate of the function type putStrLn.
- Two pragmas that tell Agda how to compile the function putStrLn.
- A definition of the function main.
To compile the Agda file, either open it in Emacs and press C-c C-x C-c or run agda --compile hello-world.agda from the command line. This will create a binary hello-world in the current directory that prints Hello world!. To find out more about the agda command, use agda --help.
Note
As you can see from this example, by default Agda includes only minimal library support through the Builtin modules. The Agda Standard Library provides bindings for most commonly used Haskell functions, including putStrLn. For a version of this ‘hello world’ program that uses the standard library, see Building an Executable Agda Program.
A Taste of Agda
The objective of this section is to provide a first glimpse of Agda with some small examples. The first is a demonstration of dependently typed programming, and the second shows how to use Agda as a proof assistant. Finally, we build a complete program and compile it to an executable with the GHC and JavaScript backends.
Preliminaries
Before proceeding, make sure that you have installed Agda and a compatible version of the standard library.

Agda programs are typically developed interactively, which means that one can type check code which is not yet complete but contains “holes” that can be filled in later. Editors with support for interactive development of Agda programs include Emacs via the Emacs mode, Atom via the agda mode for Atom, Visual Studio Code via the agda mode for VSCode, and Vim via agda-vim.
Hint
If you want a sneak peek of Agda without installing it, try the Agda Pad.
Note
In this introduction we use several of Agda’s interactive commands to get information from the typechecker and manipulate code with holes. Here is a list of the commands that will be used in this tutorial:

- C-c C-l: Load the file and typecheck it.
- C-c C-d: Deduce the type of a given expression.
- C-c C-n: Normalise a given expression.
- C-c C-,: Show the type expected in the current hole, along with the types of any local variables.
- C-c C-c: Case split on a given variable.
- C-c C-SPC: Replace the hole with a given expression, if it has the correct type.
- C-c C-r: Refine the hole by replacing it with a given expression applied to an appropriate number of new holes.
- C-c C-x C-c (C-x C-c in VS Code): Compile an Agda program.
See Notation for key combinations for a full list of interactive commands (keybindings).
Programming With Dependent Types: Vectors
In the code below, we model the notion of vectors (in the sense of computer science, not in the mathematical sense) in Agda. Roughly speaking, a vector is a list of objects with a determined length.
module hello-world-dep where

open import Data.Nat using (ℕ; zero; suc)

data Vec (A : Set) : ℕ → Set where
  []  : Vec A zero
  _∷_ : ∀ {n} (x : A) (xs : Vec A n) → Vec A (suc n)

infixr 5 _∷_

Paste or type the code above in a new file with name hello-world-dep.agda. Load the file (in Emacs C-c C-l). This also saves the file. If the Agda source code was loaded correctly, you should see that the code is highlighted and see a message *All done*.
Note
If a file does not type check Agda will complain. Often the cursor will jump to the position of the error, and the error will (by default) be underlined. Some errors are treated a bit differently, though. If Agda cannot see that a definition is terminating/productive it will highlight it in light salmon, and if some metavariable other than the goals cannot be solved the code will be highlighted in yellow (the highlighting may not appear until after you have reloaded the file). In case of the latter kinds of errors you can still work with the file, but Agda will (by default) refuse to import it into another module, and if your functions are not terminating Agda may hang. See Background highlighting for a full list of the different background colors used by Agda.
Tip
If you do not like the way Agda syntax or errors are highlighted (if you are colourblind, for instance), then you can tweak the settings by typing M-x customize-group RET agda2-highlight RET in Emacs (after loading an Agda file) and following the instructions.
Agda programs are structured into modules. Each Agda file has one top-level module whose name must match the name of the file, and zero or more nested modules. Each module contains a list of declarations. This example has a single top-level module called hello-world-dep, which has three declarations:

- An open import statement that imports the datatype ℕ and its constructors zero and suc from the module Data.Nat of the standard library and brings them into scope,
- A data declaration defining the datatype Vec with two constructors: the empty vector constructor [] and the cons constructor _∷_,
- And finally an infixr declaration specifying the precedence for the cons operation.
Tip
Agda uses Unicode characters in source files (more specifically: the UTF-8 character encoding), such as ℕ, →, and ∷ in this example. Many mathematical symbols can be typed using the corresponding LaTeX command names. To learn how to enter a unicode character, move the cursor over it and enter M-x describe-char or C-u C-x =. This displays all information on the character, including how to input it with the Agda input method. For example, to input ℕ you can type either \Bbb{N} or \bN. See Unicode input for more details on entering unicode characters.
The datatype Vec
Let us start by looking at the first line of the definition of Vec:

data Vec (A : Set) : ℕ → Set where

This line declares a new datatype and names it Vec. The words data and where are keywords, while the part Vec (A : Set) : ℕ → Set determines the type of Vec.
Vec is not a single type but rather a family of types. This family of types has one parameter A of type Set (which is the sort of small types, such as ℕ, Bool, …) and one index of type ℕ (the type of natural numbers). The parameter A represents the type of the objects of the vector. Meanwhile, the index represents the length of the vector, i.e. the number of objects it contains.
Together, this line tells us that, for any concrete type B : Set and any natural number m : ℕ, we are declaring a new type Vec B m, which also belongs to Set.
The constructors [] and _∷_
Each constructor of a datatype is declared on a separate line and indented with a strictly positive number of spaces (in this case two).
We chose the name [] for the first constructor. It represents the empty vector, and its type is Vec A 0, i.e. it is a vector of length 0.
The second constructor is a mixfix operator named _∷_ (pronounced cons). For any number n : ℕ, it takes as input an object of A and a vector of length n. As output, it produces a vector with length suc n, the successor of n. The number n itself is an implicit argument to the constructor _∷_.
The final declaration with keyword infixr does not belong to the datatype declaration itself; therefore it is not indented. It establishes the precedence of the operator _∷_.
Tip
You can let Agda infer the type of an expression using the ‘Deduce type’ command (C-c C-d). First press C-c C-d to open a prompt, enter a term, for instance 3 ∷ 2 ∷ 1 ∷ [], and press return. Agda infers its type and returns the type Vec ℕ 3, meaning that the given term is a vector with 3 objects of type ℕ.
Note
Almost any character can be used in an identifier (like α, ∧, or ♠, for example). It is therefore necessary to have spaces between most lexical units. For example 3∷2∷1∷[] is a valid identifier, so we need to write 3 ∷ 2 ∷ 1 ∷ [] instead to make Agda parse it successfully.
The total function lookup
Now that Vec is defined, we continue by defining the lookup function that, given a vector and a position, returns the object of the vector at the given position. In contrast to the lookup function we could define in most (non-dependently typed) programming languages, this version of the function is total: all calls to it are guaranteed to return a value in finite time, with no possibility for errors.
To define this function, we use the Fin datatype from the standard library. Fin n is a type with n objects: the numbers 0 to n-1 (in unary notation zero, suc zero, …), which we use to model the n possible positions in a vector of length n.
Now create a new file called hello-world-dep-lookup.agda and type or paste:

module hello-world-dep-lookup where

open import Data.Nat using (ℕ)
open import Data.Vec using (Vec; _∷_)
open import Data.Fin using (Fin; zero; suc)

variable
  A : Set
  n : ℕ

lookup : Vec A n → Fin n → A
lookup (a ∷ as) zero    = a
lookup (a ∷ as) (suc i) = lookup as i
The Vec type that we saw before is actually already in the module Data.Vec of the standard library, so we import it instead of copying the previous definition.
We have declared A and n as generalizable variables to avoid the declaration of implicit arguments. This allows us to use A and n in the type of lookup without binding the names explicitly. More explicitly, the full type of lookup (which we can get by using C-c C-d) is:

lookup : {A : Set} {n : ℕ} → Vec A n → Fin n → A
Warning
zero and suc are not the constructors of ℕ that we saw before, but rather the constructors of Fin. Agda allows overloading of constructor names, and disambiguates between them based on the expected type where they are used.
The definition of the lookup function specifies two cases:

- Either the vector is a ∷ as and the position is zero, so we return the first object a of the vector.
- Or the vector is a ∷ as and the position is suc i, so we recursively look up the object at position i in the tail as of the vector.
There are no cases for the empty vector []. This is no mistake: Agda can determine from the type of lookup that it is impossible to look up an object in the empty vector, since there is no possible index of type Fin 0. For more details, see the section on coverage checking.
Agda as a Proof Assistant: Proving Associativity of Addition
In this section we state and prove the associativity of addition on the natural numbers in Agda. In contrast to the previous section, we build the code line by line. To follow along with this example in Emacs, reload the file after adding each step by pressing C-c C-l.
Statement of associativity
We start by creating a new file named hello-world-proof.agda. Paste or type the following code:

module hello-world-proof where
Now we import the datatype ℕ and the addition operation _+_, both defined in the Agda Builtin library.
open import Data.Nat using (ℕ; _+_)
Next, we import the propositional equality type _≡_ from the module Relation.Binary.PropositionalEquality.
open import Relation.Binary.PropositionalEquality using (_≡_)
Under the Curry-Howard correspondence, the type x ≡ y corresponds to the proposition stating that x and y are equal objects. By writing a function that returns an object of type x ≡ y, we are proving that the two terms are equal.
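As a tiny sketch of this idea (the name 1+1≡2 is our own): since 1 + 1 and 2 compute to the same normal form, the single constructor refl is enough to prove them equal.

```agda
module EqExample where

open import Data.Nat using (ℕ; _+_)
open import Relation.Binary.PropositionalEquality using (_≡_; refl)

-- 1 + 1 normalises to 2, so refl proves 1 + 1 ≡ 2.
1+1≡2 : 1 + 1 ≡ 2
1+1≡2 = refl
```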
Now we can state associativity: given three (possibly different) natural numbers, adding the first to the addition of the second and the third computes to the same value as adding the addition of the first and the second to the third. We name this statement +-assoc.

+-assoc : Set
+-assoc = ∀ (x y z : ℕ) → x + (y + z) ≡ (x + y) + z

This is not yet a proof; we have merely written down the statement (or enunciation) of associativity.
Proof of associativity
The statement +-assoc is a member of Set, i.e. it is a type. Now that we have stated the property in a way that Agda understands, our objective is to prove it. To do so, we have to construct a function of type +-assoc.
First, we need to import the constructors zero and suc of the already imported datatype ℕ, and the constructor refl (short for reflexivity) and function cong (short for congruence) from the standard library.
open import Data.Nat using (zero; suc)
open import Relation.Binary.PropositionalEquality using (refl; cong)
To prove +-assoc we need to find an object of that type. Here, we name this object +-assoc-proof.

+-assoc-proof : ∀ (x y z : ℕ) → x + (y + z) ≡ (x + y) + z
If we now load the file, Agda gives an error: “The following names are declared but not accompanied by a definition: +-assoc-proof”. Indeed, we have only declared the type of +-assoc-proof but not yet given a definition. To build the definition, we need to know more about holes and case splitting.
Holes and case splitting
We can let Agda help us write the proof by using its interactive mode. To start, we first write a simple clause so the file can be loaded even if we still do not know the proof. The clause consists of the name of the property, the input variables, the equals symbol = and the question mark ?.

+-assoc-proof x y z = ?
When we reload the file, Agda no longer throws an error, but instead shows the message *All Goals* with a list of goals. We have now entered the interactive proving mode. Agda turns our question mark into what is called a hole { }0 with a label 0. Each hole stands as a placeholder for a part of the program that is still incomplete and can be refined or resolved interactively.
Note
You are not supposed to enter a hole such as { }0 manually; Agda takes care of the numbering when you load the file. To insert a hole, write either ? or {! !} and load the file to make Agda assign a unique number to it.
To get detailed information about a specific hole, put the cursor in it and press C-c C-,. This displays the type of the hole, as well as the types of all the variables in scope.
In this example we get the information that the goal type is
x + (y + z) ≡ x + y + z
, and there are three variables x
, y
,
and z
in scope, all of type ℕ
.
Note
You might wonder why Agda displays the term (x + y) + z as x + y + z (without parentheses). This is because of the fixity declaration infixl 6 _+_ in the imported Agda.Builtin.Nat module. This declaration means that the _+_ operation is left-associative. More information about mixfix operators, like these arithmetic operations, can be found in the Language Reference. You can also check this associativity example.
To continue writing our proof, we now pick a variable and perform a case split on it. To do so, put the cursor inside the hole and press C-c C-c.
Agda asks for the name of the pattern variable to case on. Let’s
write x
and press return. This replaces the previous clause with
two new clauses, one where x
has been replaced by zero
and another
where it has been replaced by suc x
:
+-assoc-proof zero y z = { }0
+-assoc-proof (suc x) y z = { }1
Important
The x in the type signature of +-assoc-proof is not the same as the x pattern variable in the last clause where suc x is written. The following would also work: +-assoc-proof (suc x₁) y z = { }1.
The scope of a variable declared in a signature is restricted to the
signature itself.
Instead of one hole, we now have two.
The first hole has type y + z ≡ y + z, which is easy to resolve. To do so, put the cursor inside the first hole labeled 0 and press C-c C-r. This replaces the hole by the term refl, which stands for reflexivity and can be used any time we want to construct a term of type w ≡ w for some term w.
+-assoc-proof zero y z = refl
+-assoc-proof (suc x) y z = { }1
Now we have one hole left to resolve. By putting the cursor in it and pressing C-c C-, again, we get the type of the hole: suc (x + (y + z)) ≡ suc (x + y + z). Agda has already applied the definition of _+_ to replace the right-hand side (suc x + y) + z of the equation by suc (x + y + z), and similarly replaced the left-hand side suc x + (y + z) by suc (x + (y + z)).
Tip
You can use the go-to-definition command by selecting the definition that you want to check, e.g. _+_, and pressing M-. in Emacs or C-M-\ in Atom. This takes you to the definition of _+_, which is originally defined in the builtin module Agda.Builtin.Nat.
Tip
You can ask Agda to compute the normal form of a term. To do so, place the cursor in the remaining hole (which should not contain any text at this point) and press C-c C-n. This prompts you for an expression to normalize. For example, if we enter (suc x + y) + z we get back suc (x + y + z) as a result.
Proof by induction¶
If we now look at the type of the remaining hole, we see that both the
lefthand side and the righthand side start with an application of the
constructor suc
. In this kind of situation it suffices to prove that the
two arguments to suc
are equal. This principle is called congruence of
equality _≡_
, and it is expressed by the Agda function cong
.
To use cong we need to apply it to a function or constructor, in this case suc. If we ask Agda to infer the type of cong suc by pressing C-c C-d and entering the term, we get back the type {x y : ℕ} → x ≡ y → suc x ≡ suc y. In other words, cong suc takes as input a proof of an equality between x and y and produces a new proof of equality between suc x and suc y. We write cong suc in the hole and again press C-c C-r to refine the hole. This results in the new line
+-assoc-proof (suc x) y z = cong suc { }2
where the new hole with number 2 is of type x + (y + z) ≡ x + y + z.
To finish the proof, we now make a recursive call +-assoc-proof x y z. Note that this has type x + (y + z) ≡ (x + y) + z, which is exactly what we need. To complete the proof, we type +-assoc-proof x y z into the hole and solve it with C-c C-SPC. This replaces the hole with the given term and completes the proof.
Note
When we define a recursive function like this, Agda performs termination checking on it. This is important to ensure the recursion is well-founded, and hence will not result in an invalid (circular) proof. In this case, the first argument x of the recursive call is structurally smaller than the first argument suc x on the left-hand side of the clause, hence Agda allows us to make the recursive call. Because termination is an undecidable property, Agda will not accept all terminating functions, but only the ones that are mechanically proved to terminate.
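The distinction can be sketched as follows (the names loop and double are ours, for illustration; they do not appear in the text):

```agda
-- Rejected by the termination checker: the recursive call is on the
-- same argument, so the recursion is not visibly well-founded.
-- loop : ℕ → ℕ
-- loop n = loop n

-- Accepted: the recursive call is on x, which is structurally smaller
-- than the pattern suc x on the left-hand side.
double : ℕ → ℕ
double zero    = zero
double (suc x) = suc (suc (double x))
```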
The final proof +-assoc-proof is defined as follows:
+-assoc-proof zero y z = refl
+-assoc-proof (suc x) y z = cong suc (+-assoc-proof x y z)
When we reload the file, we see *All Done*. This means that +-assoc-proof is indeed a proof of the statement +-assoc.
Here is the final code of the ‘Hello world’ proof example, with all imports together at the top of the file:
module hello-world-proof where
open import Data.Nat using (ℕ; zero; suc; _+_)
open import Relation.Binary.PropositionalEquality using (_≡_; refl; cong)
+-assoc : Set
+-assoc = ∀ (x y z : ℕ) → x + (y + z) ≡ (x + y) + z
+-assoc-proof : ∀ (x y z : ℕ) → x + (y + z) ≡ (x + y) + z
+-assoc-proof zero y z = refl
+-assoc-proof (suc x) y z = cong suc (+-assoc-proof x y z)
Tip
You can learn more details about proving in the chapter Proof by Induction of the online book Programming Language Foundations in Agda.
Building an Executable Agda Program¶
Agda is a dependently typed functional programming language. This means that we can write programs in Agda that interact with the world. In this section, we write a small ‘Hello world’ program in Agda, compile it, and execute it. In contrast to the standalone example on the Hello World page, here we make use of the standard library to write a shorter version of the same program.
Agda Source Code¶
First, we create a new file named hello-world-prog.agda with Emacs or Atom in a folder that we refer to as our top-level folder.
{-# OPTIONS --guardedness #-}
module hello-world-prog where
open import IO
main : Main
main = run (putStrLn "Hello, World!")
A quick line-by-line explanation:
- The first line is a pragma (a special comment) that specifies some options at the top of the file.
- The second line declares the top-level module, named hello-world-prog.
- The third line imports the IO module from the standard library and brings its contents into scope.
- A module exporting a function main of type Main (defined in the IO module of the standard library) can be compiled to a standalone executable. For example: main = run (putStrLn "Hello, World!") runs the IO command putStrLn "Hello, World!" and then quits the program.
Compilation with GHC Backend¶
Once we have loaded the program in Emacs or Atom, we can compile it directly by pressing C-c C-x C-c and entering GHC. Alternatively, we can open a terminal session, navigate to the top-level folder and run:
agda --compile hello-world-prog.agda
The --compile flag here creates, via the GHC backend, a binary file in the top-level folder that the computer can execute.
Finally, we can then run the executable (./hello-world-prog on Unix systems, hello-world-prog.exe on Windows) from the command line:
$ cd <your top-level folder>
$ ./hello-world-prog
Hello, World!
Compilation with JavaScript Backend¶
The JavaScript backend translates the Agda source code of the hello-world-prog.agda file to JavaScript code.
From Emacs or Atom, press C-c C-x C-c and enter JS to compile the module to JavaScript. Alternatively, open a terminal session, navigate to the top-level folder and run:
agda --js hello-world-prog.agda
This creates several .js files in the top-level folder. The file corresponding to our source code has the name jAgda.hello-world-prog.js.
Hint
The additional --js-optimize flag can be used to make the generated JavaScript code faster but less readable. Moreover, the --js-minify flag makes the generated JavaScript code smaller and even less readable.
Where to go from here?¶
There are many books and tutorials on Agda. We recommend this list of tutorials.
Join the Agda Community!¶
Get in touch and join the Agda community, or join the conversation on the Agda Zulip.
A List of Tutorials¶
Note
Some of the materials linked on this page have been created for older versions of Agda and might no longer apply directly to the latest release.
Books on Agda¶
 Phil Wadler, Wen Kokke, and Jeremy G. Siek (2019). Programming Language Foundations in Agda
 Aaron Stump (2016). Verified Functional Programming in Agda
Tutorials and lecture notes¶
 Jesper Cockx (2021). Programming and Proving in Agda. An introduction to Agda for a general audience of functional programmers. It starts from basic knowledge of Haskell and builds up to using equational reasoning to formally prove correctness of functional programs.
 Musa Alhassy (2019). A slowpaced introduction to reflection in Agda.
 Jesper Cockx (2019). Formalize all the things (in Agda).
 Jan Malakhovski (2013). Brutal [Meta]Introduction to Dependent Types in Agda.
 Diviánszky Péter (2012). Agda Tutorial.
 Ana Bove, Peter Dybjer, and Ulf Norell (2009). A Brief Overview of Agda  A Functional Language with Dependent Types (in TPHOLs 2009) with an example of reflection. Code.
 Andreas Abel (2009). An Introduction to Dependent Types and Agda. Lecture notes used in teaching functional programming: basic introduction to Agda, CurryHoward, equality, and verification of optimizations like fusion.
 Ulf Norell and James Chapman (2008). Dependently Typed Programming in Agda. This is aimed at functional programmers.
 Ana Bove and Peter Dybjer (2008). Dependent Types at Work. A gentle introduction including logic and proofs of programs.
 Anton Setzer (2008). Lecture notes on Interactive Theorem Proving. Swansea University. These lecture notes are based on Agda and contain an introduction of Agda for students with a very basic background in logic and functional programming.
Videos on Agda¶
 Conor McBride (2014). Introduction to Dependently Typed Programming using Agda. (videos of lectures). Associated source files, with exercises.
 Daniel Licata (2013). Dependently Typed Programming in Agda (at OPLSS 2013).
 Daniel Peebles (2011). Introduction to Agda. Video of talk from the January 2011 Boston Haskell session at MIT.
Courses using Agda¶
 Computer Aided Reasoning Material for a 3rd / 4th year course (g53cfr, g54cfr) at the University of Nottingham 2010 by Thorsten Altenkirch
 Type Theory in Rosario Material for an Agda course in Rosario, Argentina in 2011 by Thorsten Altenkirch
 Software System Design and Implementation, undergrad(?) course at the University of New South Wales by Manuel Chakravarty.
 Tüübiteooria / Type Theory, graduate course at the University of Tartu by Varmo Vene and James Chapman.
 Advanced Topics in Programming Languages: Dependent Type Systems, course at the University of Pennsylvania by Stephanie Weirich.
 Categorical Logic, course at the University of Cambridge by Samuel Staton.
 Dependently typed functional languages, master level course at EAFIT University by Andrés SicardRamírez.
 Introduction to Dependently Typed Programming using Agda, research level course at the University of Edinburgh by Conor McBride.
 Agda, introductory course for master students at ELTE Eötvös Collegium in Budapest by Péter Diviánszky and Ambrus Kaposi.
 Types for Programs and Proofs, course at Chalmers University of Technology.
 Advanced Functional Programming (in German), course at LudwigMaximiliansUniversity Munich.
 Dependently typed metaprogramming (in Agda), Summer (2013) course at the University of Cambridge by Conor McBride.
 ComputerChecked Programs and Proofs (COMP 3601), Dan Licata, Wesleyan, Fall 2013.
 Advanced Functional Programming Fall 2013 (CS410), Conor McBride, Strathclyde, notes from 2015, videos from 2017.
 Interactive Theorem proving (CS__336), Anton Setzer, Swansea University, Lent 2008.
 Inductive and inductiverecursive definitions in Intuitionistic Type Theory, lectures by Peter Dybjer at the Oregon Programming Languages Summer School 2015.
 Introduction to Univalent Foundations of Mathematics with Agda, MGS 2019, Martín Hötzel Escardó
 HigherDimensional Type Theory (CSCI 8980), courses on homotopy type theory and cubical type theory, Favonia, the University of Minnesota, Spring 2020
 Correctbyconstruction Programming in Agda, a course at the EUTYPES Summer School ‘19 in Ohrid.
 Lectures on Agda, a course by Peter Selinger at Dalhousie University, Winter 2021.
Miscellaneous¶
 Agda has a Wikipedia page
Language Reference¶
Abstract definitions¶
Definitions can be marked as abstract, for the purpose of hiding implementation details, or to speed up type-checking of other parts. In essence, abstract definitions behave like postulates, and thus do not reduce/compute. For instance, proofs whose content does not matter could be marked abstract, to prevent Agda from unfolding them (which might slow down type-checking).
As a guiding principle, all the rules concerning abstract are designed to prevent the leaking of implementation details of abstract definitions. Similar concepts in other programming languages include (a non-representative sample): UCSD Pascal’s and Java’s interfaces and ML’s signatures. (Especially when abstract definitions are used in combination with modules.)
Synopsis¶
Declarations can be marked as abstract using the block keyword abstract.
Outside of abstract blocks, abstract definitions do not reduce; they are treated as postulates. In particular:
- Abstract functions never match, thus, do not reduce.
- Abstract data types do not expose their constructors.
- Abstract record types do not expose their fields nor their constructor.
- Other declarations cannot be abstract.
Inside abstract blocks, abstract definitions reduce while type checking definitions, but not while checking their type signatures. Otherwise, due to dependent types, one could leak implementation details (e.g. expose reduction behavior by using propositional equality).
Consequently information from checking the body of a definition cannot leak into its type signature, effectively disabling type inference for abstract definitions. This means that all abstract definitions need a complete type signature.
The reach of the abstract keyword block extends recursively to the where blocks of a function and the declarations inside of a record declaration, but not inside modules declared in an abstract block.
Examples¶
Integers can be implemented in various ways, e.g. as difference of two natural numbers:
module Integer where
abstract
ℤ : Set
ℤ = Nat × Nat
0ℤ : ℤ
0ℤ = 0 , 0
1ℤ : ℤ
1ℤ = 1 , 0
_+ℤ_ : (x y : ℤ) → ℤ
(p , n) +ℤ (p' , n') = (p + p') , (n + n')
-ℤ_ : ℤ → ℤ
-ℤ (p , n) = (n , p)
_≡ℤ_ : (x y : ℤ) → Set
(p , n) ≡ℤ (p' , n') = (p + n') ≡ (p' + n)
private
postulate
+-comm : ∀ n m → (n + m) ≡ (m + n)
invℤ : ∀ x → (x +ℤ (-ℤ x)) ≡ℤ 0ℤ
invℤ (p , n) rewrite +-comm (p + n) 0 | +-comm p n = refl
Using abstract we do not give away the actual representation of integers, nor the implementation of the operations. We can construct them from 0ℤ, 1ℤ, _+ℤ_, and -ℤ_, but only reason about equality ≡ℤ with the provided lemma invℤ.
The following property shape-of-0ℤ of the integer zero exposes the representation of integers as pairs. As such, it is rejected by Agda: when checking its type signature, proj₁ x fails to type check since x is of abstract type ℤ. Remember that the abstract definition of ℤ does not unfold in type signatures, even when in an abstract block! To work around this we have to define aliases for the projection functions:
-- A property about the representation of zero integers:
abstract
private
posZ : ℤ → Nat
posZ = proj₁
negZ : ℤ → Nat
negZ = proj₂
shape-of-0ℤ : ∀ (x : ℤ) (is0ℤ : x ≡ℤ 0ℤ) → posZ x ≡ negZ x
shape-of-0ℤ (p , n) refl rewrite +-comm p 0 = refl
By requiring shape-of-0ℤ to be private in order to type-check, leaking of representation details is prevented.
Scope of abstraction¶
In child modules, when checking an abstract definition, the abstract definitions of the parent module are transparent:
module M1 where
abstract
x : Nat
x = 0
module M2 where
abstract
x-is-0 : x ≡ 0
x-is-0 = refl
Thus, child modules can see into the representation choices of their parent modules. However, parent modules cannot see like this into child modules, nor can sibling modules see through each other's abstract definitions. An exception to this is anonymous modules, which share abstract scope with their parent module, allowing parent or sibling modules to see inside their abstract definitions.
The reach of the abstract
keyword does not extend into modules:
module Parent where
abstract
module Child where
y : Nat
y = 0
x : Nat
x = 0  -- to avoid "useless abstract" error
y-is-0 : Child.y ≡ 0
y-is-0 = refl
The declarations in module Child
are not abstract!
Abstract definitions with whereblocks¶
Definitions in a where
block of an abstract definition are abstract
as well. This means, they can see through the abstractions of their
uncles:
module Where where
abstract
x : Nat
x = 0
y : Nat
y = x
where
x≡y : x ≡ 0
x≡y = refl
Builtins¶
The Agda type checker knows about, and has special treatment for, a number of different concepts. The most prominent is natural numbers, which have a special representation as Haskell integers and support for fast arithmetic. The surface syntax of these concepts is not fixed, however, so in order to use the special treatment of natural numbers (say) you define an appropriate data type and then bind that type to the natural number concept using a BUILTIN pragma.
Some builtin types support primitive functions that have no corresponding Agda
definition. These functions are declared using the primitive
keyword by
giving their type signature.
Using the builtin types¶
While it is possible to define your own versions of the builtin types and bind
them using BUILTIN
pragmas, it is recommended to use the definitions in the
Agda.Builtin
modules. These modules are installed when you install Agda and
so are always available. For instance, builtin natural numbers are defined in
Agda.Builtin.Nat
. The standard library and the agda-prelude re-export the definitions from these modules.
The unit type¶
module Agda.Builtin.Unit
The unit type is bound to the builtin UNIT
as follows:
record ⊤ : Set where
{-# BUILTIN UNIT ⊤ #-}
Agda needs to know about the unit type since some of the primitive operations in the reflected type checking monad return values in the unit type.
The Σtype¶
module Agda.Builtin.Sigma
The builtin Σ
type of dependent pairs is defined as follows:
record Σ {a b} (A : Set a) (B : A → Set b) : Set (a ⊔ b) where
constructor _,_
field
fst : A
snd : B fst
open Σ public
infixr 4 _,_
{-# BUILTIN SIGMA Σ #-}
Lists¶
module Agda.Builtin.List
Builtin lists are bound using the LIST
builtin:
data List {a} (A : Set a) : Set a where
[] : List A
_∷_ : (x : A) (xs : List A) → List A
{-# BUILTIN LIST List #-}
infixr 5 _∷_
The constructors are bound automatically when binding the type. Lists are not
required to be level polymorphic; List : Set → Set
is also accepted.
As with booleans, the effect of binding the LIST
builtin is to let
you use primitive functions working with lists, such as primStringToList
and primStringFromList
, and letting the GHC backend
know to compile the List type to Haskell lists.
Maybe¶
module Agda.Builtin.Maybe
Builtin maybe type is bound using the MAYBE
builtin:
data Maybe {a} (A : Set a) : Set a where
nothing : Maybe A
just : A → Maybe A
{-# BUILTIN MAYBE Maybe #-}
The constructors are bound automatically when binding the type. Maybe is not
required to be level polymorphic; Maybe : Set → Set
is also accepted.
As with lists, the effect of binding the MAYBE builtin is to let you use primitive functions working with maybes, such as primStringUncons that returns the head and tail of a string (if it is non-empty), and letting the GHC backend know to compile the Maybe type to Haskell maybes.
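For illustration, a typical use of Maybe is to make a partial function total. A minimal sketch (the name head? is ours; it assumes the List and Maybe types from the previous sections are in scope):

```agda
-- Return the first element of a list, if there is one.
head? : ∀ {a} {A : Set a} → List A → Maybe A
head? []      = nothing
head? (x ∷ _) = just x
```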
Booleans¶
module Agda.Builtin.Bool
Builtin booleans are bound using the BOOL
, TRUE
and FALSE
builtins:
data Bool : Set where
false true : Bool
{-# BUILTIN BOOL Bool #-}
{-# BUILTIN TRUE true #-}
{-# BUILTIN FALSE false #-}
Note that unlike for natural numbers, you need to bind the constructors separately. The reason for this is that Agda cannot tell which constructor should correspond to true and which to false, since you are free to name them whatever you like.
The effect of binding the boolean type is that you can then use primitive
functions returning booleans, such as builtin NATEQUALS
, and letting the
GHC backend know to compile the type to Haskell Bool.
Natural numbers¶
module Agda.Builtin.Nat
Builtin natural numbers are bound using the NATURAL
builtin as follows:
data Nat : Set where
zero : Nat
suc : Nat → Nat
{-# BUILTIN NATURAL Nat #-}
The names of the data type and the constructors can be chosen freely, but the shape of the datatype needs to match the one given above (modulo the order of the constructors). Note that the constructors need not be bound explicitly.
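For example, a datatype with differently named constructors would be accepted just as well (MyNat, z and s are our names, for illustration):

```agda
data MyNat : Set where
  z : MyNat
  s : MyNat → MyNat
{-# BUILTIN NATURAL MyNat #-}
```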
Binding the builtin natural numbers as above has the following effects:
- The use of natural number literals is enabled. By default the type of a natural number literal will be Nat, but it can be overloaded to include other types as well.
- Closed natural numbers are represented as Haskell integers at compile-time.
- The compiler backends compile natural numbers to the appropriate number type in the target language.
- Binding the builtin natural number functions described below is enabled.
Functions on natural numbers¶
There are a number of builtin functions on natural numbers. These are special in that they have both an Agda definition and a primitive implementation. The primitive implementation is used to evaluate applications to closed terms, and the Agda definition is used otherwise. This lets you prove things about the functions while still enjoying good performance of compiletime evaluation. The builtin functions are the following:
_+_ : Nat → Nat → Nat
zero + m = m
suc n + m = suc (n + m)
{-# BUILTIN NATPLUS _+_ #-}
_-_ : Nat → Nat → Nat
n - zero = n
zero - suc m = zero
suc n - suc m = n - m
{-# BUILTIN NATMINUS _-_ #-}
_*_ : Nat → Nat → Nat
zero * m = zero
suc n * m = (n * m) + m
{-# BUILTIN NATTIMES _*_ #-}
_==_ : Nat → Nat → Bool
zero == zero = true
suc n == suc m = n == m
_ == _ = false
{-# BUILTIN NATEQUALS _==_ #-}
_<_ : Nat → Nat → Bool
_ < zero = false
zero < suc _ = true
suc n < suc m = n < m
{-# BUILTIN NATLESS _<_ #-}
div-helper : Nat → Nat → Nat → Nat → Nat
div-helper k m zero j = k
div-helper k m (suc n) zero = div-helper (suc k) m n m
div-helper k m (suc n) (suc j) = div-helper k m n j
{-# BUILTIN NATDIVSUCAUX div-helper #-}
mod-helper : Nat → Nat → Nat → Nat → Nat
mod-helper k m zero j = k
mod-helper k m (suc n) zero = mod-helper 0 m n m
mod-helper k m (suc n) (suc j) = mod-helper (suc k) m n j
{-# BUILTIN NATMODSUCAUX mod-helper #-}
The Agda definitions are checked to make sure that they really define the corresponding builtin function. The definitions are not required to be exactly those given above, for instance, addition and multiplication can be defined by recursion on either argument, and you can swap the arguments to the addition in the recursive case of multiplication.
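For instance, the following alternative definition of addition by recursion on the second argument would also be accepted for the NATPLUS builtin (a sketch; it would replace the definition above, since a builtin can only be bound once per file):

```agda
-- Addition by recursion on the second argument; Agda still accepts
-- this as the NATPLUS builtin.
_+_ : Nat → Nat → Nat
n + zero  = n
n + suc m = suc (n + m)
{-# BUILTIN NATPLUS _+_ #-}
```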
The NATDIVSUCAUX and NATMODSUCAUX builtins bind helper functions for defining natural number division and modulo operations, and they satisfy the properties
div n (suc m) ≡ div-helper 0 m n m
mod n (suc m) ≡ mod-helper 0 m n m
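Using these properties, division and modulo can be defined in terms of the helpers. A minimal sketch (the names _/_ and _%_ are ours; we arbitrarily make division by zero return zero, whereas the standard library instead rules that case out with a NonZero argument):

```agda
_/_ : Nat → Nat → Nat
n / zero  = zero              -- arbitrary choice for the undefined case
n / suc m = div-helper 0 m n m

_%_ : Nat → Nat → Nat
n % zero  = zero              -- arbitrary choice for the undefined case
n % suc m = mod-helper 0 m n m
```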
Machine words¶
module Agda.Builtin.Word
module Agda.Builtin.Word.Properties
Agda supports builtin 64bit machine words, bound with the WORD64
builtin:
postulate Word64 : Set
{-# BUILTIN WORD64 Word64 #-}
Machine words can be converted to and from natural numbers using the following primitives:
primitive
primWord64ToNat : Word64 → Nat
primWord64FromNat : Nat → Word64
Converting to a natural number is the trivial embedding, and converting from a natural number gives you the remainder modulo 2⁶⁴. The proof of the former theorem:
primitive
primWord64ToNatInjective : ∀ a b → primWord64ToNat a ≡ primWord64ToNat b → a ≡ b
is in the Properties
module. The proof of the latter theorem is not primitive,
but can be defined in a library using primTrustMe.
Basic arithmetic operations can be defined on Word64
by converting to
natural numbers, performing the corresponding operation, and then converting
back. The compiler will optimise these to use 64bit arithmetic. For
instance:
addWord : Word64 → Word64 → Word64
addWord a b = primWord64FromNat (primWord64ToNat a + primWord64ToNat b)
subWord : Word64 → Word64 → Word64
subWord a b = primWord64FromNat ((primWord64ToNat a + 18446744073709551616)  primWord64ToNat b)
These compile to primitive addition and subtraction on 64bit words, which in the
GHC backend map to operations on Haskell 64bit words
(Data.Word.Word64
).
Integers¶
module Agda.Builtin.Int
Builtin integers are bound with the INTEGER
builtin to a data type with
two constructors: one for positive and one for negative numbers. The builtins
for the constructors are INTEGERPOS
and INTEGERNEGSUC
.
data Int : Set where
pos : Nat → Int
negsuc : Nat → Int
{-# BUILTIN INTEGER Int #-}
{-# BUILTIN INTEGERPOS pos #-}
{-# BUILTIN INTEGERNEGSUC negsuc #-}
Here negsuc n represents the integer -n - 1. Unlike for natural numbers, there is no special representation of integers at compile-time since the overhead of using the data type compared to Haskell integers is not that big.
Builtin integers support the following primitive operation (given a suitable binding for String):
primitive
primShowInteger : Int → String
Floats¶
module Agda.Builtin.Float
module Agda.Builtin.Float.Properties
Floating point numbers are bound with the FLOAT
builtin:
postulate Float : Set
{-# BUILTIN FLOAT Float #-}
This lets you use floating point literals. Floats are represented by the type checker as IEEE 754 binary64 double precision floats, with the restriction that there is exactly one NaN value. The following primitive functions are available (with suitable bindings for Nat, Bool, String, Int, and Maybe):
primitive
-- Relations
primFloatIsInfinite : Float → Bool
primFloatIsNaN : Float → Bool
primFloatIsNegativeZero : Float → Bool
-- Conversions
primNatToFloat : Nat → Float
primIntToFloat : Int → Float
primFloatToRatio : Float → (Σ Int λ _ → Int)
primRatioToFloat : Int → Int → Float
primShowFloat : Float → String
-- Operations
primFloatPlus : Float → Float → Float
primFloatMinus : Float → Float → Float
primFloatTimes : Float → Float → Float
primFloatDiv : Float → Float → Float
primFloatPow : Float → Float → Float
primFloatNegate : Float → Float
primFloatSqrt : Float → Float
primFloatExp : Float → Float
primFloatLog : Float → Float
primFloatSin : Float → Float
primFloatCos : Float → Float
primFloatTan : Float → Float
primFloatASin : Float → Float
primFloatACos : Float → Float
primFloatATan : Float → Float
primFloatATan2 : Float → Float → Float
primFloatSinh : Float → Float
primFloatCosh : Float → Float
primFloatTanh : Float → Float
primFloatASinh : Float → Float
primFloatACosh : Float → Float
primFloatATanh : Float → Float
The primitive binary relations implement their IEEE 754 equivalents, which means
that primFloatEquality
is not reflexive, and primFloatInequality
and
primFloatLess
are not total. (Specifically, NaN is not related to anything,
including itself.)
The primFloatIsSafeInteger
function determines whether the value is a number
that is a safe integer, i.e., is within the range where the arithmetic
operations do not lose precision.
Floating point numbers can be converted to their raw representation using the primitive:
primitive
primFloatToWord64 : Float → Word64
which normalises all NaN
to a canonical NaN
with an injectivity proof:
primFloatToWord64Injective : ∀ a b → primFloatToWord64 a ≡ primFloatToWord64 b → a ≡ b
in the Properties
module. These primitives can be used to define a safe
decidable propositional equality with the --safe option. The function
primFloatToWord64
cannot be guaranteed to be consistent across backends,
therefore relying on the specific result may result in inconsistencies.
The rounding operations (primFloatRound
, primFloatFloor
, and
primFloatCeiling
) return a value of type Maybe Int
, and return nothing
when applied to NaN or the infinities:
primitive
primFloatRound : Float → Maybe Int
primFloatFloor : Float → Maybe Int
primFloatCeiling : Float → Maybe Int
The primFloatDecode
function decodes a floatingpoint number to its mantissa
and exponent, normalised such that the mantissa is the smallest possible
integer. It fails when applied to NaN or the infinities, returning nothing
.
The primFloatEncode
function encodes a pair of a mantissa and exponent to a
floatingpoint number. It fails when the resulting number cannot be represented
as a float. Note that primFloatEncode
may result in a loss of precision.
primitive
  primFloatDecode : Float → Maybe (Σ Int λ _ → Int)
  primFloatEncode : Int → Int → Maybe Float
Characters¶
module Agda.Builtin.Char
module Agda.Builtin.Char.Properties
The character type is bound with the CHARACTER
builtin:
postulate Char : Set
{-# BUILTIN CHAR Char #-}
Binding the character type lets you use character literals. The following primitive functions are available on characters (given suitable bindings for Bool, Nat and String):
primitive
primIsLower : Char → Bool
primIsDigit : Char → Bool
primIsAlpha : Char → Bool
primIsSpace : Char → Bool
primIsAscii : Char → Bool
primIsLatin1 : Char → Bool
primIsPrint : Char → Bool
primIsHexDigit : Char → Bool
primToUpper : Char → Char
primToLower : Char → Char
primCharToNat : Char → Nat
primNatToChar : Nat → Char
primShowChar : Char → String
These functions are implemented by the corresponding Haskell functions from
Data.Char (ord
and chr
for primCharToNat
and
primNatToChar
). To make primNatToChar total, chr is applied to the
natural number modulo 0x110000
. Furthermore, to match the behaviour of
strings, surrogate code points are mapped to the replacement
character U+FFFD
.
Converting to a natural number is the obvious embedding, and the proof that it is injective:
primitive
primCharToNatInjective : ∀ a b → primCharToNat a ≡ primCharToNat b → a ≡ b
can be found in the Properties
module.
Strings¶
module Agda.Builtin.String
module Agda.Builtin.String.Properties
The string type is bound with the STRING
builtin:
postulate String : Set
{-# BUILTIN STRING String #-}
Binding the string type lets you use string literals. The following primitive functions are available on strings (given suitable bindings for Bool, Char and List):
primitive
primStringUncons : String → Maybe (Σ Char (λ _ → String))
primStringToList : String → List Char
primStringFromList : List Char → String
primStringAppend : String → String → String
primStringEquality : String → String → Bool
primShowString : String → String
String literals can be overloaded.
Converting to and from a list is injective, and their proofs:
primitive
primStringToListInjective : ∀ a b → primStringToList a ≡ primStringToList b → a ≡ b
primStringFromListInjective : ∀ a b → primStringFromList a ≡ primStringFromList b → a ≡ b
can be found in the Properties module.
Strings cannot represent unicode surrogate code points
(characters in the range U+D800
to U+DFFF
). These are replaced by the
unicode replacement character U+FFFD
if they appear in string literals.
Equality¶
module Agda.Builtin.Equality
The identity type can be bound to the builtin EQUALITY
as follows:
infix 4 _≡_
data _≡_ {a} {A : Set a} (x : A) : A → Set a where
refl : x ≡ x
{-# BUILTIN EQUALITY _≡_ #-}
This lets you use proofs of type lhs ≡ rhs
in the rewrite
construction.
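As a small illustration of the rewrite construction (the name sym' is ours, for illustration), symmetry of equality can be derived by rewriting the goal with the given proof:

```agda
-- rewrite eq replaces x by y in the goal y ≡ x,
-- leaving y ≡ y, which refl closes.
sym' : ∀ {a} {A : Set a} {x y : A} → x ≡ y → y ≡ x
sym' eq rewrite eq = refl
```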
Other variants of the identity type are also accepted as builtin:
data _≡_ {A : Set} : (x y : A) → Set where
refl : (x : A) → x ≡ x
The type of primEraseEquality
has to match the flavor of identity type.
module Agda.Builtin.Equality.Erase
Binding the builtin equality type also enables the primEraseEquality
primitive:
primitive
primEraseEquality : ∀ {a} {A : Set a} {x y : A} → x ≡ y → x ≡ y
The function takes a proof of an equality between two values x and y and stays stuck on it until x and y actually become definitionally equal. Whenever that is the case, primEraseEquality e reduces to refl.
One use of primEraseEquality is to replace an equality proof computed using an expensive function (e.g. a proof by reflection) by one which is trivially refl on the diagonal.
primTrustMe¶
module Agda.Builtin.TrustMe
From the primEraseEquality primitive, we can derive a notion of primTrustMe:
primTrustMe : ∀ {a} {A : Set a} {x y : A} → x ≡ y
primTrustMe {x = x} {y} = primEraseEquality unsafePrimTrustMe
where postulate unsafePrimTrustMe : x ≡ y
As can be seen from the type, primTrustMe must be used with the utmost care to avoid inconsistencies. What makes it different from a postulate is that if x and y are actually definitionally equal, primTrustMe reduces to refl. One use of primTrustMe is to lift the primitive boolean equality on builtin types like String to something that returns a proof object:
eqString : (a b : String) → Maybe (a ≡ b)
eqString a b = if primStringEquality a b
then just primTrustMe
else nothing
With this definition eqString "foo" "foo" computes to just refl.
Sorts¶
The primitive sorts used in Agda’s type system (Set, Prop, and
Setω) are declared using BUILTIN
pragmas in the
Agda.Primitive
module. These pragmas should not be used directly
in other modules, but it is possible to rename these builtin sorts
when importing Agda.Primitive
.
{-# BUILTIN TYPE Set #-}
{-# BUILTIN PROP Prop #-}
{-# BUILTIN SETOMEGA Setω #-}
The primitive sorts Set and Prop are automatically imported at the top of every top-level Agda module, unless the --no-import-sorts flag is enabled.
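A sketch of renaming the builtin sorts on import, a common idiom when --no-import-sorts is enabled (Type is an arbitrary name chosen for illustration):

```agda
{-# OPTIONS --no-import-sorts #-}
-- With --no-import-sorts, Set is not in scope automatically,
-- so we import it under a different name.
open import Agda.Primitive renaming (Set to Type)

id : {A : Type} → A → A
id x = x
```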
Universe levels¶
module Agda.Primitive
Universe levels are also declared using BUILTIN pragmas. In contrast to the Agda.Builtin modules, the Agda.Primitive module is auto-imported, so it is not possible to change the level builtins. For reference, these are the bindings:
postulate
Level : Set
lzero : Level
lsuc : Level → Level
_⊔_ : Level → Level → Level
{-# BUILTIN LEVEL Level #-}
{-# BUILTIN LEVELZERO lzero #-}
{-# BUILTIN LEVELSUC lsuc #-}
{-# BUILTIN LEVELMAX _⊔_ #-}
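Using these bindings, definitions can quantify over levels; a minimal sketch of a universe-polymorphic function:

```agda
open import Agda.Primitive

-- A level-polymorphic constant function: a and b are Levels,
-- inferred from their use in Set a and Set b.
const : ∀ {a b} {A : Set a} {B : Set b} → A → B → A
const x _ = x
```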
Sized types¶
module Agda.Builtin.Size
The builtins for sized types are different from other
builtins in that the names are defined by the BUILTIN
pragma. Hence, to
bind the size primitives it is enough to write:
{-# BUILTIN SIZEUNIV SizeUniv #-}  --  SizeUniv : SizeUniv
{-# BUILTIN SIZE     Size     #-}  --  Size     : SizeUniv
{-# BUILTIN SIZELT   Size<_   #-}  --  Size<_   : ..Size → SizeUniv
{-# BUILTIN SIZESUC  ↑_       #-}  --  ↑_       : Size → Size
{-# BUILTIN SIZEINF  ∞        #-}  --  ∞        : Size
{-# BUILTIN SIZEMAX  _⊔ˢ_     #-}  --  _⊔ˢ_     : Size → Size → Size
Coinduction¶
module Agda.Builtin.Coinduction
The following builtins are used for coinductive definitions:
postulate
∞ : ∀ {a} (A : Set a) → Set a
♯_ : ∀ {a} {A : Set a} → A → ∞ A
♭ : ∀ {a} {A : Set a} → ∞ A → A
{-# BUILTIN INFINITY ∞ #-}
{-# BUILTIN SHARP ♯_ #-}
{-# BUILTIN FLAT ♭ #-}
See Coinduction for more information.
IO¶
module Agda.Builtin.IO
The sole purpose of binding the builtin IO type is to let Agda check that the main function has the right type (see Compilers).
postulate IO : Set → Set
{-# BUILTIN IO IO #-}
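A sketch of a main function whose type is checked against the builtin IO. Here putStrLn is a hypothetical postulate; a real program would additionally bind it to a backend function with a COMPILE pragma:

```agda
open import Agda.Builtin.IO
open import Agda.Builtin.Unit
open import Agda.Builtin.String

-- Hypothetical primitive; a real program binds this to the
-- backend's putStrLn via a COMPILE pragma.
postulate putStrLn : String → IO ⊤

main : IO ⊤
main = putStrLn "Hello, Agda!"
```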
Literal overloading¶
module Agda.Builtin.FromNat
module Agda.Builtin.FromNeg
module Agda.Builtin.FromString
The machinery for overloading literals uses builtins for the conversion functions.
Reflection¶
module Agda.Builtin.Reflection
The reflection machinery has builtin types for representing Agda programs. See Reflection for a detailed description.
Rewriting¶
The experimental and totally unsafe rewriting machinery (not to be confused with the rewrite construct) has a builtin REWRITE for the rewriting relation:
postulate _↦_ : ∀ {a} {A : Set a} → A → A → Set a
{-# BUILTIN REWRITE _↦_ #-}
This builtin is bound to the builtin equality type from Agda.Builtin.Equality in Agda.Builtin.Equality.Rewrite.
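A minimal sketch of declaring a rewrite rule (the lemma +-zero is postulated purely for illustration; once registered, n + 0 reduces to n definitionally):

```agda
{-# OPTIONS --rewriting #-}
open import Agda.Builtin.Nat
open import Agda.Builtin.Equality
open import Agda.Builtin.Equality.Rewrite

postulate +-zero : (n : Nat) → n + 0 ≡ n
{-# REWRITE +-zero #-}

-- Provable by refl, since n + 0 now rewrites to n.
test : (n : Nat) → n + 0 ≡ n
test n = refl
```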
Static values¶
The STATIC pragma can be used to mark definitions which should be normalised before compilation. The typical use case for this is to mark the interpreter of an embedded language as STATIC:
{-# STATIC <Name> #-}
Strictness¶
module Agda.Builtin.Strict
There are two primitives for controlling evaluation order:
primitive
primForce : ∀ {a b} {A : Set a} {B : A → Set b} (x : A) → (∀ x → B x) → B x
primForceLemma : ∀ {a b} {A : Set a} {B : A → Set b} (x : A) (f : ∀ x → B x) → primForce x f ≡ f x
where _≡_ is the builtin equality. At compile-time primForce x f evaluates to f x when x is in weak head normal form (whnf), i.e. one of the following:
 a constructor application
 a literal
 a lambda abstraction
 a type constructor application (data or record type)
 a function type
 a universe (Set _)
Similarly primForceLemma x f, which lets you reason about programs using primForce, evaluates to refl when x is in whnf. At run-time, primForce e f is compiled (by the GHC backend) to let x = e in seq x (f x).
For example, consider the following function:
-- pow’ n a = a * 2ⁿ
pow’ : Nat → Nat → Nat
pow’ zero a = a
pow’ (suc n) a = pow’ n (a + a)
There is a space leak here (both for compile-time and run-time evaluation), caused by unevaluated a + a thunks. This problem can be fixed with primForce:
infixr 0 _$!_
_$!_ : ∀ {a b} {A : Set a} {B : A → Set b} → (∀ x → B x) → ∀ x → B x
f $! x = primForce x f
-- pow n a = a * 2ⁿ
pow : Nat → Nat → Nat
pow zero a = a
pow (suc n) a = pow n $! a + a
Coinduction¶
The corecursive definitions below are accepted if the option --guardedness is active:
{-# OPTIONS --guardedness #-}
(An alternative approach is to use Sized Types.)
Coinductive Records¶
It is possible to define the type of infinite lists (or streams) of
elements of some type A
as follows:
record Stream (A : Set) : Set where
coinductive
field
hd : A
tl : Stream A
As opposed to inductive record types, we have to introduce the keyword
coinductive
before defining the fields that constitute the record.
It is interesting to note that it is not necessary to give an explicit
constructor to the record type Stream
.
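For example, the constant stream can be defined by corecursion, giving each field of the Stream record above separately (in the copattern style used throughout this section):

```agda
repeat : ∀ {A} → A → Stream A
hd (repeat a) = a
tl (repeat a) = repeat a
```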
We can also define pointwise equality (a bisimulation and an equivalence) of a pair of Streams as a coinductive record:
record _≈_ {A} (xs : Stream A) (ys : Stream A) : Set where
coinductive
field
hd≡ : hd xs ≡ hd ys
tl≈ : tl xs ≈ tl ys
Using copatterns we can define a pair of functions
on Stream
s such that one returns the elements in
the even positions and the other the elements in the odd positions:
even : ∀ {A} → Stream A → Stream A
hd (even xs) = hd xs
tl (even xs) = even (tl (tl xs))
odd : ∀ {A} → Stream A → Stream A
odd xs = even (tl xs)
split : ∀ {A} → Stream A → Stream A × Stream A
split xs = even xs , odd xs
as well as a function that merges a pair of Stream
s by interleaving their elements:
merge : ∀ {A} → Stream A × Stream A → Stream A
hd (merge (xs , ys)) = hd xs
tl (merge (xs , ys)) = merge (ys , tl xs)
Finally, we can prove that merge
is a left inverse for split
:
merge-split-id : ∀ {A} (xs : Stream A) → merge (split xs) ≈ xs
hd≡ (merge-split-id _) = refl
tl≈ (merge-split-id xs) = merge-split-id (tl xs)
Old Coinduction¶
Note
This is the old way of supporting coinduction in Agda. You are advised to use Coinductive Records instead.
To use coinduction it is recommended that you import the module Coinduction from the standard library. Coinductive types can then be defined by labelling coinductive occurrences using the delay operator ∞
:
data Coℕ : Set where
zero : Coℕ
suc : ∞ Coℕ → Coℕ
The type ∞ A
can be seen as a suspended computation of type A
. It comes with delay and force functions:
♯_ : ∀ {a} {A : Set a} → A → ∞ A
♭ : ∀ {a} {A : Set a} → ∞ A → A
Values of coinductive types can be constructed using corecursion, which does not need to terminate, but has to be productive. As an approximation to productivity the termination checker requires that corecursive definitions are guarded by coinductive constructors. As an example the infinite “natural number” can be defined as follows:
inf : Coℕ
inf = suc (♯ inf)
The check for guarded corecursion is integrated with the check for sizechange termination, thus allowing interesting combinations of inductive and coinductive types. We can for instance define the type of stream processors, along with some functions:
-- Infinite streams.
data Stream (A : Set) : Set where
_∷_ : (x : A) (xs : ∞ (Stream A)) → Stream A
-- A stream processor SP A B consumes elements of A and produces
-- elements of B. It can only consume a finite number of A’s before
-- producing a B.
data SP (A B : Set) : Set where
get : (f : A → SP A B) → SP A B
put : (b : B) (sp : ∞ (SP A B)) → SP A B
-- The function eat is defined by an outer corecursion into Stream B
-- and an inner recursion on SP A B.
eat : ∀ {A B} → SP A B → Stream A → Stream B
eat (get f) (a ∷ as) = eat (f a) (♭ as)
eat (put b sp) as = b ∷ ♯ eat (♭ sp) as
-- Composition of stream processors.
_∘_ : ∀ {A B C} → SP B C → SP A B → SP A C
get f₁ ∘ put x sp₂ = f₁ x ∘ ♭ sp₂
put x sp₁ ∘ sp₂ = put x (♯ (♭ sp₁ ∘ sp₂))
sp₁ ∘ get f₂ = get (λ x → sp₁ ∘ f₂ x)
It is also possible to define “coinductive families”. It is recommended not to use the delay constructor (♯_
) in a constructor’s index expressions. The following definition of equality between coinductive “natural numbers” is discouraged:
data _≈’_ : Coℕ → Coℕ → Set where
zero : zero ≈’ zero
suc : ∀ {m n} → ∞ (m ≈’ n) → suc (♯ m) ≈’ suc (♯ n)
The recommended definition is the following one:
data _≈_ : Coℕ → Coℕ → Set where
zero : zero ≈ zero
suc : ∀ {m n} → ∞ (♭ m ≈ ♭ n) → suc m ≈ suc n
Copatterns¶
Consider the following record:
record Enumeration (A : Set) : Set where
constructor enumeration
field
start : A
forward : A → A
backward : A → A
This gives an interface that allows us to move along the elements of a
data type A
.
For example, we can get the “third” element of a type A
:
open Enumeration
3rd : {A : Set} → Enumeration A → A
3rd e = forward e (forward e (forward e (start e)))
Or we can go back 2 positions starting from a given a
:
backward2 : {A : Set} → Enumeration A → A → A
backward2 e a = backward (backward a)
where
open Enumeration e
Now, we want to use these methods on natural numbers. For this, we need
a record of type Enumeration Nat
. Without copatterns, we would
specify all the fields in a single expression:
open Enumeration
enumNat : Enumeration Nat
enumNat = record {
start = 0
; forward = suc
; backward = pred
}
where
pred : Nat → Nat
pred zero = zero
pred (suc x) = x
test₁ : 3rd enumNat ≡ 3
test₁ = refl
test₂ : backward2 enumNat 5 ≡ 3
test₂ = refl
Note that if we want to use automated casesplitting and pattern matching to implement one of the fields, we need to do so in a separate definition.
With copatterns, we can define the fields of a record as separate declarations, in the same way that we would give different cases for a function:
open Enumeration
enumNat : Enumeration Nat
start enumNat = 0
forward enumNat n = suc n
backward enumNat zero = zero
backward enumNat (suc n) = n
The resulting behaviour is the same in both cases:
test₁ : 3rd enumNat ≡ 3
test₁ = refl
test₂ : backward2 enumNat 5 ≡ 3
test₂ = refl
Copatterns in function definitions¶
In fact, we do not need to start at 0
. We can allow the user to
specify the starting element.
Without copatterns, we just add the extra argument to the function declaration:
open Enumeration
enumNat : Nat → Enumeration Nat
enumNat initial = record {
start = initial
; forward = suc
; backward = pred
}
where
pred : Nat → Nat
pred zero = zero
pred (suc x) = x
test₁ : 3rd (enumNat 10) ≡ 13
test₁ = refl
With copatterns, the function argument must be repeated once for each field in the record:
open Enumeration
enumNat : Nat → Enumeration Nat
start (enumNat initial) = initial
forward (enumNat _) n = suc n
backward (enumNat _) zero = zero
backward (enumNat _) (suc n) = n
Mixing patterns and copatterns¶
Instead of allowing an arbitrary value, we want to limit the user to
two choices: 0
or 42
.
Without copatterns, we would need an auxiliary definition to choose which value to start with based on the userprovided flag:
open Enumeration
if_then_else_ : {A : Set} → Bool → A → A → A
if true then x else _ = x
if false then _ else y = y
enumNat : Bool → Enumeration Nat
enumNat ahead = record {
start = if ahead then 42 else 0
; forward = suc
; backward = pred
}
where
pred : Nat → Nat
pred zero = zero
pred (suc x) = x
With copatterns, we can do the case analysis directly by pattern matching:
open Enumeration
enumNat : Bool → Enumeration Nat
start (enumNat true) = 42
start (enumNat false) = 0
forward (enumNat _) n = suc n
backward (enumNat _) zero = zero
backward (enumNat _) (suc n) = n
Tip
When using copatterns to define an element of a record type,
the fields of the record must be in scope. In the examples above,
we use open Enumeration
to bring the fields of the record into
scope.
Consider the first example:
enumNat : Enumeration Nat
start enumNat = 0
forward enumNat n = suc n
backward enumNat zero = zero
backward enumNat (suc n) = n
If the fields of the Enumeration
record are not in scope (in
particular, the start
field), then Agda will not be able to
figure out what the first copattern means:
Could not parse the left-hand side start enumNat
Operators used in the grammar:
    None
when scope checking the left-hand side start enumNat in the
definition of enumNat
The solution is to open the record before using its fields:
open Enumeration
enumNat : Enumeration Nat
start enumNat = 0
forward enumNat n = suc n
backward enumNat zero = zero
backward enumNat (suc n) = n
Core language¶
A program in Agda consists of a number of declarations written in an *.agda
file. A declaration introduces a new identifier and gives its type and
definition. It is possible to declare:
 datatypes
 record types (including coinductive records)
 function definitions (including mixfix operators and abstract definitions)
 modules
 local definitions let and where
 postulates
 variables
 pattern synonyms
 precedence (fixity)
 pragmas, and
 program options
Declarations have a signature part and a definition part. These can appear separately in the program. Names must be declared before they are used, but by separating the signature from the definition it is possible to define things in mutual recursion.
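A sketch of how separating signatures from definitions enables mutual recursion (the names even? and odd? are illustrative):

```agda
open import Agda.Builtin.Nat
open import Agda.Builtin.Bool

-- Signatures first...
even? : Nat → Bool
odd?  : Nat → Bool

-- ...definitions later, so each may refer to the other.
even? zero    = true
even? (suc n) = odd? n
odd?  zero    = false
odd?  (suc n) = even? n
```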
Grammar¶
At its core, Agda is a dependently typed lambda calculus. The grammar of terms, where a represents a generic term, is:
a ::= x            -- variable
    | λ x → a      -- lambda abstraction
    | f            -- defined function
    | (x : a) → a  -- function space
    | F            -- data/record type
    | c a          -- data/record constructor
    | s            -- sort (Setᵢ, Setω+ᵢ)
Syntax overview¶
The syntax of an Agda program is defined in terms of three key components:
 Expressions are used to write function bodies and types.
 Declarations declare types, datatypes, postulates, records, functions etc.
 Pragmas define program options.
There are also three main levels of syntax, corresponding to different levels of interpretation:
 Concrete is the high-level sugared syntax, representing exactly what the user wrote (Agda.Syntax.Concrete).
 Abstract is the desugared syntax before type checking (Agda.Syntax.Abstract).
 Internal is the fully interpreted, type-checked core Agda syntax (Agda.Syntax.Internal).
The process of translating an *.agda
file into an executable has several
stages:
*.agda file
==[ parser (Lexer.x + Parser.y) ]==>
Concrete syntax
==[ nicifier (Syntax.Concrete.Definitions) ]==>
'Nice' concrete syntax
==[ scope checking (Syntax.Translation.ConcreteToAbstract) ]==>
Abstract syntax
==[ type checking (TypeChecking.Rules.*) ]==>
Internal syntax
==[ Agda.Compiler.ToTreeless ]==>
Treeless syntax
==[ different backends (Compiler.MAlonzo.*, Compiler.JS.*, ...) ]==>
Source code
==[ different compilers (GHC compiler, ...) ]==>
Executable
The following sections describe these stages in more detail:
Lexer¶
Lexical analysis (aka tokenization) is the process of converting a sequence of
characters (the raw *.agda
file) into a sequence of tokens (strings with a
meaning).
The lexer in Agda is generated by Alex, and is an adaptation of GHC’s lexer.
The main lexing function lexer
is called by the
Agda.Syntax.Parser.Parser
to get the next token from the input.
Parser¶
The parser is the component that takes the output of the lexer and builds a data structure that we will call Concrete Syntax, while checking for correct syntax.
The parser is generated by Happy.
Example: when a name is a sequence of parts, the lexer just sees it as a string, the parser does the translation in this step.
Concrete Syntax¶
The concrete syntax is a raw representation of the program text without any desugaring at all. This is what the parser produces. The idea is that if we figure out how to keep the concrete syntax around, it can be printed exactly as the user wrote it.
Nice Concrete Syntax¶
The Nice Concrete Syntax
is a slightly reorganized version of the
Concrete Syntax
that is easier to deal with internally. Among other
things, it:
 detects mutual blocks
 assembles definitions from their isolated parts
 collects fixity information of mixfix operators and attaches it to definitions
 emits warnings for possibly unintended but still valid declarations, which are essentially dead code, such as empty instance blocks and misplaced pragmas
Abstract Syntax¶
The translation from Agda.Syntax.Concrete
to Agda.Syntax.Abstract
involves scope analysis, figuring out infix operator precedences and tidying
up definitions.
The abstract syntax Agda.Syntax.Abstract
is the result after desugaring
and scope analysis of the concrete syntax. The type checker works on abstract
syntax, producing internal syntax.
Internal Syntax¶
This is the final stage of syntax before being handed off to one of the backends. Terms are wellscoped and welltyped.
While producing the Internal Syntax, terms are checked for safety. This safety check comprises the termination check and coverage check for functions, and the positivity check for datatypes.
Typedirected operations such as instance resolution and disambiguation of overloaded constructors (different constructors with the same name) also happen here.
The internal syntax module Agda.Syntax.Internal uses the following Haskell datatype to represent the grammar of a Term presented above.
data Term = Var {-# UNPACK #-} !Int Elims -- ^ @x es@ neutral
          | Lam ArgInfo (Abs Term)        -- ^ Terms are beta normal. Relevance is ignored
          | Lit Literal
          | Def QName Elims               -- ^ @f es@, possibly a delta/iota-redex
          | Con ConHead ConInfo Elims
          -- ^ @c es@ or @record { fs = es }@
          --   @es@ allows only Apply and IApply eliminations,
          --   and IApply only for data constructors.
          | Pi (Dom Type) (Abs Type)      -- ^ dependent or non-dependent function space
          | Sort Sort
          | Level Level
          | MetaV {-# UNPACK #-} !MetaId Elims
Treeless Syntax¶
The treeless syntax is intended to be used as input for the compiler backends. It is more lowlevel than the internal syntax and is not used for type checking. Some of the features of the treeless syntax are:
 case expressions instead of case trees
 no instantiated datatypes / constructors
For instance, the Glasgow Haskell Compiler (GHC) backend translates the treeless syntax into a proper GHC Haskell program.
Another backend that may be used is the JavaScript backend, which translates the treeless syntax to JavaScript code.
The treeless representation of the program is in A-normal form (ANF): all case expressions target a single variable, and each alternative may only peel off one constructor.
The backends can handle ANF more easily than the syntax of a language where one may case on arbitrary expressions and use deep patterns.
Coverage Checking¶
To ensure completeness of definitions by pattern matching, Agda performs a coverage check on each definition by pattern matching. This page explains how this coverage check works by starting from simple examples and building up to the general case.
Single match on a nonindexed datatype¶
When a function definition pattern matches on a single argument of a simple (i.e. nonindexed) datatype, there should be a clause for each constructor. For example:
data TrafficLight : Set where
red yellow green : TrafficLight
go : TrafficLight → Bool
go red = false
go yellow = false
go green = true
Alternatively, one or more cases may be replaced by a catchall clause that uses a variable pattern or a wildcard pattern _. In this case, the catchall clause should be last.
go' : TrafficLight → Bool
go' green = true
go' _ = false
Note
When the --exact-split flag is enabled, catch-all clauses should be marked explicitly by a catchall pragma ({-# CATCHALL #-}).
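A sketch of marking the catch-all explicitly, using a hypothetical variant go2 of the go' function above (the OPTIONS pragma belongs at the top of the file):

```agda
{-# OPTIONS --exact-split #-}

go2 : TrafficLight → Bool
go2 green = true
{-# CATCHALL #-}
go2 _ = false
```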
The coverage check can be turned off for an individual definition by putting a {-# NON_COVERING #-} pragma immediately in front of the type signature.
{-# NON_COVERING #-}
go'' : TrafficLight → Bool
go'' red = false
go'' green = true
In the special case of a datatype with no constructors (i.e. an empty type), there should be a single absurd clause with an absurd pattern () and no right-hand side.
data ⊥ : Set where
-- no constructors
magic : {A : Set} → ⊥ → A
magic ()
Matching on multiple arguments¶
If a function matches on several arguments, there should be a case for each possible combination of constructors.
sameColor : TrafficLight → TrafficLight → Bool
sameColor red red = true
sameColor red yellow = false
sameColor red green = false
sameColor yellow red = false
sameColor yellow yellow = true
sameColor yellow green = false
sameColor green red = false
sameColor green yellow = false
sameColor green green = true
Again, one or more cases may be replaced by a catchall clause.
sameColor' : TrafficLight → TrafficLight → Bool
sameColor' red red = true
sameColor' yellow yellow = true
sameColor' green green = true
sameColor' _ _ = false
Copattern matching¶
Functions that return an element of a record type can use copatterns to give the individual fields. The coverage check will ensure that there is a single case for each field of the record type. For example:
record Person : Set where
field
name : String
age : Nat
open Person
bob : Person
name bob = "Bob"
age bob = 25
Absurd copatterns or wildcard copatterns are not supported.
Matching on indexed datatypes¶
When a function definition matches on an argument of an indexed datatype, the following conditions should be satisfied:
 For each clause that matches on a constructor pattern c u₁ … uₙ, the indices of the type of the pattern should be unifiable with the indices of the datatype being matched on.
 For each constructor c that does not appear in a clause, unification of the indices of the type of the constructor with the indices of the datatype should end in a conflict.
For example, consider the definition of the head
function on
vectors:
data Vec (A : Set) : Nat → Set where
[] : Vec A 0
_∷_ : ∀ {n} → A → Vec A n → Vec A (suc n)
head : ∀ {A m} → Vec A (suc m) → A
head (x ∷ xs) = x
The type of the pattern x ∷ xs is Vec A (suc n), which is unifiable with the type Vec A (suc m). Meanwhile, unification of the type Vec A 0 of the constructor [] with the type Vec A (suc n) results in a conflict between 0 and suc n, so there is no case for [].
In case a function matches on several arguments and one or more of
them are of indexed datatypes, only those combinations of arguments
should be considered where the indices do not lead to a conflict. For
example, consider the zipWith
function on vectors:
zipWith : ∀ {A B C m} → (A → B → C) → Vec A m → Vec B m → Vec C m
zipWith f [] [] = []
zipWith f (x ∷ xs) (y ∷ ys) = f x y ∷ zipWith f xs ys
Since both input vectors have the same length m, there are no cases for the combinations where one vector has length 0 and the other has length suc n.
In the special case where unification ends in a conflict for all constructors, there should be a single absurd clause (as for an empty type). For example:
data Fin : Nat → Set where
zero : ∀ {n} → Fin (suc n)
suc : ∀ {n} → Fin n → Fin (suc n)
nofinzero : Fin 0 → ⊥
nofinzero ()
In many common cases, absurd clauses may be omitted as long as the
remaining clauses reveal sufficient information to indicate what
arguments to case split on. As an example, consider the definition of
the lookup
function for vectors:
lookup : ∀ {A} {n} → Vec A n → Fin n → A
lookup [] ()
lookup (x ∷ xs) zero = x
lookup (x ∷ xs) (suc i) = lookup xs i
This definition pattern matches on both its (explicit) arguments in both the absurd clause and the two regular clauses. Hence it is allowed to leave out the absurd clause from the definition:
lookup' : ∀ {A} {n} → Vec A n → Fin n → A
lookup' (x ∷ xs) zero = x
lookup' (x ∷ xs) (suc i) = lookup' xs i
Refer to the next section for a precise explanation of when an absurd clause may be omitted.
General case¶
In the general case, the coverage checker constructs a case tree from the definition given by the user. It then ensures that the following properties are satisfied:
 The nonabsurd clauses of a definition should arise as the leaves of the case tree.
 The absurd clauses of a definition should arise as the internal nodes of the case tree that have no children.
 Absurd clauses may be omitted if removing the corresponding internal nodes from the case tree does not result in other internal nodes becoming childless.
 Nonabsurd clauses may be replaced by catchall clauses if (1) the patterns of those catchall clauses are more general than the omitted clauses, (2) the added catchall clauses are not more general than any of the clauses that follow it, and (3) removing the leaves corresponding to the omitted clauses does not result in any internal nodes becoming childless.
As an example, consider the case tree for the definition of the
lookup
function defined above:
lookup xs i = case xs of
[] → case i of {}
(x ∷ xs) → case i of
zero → x
(suc i) → lookup xs i
The absurd clause arises from the case split on i in the branch where xs = [], which leads to zero cases. The two normal clauses arise from the two leaves of the case tree. If the case [] → case i of {} is removed from the case tree, all the remaining internal nodes still have at least one child, hence the absurd clause may be left out of the definition.
For a full formal description of the algorithm that Agda uses to construct a case tree and check coverage of definitions by pattern matching, refer to the article Elaborating dependent (co)pattern matching: No pattern left behind.
Cubical¶
The Cubical mode extends Agda with a variety of features from Cubical Type Theory, in particular computational univalence and higher inductive types, which give computational meaning to Homotopy Type Theory and Univalent Foundations. The version of Cubical Type Theory that Agda implements is a variation of the CCHM Cubical Type Theory where the Kan composition operations are decomposed into homogeneous composition and generalized transport. This is what makes the general schema for higher inductive types work, following the CHM paper.
To use the cubical mode, Agda needs to be run with the --cubical command-line option or with {-# OPTIONS --cubical #-} at the top of the file.
The cubical mode adds the following features to Agda:
 An interval type and path types
 Generalized transport (transp)
 Partial elements
 Homogeneous composition (hcomp)
 Glue types
 Higher inductive types
 Cubical identity types
There is a standard agda/cubical library for Cubical Agda available at https://github.com/agda/cubical. This documentation uses the naming conventions of this library; for a detailed list of all of the builtin Cubical Agda files and primitives, see Appendix: Cubical Agda primitives. The main design choices of the core part of the library are explained in https://homotopytypetheory.org/2018/12/06/cubical-agda/ (lagda rendered version: https://ice1000.org/2018/12-06-CubicalAgda.html).
The recommended way to get access to the Cubical primitives is to add
the following to the top of a file (this assumes that the
agda/cubical
library is installed and visible to Agda).
{-# OPTIONS --cubical #-}
open import Cubical.Core.Everything
For detailed install instructions for agda/cubical see: https://github.com/agda/cubical/blob/master/INSTALL.md. In order to make this library visible to Agda, add /path/to/cubical/cubical.agda-lib to .agda/libraries and cubical to .agda/defaults (where /path/to is the absolute path to where the agda/cubical library has been installed). For details of Agda’s library management see Library Management.
Expert users who do not want to rely on agda/cubical
can just add
the relevant import statements at the top of their file (for details
see Appendix: Cubical Agda primitives). However, for beginners it is
recommended that one uses at least the core part of the
agda/cubical
library.
There is also an older version of the library available at https://github.com/Saizan/cubical-demo/. However, it relies on deprecated features and its use is not recommended.
The interval and path types¶
The key idea of Cubical Type Theory is to add an interval type I : SSet (the reason this is in SSet is because it doesn’t support the transp and hcomp operations). A variable i : I intuitively corresponds to a point in the real unit interval. In an empty context, there are only two values of type I: the two endpoints of the interval, i0 and i1.
i0 : I
i1 : I
Elements of the interval form a De Morgan algebra, with minimum (∧), maximum (∨) and negation (~).
_∧_ : I → I → I
_∨_ : I → I → I
~_ : I → I
All the properties of De Morgan algebras hold definitionally. The
endpoints of the interval i0
and i1
are the bottom and top
elements, respectively.
i0 ∨ i = i
i ∨ i1 = i1
i ∨ j = j ∨ i
i0 ∧ i = i0
i1 ∧ i = i
i ∧ j = j ∧ i
~ (~ i) = i
i0 = ~ i1
~ (i ∨ j) = ~ i ∧ ~ j
~ (i ∧ j) = ~ i ∨ ~ j
The core idea of Homotopy Type Theory and Univalent Foundations is a correspondence between paths (as in topology) and (proof-relevant) equalities (as in Martin-Löf’s identity type). This correspondence is taken very literally in Cubical Agda, where a path in a type A is represented as a function out of the interval, I → A. A path type is in fact a special case of the more general builtin heterogeneous path types:
PathP : ∀ {ℓ} (A : I → Set ℓ) → A i0 → A i1 → Set ℓ

-- Non-dependent path types
Path : ∀ {ℓ} (A : Set ℓ) → A → A → Set ℓ
Path A a b = PathP (λ _ → A) a b
The central notion of equality in Cubical Agda is hence heterogeneous equality (in the sense of PathOver in HoTT). To define paths we use λ-abstractions and to apply them we use regular application. For example, this is the definition of the constant path (or proof of reflexivity):
refl : ∀ {ℓ} {A : Set ℓ} {x : A} → Path A x x
refl {x = x} = λ i → x
Although they use the same syntax, a path is not exactly the same as a function. For example, the following is not valid:
refl : ∀ {ℓ} {A : Set ℓ} {x : A} → Path A x x
refl {x = x} = λ (i : I) → x
Because of the intuition that paths correspond to equality, PathP (λ i → A) x y gets printed as x ≡ y when A does not mention i. By iterating the path type we can define squares, cubes, and higher cubes in Agda, making the type theory cubical. For example, a square in A is built out of 4 points and 4 lines:
Square : ∀ {ℓ} {A : Set ℓ} {x0 x1 y0 y1 : A} →
x0 ≡ x1 → y0 ≡ y1 → x0 ≡ y0 → x1 ≡ y1 → Set ℓ
Square p q r s = PathP (λ i → p i ≡ q i) r s
Viewing equalities as functions out of the interval makes it possible to do a lot of equality reasoning in a very direct way:
sym : ∀ {ℓ} {A : Set ℓ} {x y : A} → x ≡ y → y ≡ x
sym p = λ i → p (~ i)
cong : ∀ {ℓ} {A : Set ℓ} {x y : A} {B : A → Set ℓ} (f : (a : A) → B a) (p : x ≡ y)
→ PathP (λ i → B (p i)) (f x) (f y)
cong f p i = f (p i)
Because of the way functions compute these satisfy some new definitional equalities compared to the standard Agda definitions:
symInv : ∀ {ℓ} {A : Set ℓ} {x y : A} (p : x ≡ y) → sym (sym p) ≡ p
symInv p = refl
congId : ∀ {ℓ} {A : Set ℓ} {x y : A} (p : x ≡ y) → cong (λ a → a) p ≡ p
congId p = refl
congComp : ∀ {ℓ} {A B C : Set ℓ} (f : A → B) (g : B → C) {x y : A} (p : x ≡ y) →
cong (λ a → g (f a)) p ≡ cong g (cong f p)
congComp f g p = refl
Path types also let us prove new things that are not provable in standard Agda; for example, function extensionality (pointwise equal functions are equal) has an extremely simple proof:
funExt : ∀ {ℓ} {A : Set ℓ} {B : A → Set ℓ} {f g : (x : A) → B x} →
((x : A) → f x ≡ g x) → f ≡ g
funExt p i x = p x i
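For instance, funExt applied to a family of reflexivity proofs yields reflexivity for the function itself (a small sketch; the name funExtRefl is ours):

```agda
-- Hypothetical example: pointwise refl gives refl on functions.
funExtRefl : ∀ {ℓ} {A : Set ℓ} {B : A → Set ℓ} (f : (x : A) → B x) → f ≡ f
funExtRefl f = funExt (λ x → refl)
```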
Transport¶
While path types are great for reasoning about equality, they don't let us transport along paths between types or even compose paths, which in particular means that we cannot yet prove the induction principle for paths. In order to remedy this we also have a built-in (generalized) transport operation transp and homogeneous composition operations hcomp. The transport operation is generalized in the sense that it lets us specify where it is the identity function.
transp : ∀ {ℓ} (A : I → Set ℓ) (r : I) (a : A i0) → A i1
There is an additional side condition to be satisfied for a usage of transp to typecheck: A should be a constant function whenever the constraint r = i1 is satisfied. By constant here we mean that A is definitionally equal to λ _ → A i0, which in turn requires A i0 and A i1 to be definitionally equal as well. When r is i1, transp A r will compute as the identity function:
transp A i1 a = a
This is only sound if in such a case A is a trivial path, as the side condition requires.
It might seem strange that the side condition expects r and A to interact, but both of them can depend on any of the interval variables in scope, so assuming a specific value for r can affect what A looks like. Some examples of the side condition for different values of r:
- If r is some in-scope variable i, on which A may depend as well, then A only needs to be a constant function when substituting i1 for i.
- If r is i0, then there is no restriction on A, since the side condition is vacuously true.
- If r is i1, then A must be a constant function.
We can use transp to define regular transport:
transport : ∀ {ℓ} {A B : Set ℓ} → A ≡ B → A → B
transport p a = transp (λ i → p i) i0 a
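From transport we can also derive the familiar substitution principle, sketched here under the name subst (not defined in this section; the statement follows the standard cubical library):

```agda
-- Substitution in a family P along a path: transport in the family
-- obtained by applying P along the path.
subst : ∀ {ℓ} {A : Set ℓ} (P : A → Set ℓ) {x y : A} → x ≡ y → P x → P y
subst P p = transport (λ i → P (p i))
```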
By combining transport and the min operation (_∧_) on the interval we can define the induction principle for paths:
J : ∀ {ℓ} {A : Set ℓ} {x : A} (P : ∀ y → x ≡ y → Set ℓ)
(d : P x refl) {y : A} (p : x ≡ y)
→ P y p
J P d p = transport (λ i → P (p i) (λ j → p (i ∧ j))) d
One subtle difference between paths and the propositional equality type of Agda is that the computation rule for J does not hold definitionally. If J is defined using pattern matching as in the Agda standard library then this holds; however, as the path types are not inductively defined, this does not hold for the above definition of J. In particular, transport in a constant family is only the identity function up to a path, which implies that the computation rule for J only holds up to a path:
transportRefl : ∀ {ℓ} {A : Set ℓ} (x : A) → transport refl x ≡ x
transportRefl {A = A} x i = transp (λ _ → A) i x
JRefl : ∀ {ℓ} {A : Set ℓ} {x : A} (P : ∀ y → x ≡ y → Set ℓ)
(d : P x refl) → J P d refl ≡ d
JRefl P d = transportRefl d
Internally in Agda the transp operation computes by cases on the type, so for example for Σ-types it is computed elementwise. For path types it is however not yet possible to provide the computation rule, as we need some way to remember the endpoints of the path after transporting it. Furthermore, this must work for arbitrary higher-dimensional cubes (as we can iterate the path types). For this we introduce the "homogeneous composition operations" (hcomp) that generalize binary composition of paths to n-ary composition of higher-dimensional cubes.
Partial elements¶
In order to describe the homogeneous composition operations we need to be able to write partially specified n-dimensional cubes (i.e. cubes where some faces are missing). Given an element of the interval r : I there is a predicate IsOne which represents the constraint r = i1. This comes with a proof that i1 is in fact equal to i1, called 1=1 : IsOne i1. We use Greek letters like φ or ψ when such an r should be thought of as being in the domain of IsOne.
Using this we introduce a type of partial elements called Partial φ A. This is a special version of IsOne φ → A with a more extensional judgmental equality: two elements of Partial φ A are considered equal if they represent the same subcube, so the faces of the cubes can for example be given in a different order and the two elements will still be considered the same. The idea is that Partial φ A is the type of cubes in A that are only defined when IsOne φ. There is also a dependent version of this called PartialP φ A which allows A itself to be defined only when IsOne φ.
Partial : ∀ {ℓ} → I → Set ℓ → SSet ℓ
PartialP : ∀ {ℓ} → (φ : I) → Partial φ (Set ℓ) → SSet ℓ
There is a new form of pattern matching that can be used to introduce partial elements:
partialBool : ∀ i → Partial (i ∨ ~ i) Bool
partialBool i (i = i0) = true
partialBool i (i = i1) = false
The term partialBool i should be thought of as a boolean with different values when (i = i0) and (i = i1). Terms of type Partial φ A can also be introduced using a pattern matching lambda.
partialBool' : ∀ i → Partial (i ∨ ~ i) Bool
partialBool' i = λ { (i = i0) → true
; (i = i1) → false }
When the cases overlap they must agree (note that the order of the cases doesn’t have to match the interval formula exactly):
partialBool'' : ∀ i j → Partial (~ i ∨ i ∨ (i ∧ j)) Bool
partialBool'' i j = λ { (i = i1) → true
; (i = i1) (j = i1) → true
; (i = i0) → false }
Furthermore, IsOne i0 is actually absurd.
empty : {A : Set} → Partial i0 A
empty = λ { () }
Cubical Agda also has cubical subtypes as in the CCHM type theory:
_[_↦_] : ∀ {ℓ} (A : Set ℓ) (φ : I) (u : Partial φ A) → SSet ℓ
A [ φ ↦ u ] = Sub A φ u
A term v : A [ φ ↦ u ] should be thought of as a term of type A which is definitionally equal to u : A when IsOne φ is satisfied. Any term u : A can be seen as a term of A [ φ ↦ u ] which agrees with itself on φ:
inS : ∀ {ℓ} {A : Set ℓ} {φ : I} (u : A) → A [ φ ↦ (λ _ → u) ]
One can also forget that a partial element agrees with u on φ:
outS : ∀ {ℓ} {A : Set ℓ} {φ : I} {u : Partial φ A} → A [ φ ↦ u ] → A
They satisfy the following equalities:
outS (inS a) = a
inS {φ = φ} (outS {φ = φ} a) = a
outS {φ = i1} {u} _ = u 1=1
Note that given a : A [ φ ↦ u ] and α : IsOne φ, it is not the case that outS a = u α; however, underneath the pattern binding (φ = i1), one has outS a = u 1=1.
With all of this cubical infrastructure we can now describe the hcomp operations.
Homogeneous composition¶
The homogeneous composition operations generalize binary composition of paths so that we can compose multiple composable cubes.
hcomp : ∀ {ℓ} {A : Set ℓ} {φ : I} (u : I → Partial φ A) (u0 : A) → A
When calling hcomp {φ = φ} u u0, Agda makes sure that u0 agrees with u i0 on φ. The idea is that u0 is the base and u specifies the sides of an open box. This is hence an open (higher-dimensional) cube where the side opposite of u0 is missing. The hcomp operation then gives us the missing side opposite of u0. For example, binary composition of paths can be written as:
compPath : ∀ {ℓ} {A : Set ℓ} {x y z : A} → x ≡ y → y ≡ z → x ≡ z
compPath {x = x} p q i = hcomp (λ j → λ { (i = i0) → x
; (i = i1) → q j })
(p i)
Pictorially we are given p : x ≡ y and q : y ≡ z, and the composite of the two paths is obtained by computing the missing lid of this open square:

      x             z
      ^             ^
      |             |
    x |             | q j
      |             |
      x ----------> y
            p i
In the drawing the direction i goes left-to-right and j goes bottom-to-top. As we are constructing a path from x to z along i, we have i : I in the context already and we put p i as the bottom. The direction j that we are doing the composition in is abstracted in the first argument to hcomp.
Note that the partial element u does not have to specify all the sides of the open box; giving more sides simply gives you more control over the result of hcomp. For example, if we omit the (i = i0) → x side in the definition of compPath we still get a valid term of type A. However, that term would reduce to hcomp (\ j → \ { () }) x when i = i0, and so that definition would not build a path that starts from x.
We can also define homogeneous filling of cubes as
hfill : ∀ {ℓ} {A : Set ℓ} {φ : I}
(u : ∀ i → Partial φ A) (u0 : A [ φ ↦ u i0 ])
(i : I) → A
hfill {φ = φ} u u0 i = hcomp (λ j → λ { (φ = i1) → u (i ∧ j) 1=1
; (i = i0) → outS u0 })
(outS u0)
When i is i0 this is u0, and when i is i1 this is hcomp u u0. This can hence be seen as giving us the interior of an open box. In the special case of the square above, hfill gives us a direct cubical proof that composing p with refl is p.
compPathRefl : ∀ {ℓ} {A : Set ℓ} {x y : A} (p : x ≡ y) → compPath p refl ≡ p
compPathRefl {x = x} {y = y} p j i = hfill (λ _ → λ { (i = i0) → x
; (i = i1) → y })
(inS (p i))
(~ j)
Glue types¶
In order to be able to prove the univalence theorem we also have to add "Glue" types. These let us turn equivalences between types into paths between types. An equivalence of types A and B is defined as a map f : A → B such that its fibers are contractible.
fiber : ∀ {ℓ} {A B : Set ℓ} (f : A → B) (y : B) → Set ℓ
fiber {A = A} f y = Σ[ x ∈ A ] f x ≡ y
isContr : ∀ {ℓ} → Set ℓ → Set ℓ
isContr A = Σ[ x ∈ A ] (∀ y → x ≡ y)
record isEquiv {ℓ} {A B : Set ℓ} (f : A → B) : Set ℓ where
  field
    equiv-proof : (y : B) → isContr (fiber f y)

_≃_ : ∀ {ℓ} (A B : Set ℓ) → Set ℓ
A ≃ B = Σ[ f ∈ (A → B) ] (isEquiv f)
The simplest example of an equivalence is the identity function.
idfun : ∀ {ℓ} → (A : Set ℓ) → A → A
idfun _ x = x

idIsEquiv : ∀ {ℓ} (A : Set ℓ) → isEquiv (idfun A)
equiv-proof (idIsEquiv A) y =
  ((y , refl) , λ z i → z .snd (~ i) , λ j → z .snd (~ i ∨ j))

idEquiv : ∀ {ℓ} (A : Set ℓ) → A ≃ A
idEquiv A = (idfun A , idIsEquiv A)
An important special case of equivalent types are isomorphic types (i.e. types with maps going back and forth which are mutually inverse): https://github.com/agda/cubical/blob/master/Cubical/Foundations/Isomorphism.agda.
As everything has to work up to higher dimensions, the Glue types take a partial family of types that are equivalent to the base type A:
Glue : ∀ {ℓ ℓ'} (A : Set ℓ) {φ : I}
→ Partial φ (Σ[ T ∈ Set ℓ' ] T ≃ A) → Set ℓ'
These come with a constructor and eliminator:
glue : ∀ {ℓ ℓ'} {A : Set ℓ} {φ : I} {Te : Partial φ (Σ[ T ∈ Set ℓ' ] T ≃ A)}
→ PartialP φ T → A → Glue A Te
unglue : ∀ {ℓ ℓ'} {A : Set ℓ} (φ : I) {Te : Partial φ (Σ[ T ∈ Set ℓ' ] T ≃ A)}
→ Glue A Te → A
Using Glue types we can turn an equivalence of types into a path as follows:
ua : ∀ {ℓ} {A B : Set ℓ} → A ≃ B → A ≡ B
ua {_} {A} {B} e i = Glue B (λ { (i = i0) → (A , e)
; (i = i1) → (B , idEquiv B) })
The idea is that we glue A together with B when i = i0 using e, and B with itself when i = i1 using the identity equivalence. This hence gives us the key part of univalence: a function for turning equivalences into paths. The other part of univalence is that this map itself is an equivalence, which follows from the computation rule for ua:
uaβ : ∀ {ℓ} {A B : Set ℓ} (e : A ≃ B) (x : A) → transport (ua e) x ≡ e .fst x
uaβ e x = transportRefl (e .fst x)
Transporting along the path that we get from applying ua
to an
equivalence is hence the same as applying the equivalence. This is
what makes it possible to use the univalence axiom computationally in
Cubical Agda: we can package up our equivalences as paths, do equality
reasoning using these paths, and in the end transport along the paths
in order to compute with the equivalences.
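As a concrete sketch of this workflow, assume an equivalence notEquiv : Bool ≃ Bool built from boolean negation (not constructed in this section); transporting along ua notEquiv then actually computes negation:

```agda
-- Hypothetical example, assuming notEquiv : Bool ≃ Bool with
-- notEquiv .fst true = false.  uaβ computes the transport for us.
_ : transport (ua notEquiv) true ≡ false
_ = uaβ notEquiv true
```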
We have the following equalities:
Glue A {i1} Te = Te 1=1 .fst
unglue φ (glue t a) = a
glue (\ { (φ = i1) → g }) (unglue φ g) = g
unglue i1 {Te} g = Te 1=1 .snd .fst g
glue {φ = i1} t a = t 1=1
For more results about Glue types and univalence see https://github.com/agda/cubical/blob/master/Cubical/Core/Glue.agda and https://github.com/agda/cubical/blob/master/Cubical/Foundations/Univalence.agda. For some examples of what can be done with this for working with binary and unary numbers see https://github.com/agda/cubical/blob/master/Cubical/Data/BinNat/BinNat.agda.
Higher inductive types¶
Cubical Agda also lets us directly define higher inductive types as datatypes with path constructors. For example the circle and torus can be defined as:
data S¹ : Set where
base : S¹
loop : base ≡ base
data Torus : Set where
point : Torus
line1 : point ≡ point
line2 : point ≡ point
square : PathP (λ i → line1 i ≡ line1 i) line2 line2
Functions out of higher inductive types can then be defined using pattern matching:
t2c : Torus → S¹ × S¹
t2c point = (base , base)
t2c (line1 i) = (loop i , base)
t2c (line2 j) = (base , loop j)
t2c (square i j) = (loop i , loop j)
c2t : S¹ × S¹ → Torus
c2t (base , base) = point
c2t (loop i , base) = line1 i
c2t (base , loop j) = line2 j
c2t (loop i , loop j) = square i j
When giving the cases for the path and square constructors we have to make sure that the function maps the boundary to the right thing. For instance, the following definition does not pass Agda's typechecker, as the boundary of the last case does not match up with the expected boundary of the square constructor (the line1 and line2 cases are mixed up).
c2t_bad : S¹ × S¹ → Torus
c2t_bad (base , base) = point
c2t_bad (loop i , base) = line2 i
c2t_bad (base , loop j) = line1 j
c2t_bad (loop i , loop j) = square i j
Functions defined by pattern matching on higher inductive types compute definitionally, for all constructors.
c2t-t2c : ∀ (t : Torus) → c2t (t2c t) ≡ t
c2t-t2c point        = refl
c2t-t2c (line1 _)    = refl
c2t-t2c (line2 _)    = refl
c2t-t2c (square _ _) = refl

t2c-c2t : ∀ (p : S¹ × S¹) → t2c (c2t p) ≡ p
t2c-c2t (base   , base)   = refl
t2c-c2t (base   , loop _) = refl
t2c-c2t (loop _ , base)   = refl
t2c-c2t (loop _ , loop _) = refl
By turning this isomorphism into an equivalence we get a direct proof that the torus is equal to two circles.
Torus≡S¹×S¹ : Torus ≡ S¹ × S¹
Torus≡S¹×S¹ = isoToPath (iso t2c c2t t2c-c2t c2t-t2c)
Cubical Agda also supports parameterized and recursive higher inductive types; for example, propositional truncation (squash types) is defined as:
data ∥_∥ {ℓ} (A : Set ℓ) : Set ℓ where
∣_∣ : A → ∥ A ∥
squash : ∀ (x y : ∥ A ∥) → x ≡ y
isProp : ∀ {ℓ} → Set ℓ → Set ℓ
isProp A = (x y : A) → x ≡ y
recPropTrunc : ∀ {ℓ} {A : Set ℓ} {P : Set ℓ} → isProp P → (A → P) → ∥ A ∥ → P
recPropTrunc Pprop f ∣ x ∣ = f x
recPropTrunc Pprop f (squash x y i) =
Pprop (recPropTrunc Pprop f x) (recPropTrunc Pprop f y) i
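As a small complementary example, the squash constructor directly shows that the truncation itself is a proposition:

```agda
-- The propositional truncation of any type is a proposition,
-- immediately by the squash constructor.
∥∥-isProp : ∀ {ℓ} {A : Set ℓ} → isProp ∥ A ∥
∥∥-isProp x y = squash x y
```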
For many more examples of higher inductive types see: https://github.com/agda/cubical/tree/master/Cubical/HITs.
Cubical identity types and computational HoTT/UF¶
As mentioned above, the computation rule for J does not hold definitionally for path types. Cubical Agda solves this by introducing a cubical identity type. The https://github.com/agda/cubical/blob/master/Cubical/Core/Id.agda file exports all of the primitives for this type, including the notation _≡_ and a J eliminator that computes definitionally on refl.
The cubical identity type and the path type are equivalent, so all of the results for one can be transported to the other (using univalence). Using this we have implemented an interface to HoTT/UF which provides the user with the key primitives of Homotopy Type Theory and Univalent Foundations, implemented using cubical primitives under the hood. This hence gives an axiom-free version of HoTT/UF which computes properly.
module Cubical.Core.HoTTUF where

open import Cubical.Core.Id public
  using ( _≡_           -- The identity type.
        ; refl          -- Unfortunately, pattern matching on refl is not available.
        ; J             -- Until it is, you have to use the induction principle J.

        ; transport     -- As in the HoTT Book.
        ; ap
        ; _∙_
        ; _⁻¹

        ; _≡⟨_⟩_        -- Standard equational reasoning.
        ; _∎

        ; funExt        -- Function extensionality
                        -- (can also be derived from univalence).

        ; Σ             -- Sum type. Needed to define contractible types, equivalences
        ; _,_           -- and univalence.
        ; pr₁           -- The eta rule is available.
        ; pr₂

        ; isProp        -- The usual notions of proposition, contractible type, set.
        ; isContr
        ; isSet

        ; isEquiv       -- A map with contractible fibers
                        -- (Voevodsky's version of the notion).
        ; _≃_           -- The type of equivalences between two given types.
        ; EquivContr    -- A formulation of univalence.

        ; ∥_∥           -- Propositional truncation.
        ; ∣_∣           -- Map into the propositional truncation.
        ; ∥∥-isProp     -- A truncated type is a proposition.
        ; ∥∥-recursion  -- Nondependent elimination.
        ; ∥∥-induction  -- Dependent elimination.
        )
In order to get access to only the HoTT/UF primitives start a file as follows:
{-# OPTIONS --cubical #-}
open import Cubical.Core.HoTTUF
However, even though this interface exists it is still recommended that one use the path types unless one really needs J to compute on refl. The reason for this is that the syntax for path types does not work for the identity types, making many proofs more involved, as the only way to reason about them is using J. Furthermore, the path types satisfy many useful definitional equalities that the identity types don't.
References¶
Cyril Cohen, Thierry Coquand, Simon Huber and Anders Mörtberg; “Cubical Type Theory: a constructive interpretation of the univalence axiom”.
Thierry Coquand, Simon Huber, Anders Mörtberg; “On Higher Inductive Types in Cubical Type Theory”.
Appendix: Cubical Agda primitives¶
The Cubical Agda primitives and internals are exported by a series of files found in the lib/prim/Agda/Builtin/Cubical directory of Agda. The agda/cubical library exports all of these primitives with the names used throughout this document. Experts might find it useful to know what is actually exported, as there are quite a few primitives available that are not exported by agda/cubical, so the goal of this section is to list the contents of these files. However, for regular users and beginners the agda/cubical library should be sufficient and this section can safely be ignored.
The key file with primitives is Agda.Primitive.Cubical. It exports the following BUILTIN pragmas, primitives and postulates:
{-# BUILTIN INTERVAL I #-}  -- I : SSet
{-# BUILTIN IZERO i0 #-}
{-# BUILTIN IONE i1 #-}
infix 30 primINeg
infixr 20 primIMin primIMax
primitive
  primIMin : I → I → I  -- _∧_
  primIMax : I → I → I  -- _∨_
  primINeg : I → I      -- ~_
{-# BUILTIN ISONE IsOne #-}  -- IsOne : I → SSet
postulate
  itIsOne : IsOne i1  -- 1=1
  IsOne1 : ∀ i j → IsOne i → IsOne (primIMax i j)
  IsOne2 : ∀ i j → IsOne j → IsOne (primIMax i j)
{-# BUILTIN ITISONE itIsOne #-}
{-# BUILTIN ISONE1 IsOne1 #-}
{-# BUILTIN ISONE2 IsOne2 #-}
{-# BUILTIN PARTIAL Partial #-}
{-# BUILTIN PARTIALP PartialP #-}
postulate
  isOneEmpty : ∀ {a} {A : Partial i0 (Set a)} → PartialP i0 A

{-# BUILTIN ISONEEMPTY isOneEmpty #-}
primitive
  primPOr : ∀ {a} (i j : I) {A : Partial (primIMax i j) (Set a)}
            → PartialP i (λ z → A (IsOne1 i j z)) → PartialP j (λ z → A (IsOne2 i j z))
            → PartialP (primIMax i j) A

  -- Computes in terms of primHComp and primTransp
  primComp : ∀ {a} (A : (i : I) → Set (a i)) {φ : I} → (∀ i → Partial φ (A i)) → (a : A i0) → A i1

syntax primPOr p q u t = [ p ↦ u , q ↦ t ]
primitive
  primTransp : ∀ {a} (A : (i : I) → Set (a i)) (φ : I) → (a : A i0) → A i1
  primHComp : ∀ {a} {A : Set a} {φ : I} → (∀ i → Partial φ A) → A → A
The Path types are exported by Agda.Builtin.Cubical.Path:
postulate
  PathP : ∀ {ℓ} (A : I → Set ℓ) → A i0 → A i1 → Set ℓ

{-# BUILTIN PATHP PathP #-}

infix 4 _≡_
_≡_ : ∀ {ℓ} {A : Set ℓ} → A → A → Set ℓ
_≡_ {A = A} = PathP (λ _ → A)

{-# BUILTIN PATH _≡_ #-}
The Cubical subtypes are exported by Agda.Builtin.Cubical.Sub:
{-# BUILTIN SUB Sub #-}

postulate
  inc : ∀ {ℓ} {A : Set ℓ} {φ} (x : A) → Sub A φ (λ _ → x)

{-# BUILTIN SUBIN inS #-}

primitive
  primSubOut : ∀ {ℓ} {A : Set ℓ} {φ : I} {u : Partial φ A} → Sub _ φ u → A
The Glue types are exported by Agda.Builtin.Cubical.Glue:
record isEquiv {ℓ ℓ'} {A : Set ℓ} {B : Set ℓ'} (f : A → B) : Set (ℓ ⊔ ℓ') where
  field
    equiv-proof : (y : B) → isContr (fiber f y)

infix 4 _≃_

_≃_ : ∀ {ℓ ℓ'} (A : Set ℓ) (B : Set ℓ') → Set (ℓ ⊔ ℓ')
A ≃ B = Σ (A → B) \ f → (isEquiv f)

equivFun : ∀ {ℓ ℓ'} {A : Set ℓ} {B : Set ℓ'} → A ≃ B → A → B
equivFun e = fst e

equivProof : ∀ {la lt} (T : Set la) (A : Set lt) → (w : T ≃ A) → (a : A)
             → ∀ ψ → (Partial ψ (fiber (w .fst) a)) → fiber (w .fst) a
equivProof A B w a ψ fb = contr' {A = fiber (w .fst) a} (w .snd .equiv-proof a) ψ fb
  where
    contr' : ∀ {ℓ} {A : Set ℓ} → isContr A → (φ : I) → (u : Partial φ A) → A
    contr' {A = A} (c , p) φ u = hcomp (λ i → λ { (φ = i1) → p (u 1=1) i
                                                ; (φ = i0) → c }) c

{-# BUILTIN EQUIV _≃_ #-}
{-# BUILTIN EQUIVFUN equivFun #-}
{-# BUILTIN EQUIVPROOF equivProof #-}

primitive
  primGlue : ∀ {ℓ ℓ'} (A : Set ℓ) {φ : I}
    → (T : Partial φ (Set ℓ')) → (e : PartialP φ (λ o → T o ≃ A))
    → Set ℓ'
  prim^glue : ∀ {ℓ ℓ'} {A : Set ℓ} {φ : I}
    → {T : Partial φ (Set ℓ')} → {e : PartialP φ (λ o → T o ≃ A)}
    → PartialP φ T → A → primGlue A T e
  prim^unglue : ∀ {ℓ ℓ'} {A : Set ℓ} {φ : I}
    → {T : Partial φ (Set ℓ')} → {e : PartialP φ (λ o → T o ≃ A)}
    → primGlue A T e → A

  primFaceForall : (I → I) → I

-- pathToEquiv proves that transport is an equivalence (for details
-- see Agda.Builtin.Cubical.Glue). This is needed internally.
{-# BUILTIN PATHTOEQUIV pathToEquiv #-}
Note that the Glue types are uncurried in agda/cubical to make them more pleasant to use:
Glue : ∀ {ℓ ℓ'} (A : Set ℓ) {φ : I}
→ (Te : Partial φ (Σ[ T ∈ Set ℓ' ] T ≃ A))
→ Set ℓ'
Glue A Te = primGlue A (λ x → Te x .fst) (λ x → Te x .snd)
The Agda.Builtin.Cubical.Id module exports the cubical identity types:
postulate
  Id : ∀ {ℓ} {A : Set ℓ} → A → A → Set ℓ

{-# BUILTIN ID Id #-}
{-# BUILTIN CONID conid #-}
primitive
  primDepIMin : _
  primIdFace : ∀ {ℓ} {A : Set ℓ} {x y : A} → Id x y → I
  primIdPath : ∀ {ℓ} {A : Set ℓ} {x y : A} → Id x y → x ≡ y

primitive
  primIdJ : ∀ {ℓ ℓ'} {A : Set ℓ} {x : A} (P : ∀ y → Id x y → Set ℓ') →
            P x (conid i1 (λ i → x)) → ∀ {y} (p : Id x y) → P y p

primitive
  primIdElim : ∀ {a c} {A : Set a} {x : A}
                 (C : (y : A) → Id x y → Set c) →
                 ((φ : I) (y : A [ φ ↦ (λ _ → x) ])
                  (w : (x ≡ outS y) [ φ ↦ (λ { (φ = i1) → \ _ → x }) ]) →
                  C (outS y) (conid φ (outS w))) →
                 {y : A} (p : Id x y) → C y p
Cumulativity¶
Basics¶
Since version 2.6.1, Agda supports optional cumulativity of universes under the --cumulativity flag.
{-# OPTIONS --cumulativity #-}
When the --cumulativity flag is enabled, Agda uses the subtyping rule Set i =< Set j whenever i =< j. For example, in addition to its usual type Set, Nat also has the type Set₁ and even Set i for any i : Level.
_ : Set
_ = Nat
_ : Set₁
_ = Nat
_ : ∀ {i} → Set i
_ = Nat
With cumulativity enabled, one can implement lifting to a higher universe as the identity function.
lift : ∀ {a b} → Set a → Set (a ⊔ b)
lift x = x
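For example, this lift specializes as follows (a sketch; the name Nat₂ is ours, and lzero/lsuc are assumed to be in scope from Agda.Primitive):

```agda
-- Lifting Nat from Set to Set₂ is just the identity under cumulativity;
-- the level arguments pick out the target universe.
Nat₂ : Set₂
Nat₂ = lift {lzero} {lsuc (lsuc lzero)} Nat
```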
Example usage: Nary functions¶
In Agda without cumulativity, it is tricky to define a universe-polymorphic N-ary function type A → A → ... → A → B, because the universe level depends on whether the number of arguments is zero:
module WithoutCumulativity where
  N-ary-level : Level → Level → Nat → Level
  N-ary-level ℓ₁ ℓ₂ zero    = ℓ₂
  N-ary-level ℓ₁ ℓ₂ (suc n) = ℓ₁ ⊔ N-ary-level ℓ₁ ℓ₂ n

  N-ary : ∀ {ℓ₁ ℓ₂} n → Set ℓ₁ → Set ℓ₂ → Set (N-ary-level ℓ₁ ℓ₂ n)
  N-ary zero    A B = B
  N-ary (suc n) A B = A → N-ary n A B
In contrast, in Agda with cumulativity one can always work with the highest possible universe level. This makes it much easier to define the type of N-ary functions.
module WithCumulativity where
  N-ary : Nat → Set ℓ₁ → Set ℓ₂ → Set (ℓ₁ ⊔ ℓ₂)
  N-ary zero    A B = B
  N-ary (suc n) A B = A → N-ary n A B

  curryⁿ : (Vec A n → B) → N-ary n A B
  curryⁿ {n = zero}  f = f []
  curryⁿ {n = suc n} f = λ x → curryⁿ λ xs → f (x ∷ xs)

  _$ⁿ_ : N-ary n A B → (Vec A n → B)
  f $ⁿ []       = f
  f $ⁿ (x ∷ xs) = f x $ⁿ xs

  ∀ⁿ : ∀ {A : Set ℓ₁} n → N-ary n A (Set ℓ₂) → Set (ℓ₁ ⊔ ℓ₂)
  ∀ⁿ zero    P = P
  ∀ⁿ (suc n) P = ∀ x → ∀ⁿ n (P x)
Limitations¶
Currently cumulativity only enables subtyping between universes, but not between any other types containing universes. For example, List Set is not a subtype of List Set₁. Agda also does not have cumulativity for any other types containing universe levels, so List {lzero} Nat is not a subtype of List {lsuc lzero} Nat. Such rules might be added in a future version of Agda.
Constraint solving¶
When working in Agda with cumulativity, universe level metavariables are often underconstrained. For example, the expression List Nat could mean List {lzero} Nat, but also List {lsuc lzero} Nat, or indeed List {i} Nat for any i : Level.
Currently Agda uses the following heuristic to instantiate universe level metavariables. At the end of each type signature, each mutual block, or each declaration that is not part of a mutual block, Agda instantiates all universe level metavariables that are unbounded from above. A metavariable _l : Level is unbounded from above if all unsolved constraints that mention the metavariable are of the form aᵢ =< _l : Level, and _l does not occur in the type of any other unsolved metavariables. Each metavariable that satisfies these conditions is instantiated to a₁ ⊔ a₂ ⊔ ... ⊔ aₙ, where a₁ =< _l : Level, …, aₙ =< _l : Level are all the constraints that mention _l.
The heuristic as described above is considered experimental and is subject to change in future versions of Agda.
Data Types¶
Simple datatypes¶
Example datatypes¶
In the introduction we already showed the definition of the data type of natural numbers (in unary notation):
data Nat : Set where
zero : Nat
suc : Nat → Nat
We give a few more examples. First the data type of truth values:
data Bool : Set where
true : Bool
false : Bool
The True set represents the trivially true proposition:
data True : Set where
tt : True
The False set has no constructor and hence no elements. It represents the trivially false proposition:
data False : Set where
Another example is the data type of nonempty binary trees with natural numbers in the leaves:
data BinTree : Set where
leaf : Nat → BinTree
branch : BinTree → BinTree → BinTree
Finally, the data type of Brouwer ordinals:
data Ord : Set where
zeroOrd : Ord
sucOrd : Ord → Ord
limOrd : (Nat → Ord) → Ord
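Functions over such datatypes are defined by pattern matching; for instance, counting the leaves of a BinTree (a sketch, assuming an addition function _+_ on Nat is in scope):

```agda
-- Number of leaves in a non-empty binary tree.
-- Assumes _+_ : Nat → Nat → Nat from e.g. Agda.Builtin.Nat.
leaves : BinTree → Nat
leaves (leaf _)     = suc zero
leaves (branch l r) = leaves l + leaves r
```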
General form¶
The general form of the definition of a simple datatype D is the following:
data D : Setᵢ where
c₁ : A₁
...
cₙ : Aₙ
The name D of the data type and the names c₁, …, cₙ of the constructors must be new w.r.t. the current signature and context, and the types A₁, …, Aₙ must be function types ending in D, i.e. they must be of the form
(y₁ : B₁) → ... → (yₘ : Bₘ) → D
Parametrized datatypes¶
Datatypes can have parameters. They are declared after the name of the datatype but before the colon, for example:
data List (A : Set) : Set where
[] : List A
_∷_ : A → List A → List A
Indexed datatypes¶
In addition to parameters, datatypes can also have indices. In contrast to parameters, which are required to be the same for all constructors, indices can vary from constructor to constructor. They are declared after the colon as function arguments to Set. For example, fixed-length vectors can be defined by indexing them over their length of type Nat:
data Vector (A : Set) : Nat → Set where
[] : Vector A zero
_∷_ : {n : Nat} → A → Vector A n → Vector A (suc n)
Notice that the parameter A is bound once for all constructors, while the index {n : Nat} must be bound locally in the constructor _∷_.
Indexed datatypes can also be used to describe predicates; for example, the predicate Even : Nat → Set can be defined as follows:
data Even : Nat → Set where
evenzero : Even zero
evenplus2 : {n : Nat} → Even n → Even (suc (suc n))
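A closed proof of such a predicate is built by stacking constructors; for instance, a proof that four is even:

```agda
-- Even 4, using the two constructors of Even:
-- evenzero covers 0, and each evenplus2 steps up by two.
even4 : Even (suc (suc (suc (suc zero))))
even4 = evenplus2 (evenplus2 evenzero)
```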
General form¶
The general form of the definition of a (parametrized, indexed) datatype D is the following:
data D (x₁ : P₁) ... (xₖ : Pₖ) : (y₁ : Q₁) → ... → (yₗ : Qₗ) → Set ℓ where
c₁ : A₁
...
cₙ : Aₙ
where the types A₁, …, Aₙ are function types of the form
(z₁ : B₁) → ... → (zₘ : Bₘ) → D x₁ ... xₖ t₁ ... tₗ
Strict positivity¶
When defining a datatype D, Agda poses an additional requirement on the types of the constructors of D, namely that D may only occur strictly positively in the types of their arguments. Concretely, for a datatype with constructors c₁ : A₁, …, cₙ : Aₙ, Agda checks that each Aᵢ has the form
(y₁ : B₁) → ... → (yₘ : Bₘ) → D
where each argument type Bᵢ of the constructor is either

- non-inductive (a side condition) and does not mention D at all, or
- inductive and of the form (z₁ : C₁) → ... → (zₖ : Cₖ) → D, where D must not occur in any Cⱼ.
The strict positivity condition rules out declarations such as
data Bad : Set where
  bad : (Bad → Bad) → Bad
  --     A      B     C
  -- A is in a negative position, B and C are OK
since there is a negative occurrence of Bad in the type of the argument of the constructor. (Note that the corresponding data type declaration of Bad is allowed in standard functional languages such as Haskell and ML.)
Non-strictly-positive declarations are rejected because they admit non-terminating functions.
If the positivity check is disabled, so that a similar declaration of Bad is allowed, it is possible to construct a term of the empty type, even without recursion.
{-# OPTIONS --no-positivity-check #-}
data ⊥ : Set where
data Bad : Set where
bad : (Bad → ⊥) → Bad
selfapp : Bad → ⊥
selfapp (bad f) = f (bad f)
absurd : ⊥
absurd = selfapp (bad selfapp)
For more general information on termination see Termination Checking.
Flat Modality¶
The flat/crisp attribute @♭/@flat is an idempotent comonadic modality modeled after Spatial Type Theory and Crisp Type Theory. It is similar to a necessity modality.
We can define ♭ A as a type for any (@♭ A : Set l) via an inductive definition:
data ♭ {@♭ l : Level} (@♭ A : Set l) : Set l where
con : (@♭ x : A) → ♭ A
counit : {@♭ l : Level} {@♭ A : Set l} → ♭ A → A
counit (con x) = x
When trying to provide a @♭ argument, only other @♭ variables will be available; the others will be marked as @⊤ in the context. For example, the following will not typecheck:
unit : {@♭ l : Level} {@♭ A : Set l} → A → ♭ A
unit x = con x
Pattern Matching on @♭¶
Agda allows matching on @♭ arguments by default. When matching on a @♭ argument, the flat status gets propagated to the arguments of the constructor:
data _⊎_ (A B : Set) : Set where
inl : A → A ⊎ B
inr : B → A ⊎ B
flatsum : {@♭ A B : Set} → (@♭ x : A ⊎ B) → ♭ A ⊎ ♭ B
flatsum (inl x) = inl (con x)
flatsum (inr x) = inr (con x)
When refining @♭ variables, the equality proof also needs to be provided as @♭:
flatsubst : {@♭ A : Set} {P : A → Set} (@♭ x y : A) (@♭ eq : x ≡ y) → P x → P y
flatsubst x .x refl p = p
If we simply had (eq : x ≡ y) the code would be rejected.
Pattern matching on @♭ arguments can be disabled entirely by using the --no-flat-split flag:
{-# OPTIONS --no-flat-split #-}
Subtyping of flat function spaces¶
Normally, if f : (@♭ x : A) → B, then we have λ x → f x : (x : A) → B but not f : (x : A) → B. When the option --subtyping is enabled, Agda will make use of the subtyping rule (@♭ x : A) → B <: (x : A) → B, so there is no need for eta-expanding the function f.
Foreign Function Interface¶
Compiler Pragmas¶
There are two backendgeneric pragmas used for the FFI:
{-# COMPILE <Backend> <Name> <Text> #-}
{-# FOREIGN <Backend> <Text> #-}
The COMPILE pragma associates some information <Text> with a name <Name> defined in the same module, and the FOREIGN pragma associates <Text> with the current top-level module. This information is interpreted by the specific backend during compilation (see below). These pragmas were added in Agda 2.5.3.
Haskell FFI¶
Note
This section applies to the GHC Backend.
The FOREIGN
pragma¶
The GHC backend interprets FOREIGN pragmas as inline Haskell code. A pragma can contain arbitrary code (including import statements), which will be added to the compiled module. For instance:
{-# FOREIGN GHC import Data.Maybe #-}
{-# FOREIGN GHC
  data Foo = Foo | Bar Foo

  countBars :: Foo -> Integer
  countBars Foo = 0
  countBars (Bar f) = 1 + countBars f
#-}
The COMPILE
pragma¶
There are four forms of COMPILE annotations recognized by the GHC backend:
{-# COMPILE GHC <Name> = <HaskellCode> #-}
{-# COMPILE GHC <Name> = type <HaskellType> #-}
{-# COMPILE GHC <Name> = data <HaskellData> (<HsCon1> | .. | <HsConN>) #-}
{-# COMPILE GHC <Name> as <HaskellName> #-}
The first three tell the compiler how to compile a given Agda definition, and the last exposes an Agda definition under a particular Haskell name, allowing Agda libraries to be used from Haskell.
Using Haskell Types from Agda¶
In order to use a Haskell function from Agda its type must be mapped to an Agda
type. This mapping can be configured using the type
and data
forms of the
COMPILE
pragma.
Opaque types¶
Opaque Haskell types are exposed to Agda by postulating an Agda type and
associating it to the Haskell type using the type
form of the COMPILE
pragma:
{-# FOREIGN GHC import qualified System.IO #-}
postulate FileHandle : Set
{-# COMPILE GHC FileHandle = type System.IO.Handle #-}
This tells the compiler that the Agda type FileHandle
corresponds to the Haskell
type System.IO.Handle
and will enable functions using file handles to be used
from Agda.
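Additional operations on the opaque type can be bound the same way. The following is a hedged sketch (the Agda-side name hClose is our own choice here; System.IO.hClose is a standard Haskell function), assuming the pragmas above plus the Agda builtin modules for IO and the unit type:

```agda
open import Agda.Builtin.IO
open import Agda.Builtin.Unit

-- Bind the standard Haskell hClose (:: Handle -> IO ()) to an Agda postulate.
postulate hClose : FileHandle → IO ⊤
{-# COMPILE GHC hClose = System.IO.hClose #-}
```
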
Data types¶
Non-opaque Haskell data types can be mapped to Agda datatypes using the data
form
of the COMPILE
pragma:
data Maybe (A : Set) : Set where
nothing : Maybe A
just : A → Maybe A
{-# COMPILE GHC Maybe = data Maybe (Nothing | Just) #-}
The compiler checks that the types of the Agda constructors match the types of the corresponding Haskell constructors and that no constructors have been left out (on either side).
Builtin Types¶
The GHC backend compiles certain Agda builtin types to special Haskell types. The mapping between Agda builtin types and Haskell types is as follows:
Agda Builtin    Haskell Type
NAT             Integer
INTEGER         Integer
STRING          Data.Text.Text
CHAR            Char
BOOL            Bool
FLOAT           Double
Warning
Haskell code manipulating Agda natural numbers as integers must take care to avoid negative values.
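For instance, a Haskell definition of subtraction on naturals should clamp at zero so that it never produces a negative Integer. A hedged sketch (the name safeMinus is hypothetical):

```agda
open import Agda.Builtin.Nat

-- The Haskell code clamps the result at 0, so it is always a valid Nat.
postulate safeMinus : Nat → Nat → Nat
{-# COMPILE GHC safeMinus = \ m n -> max 0 (m - n) #-}
```
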
Warning
Agda FLOAT
values have only one logical NaN
value. At runtime,
there might be multiple different NaN
representations present. All
such NaN
values must be treated as equal by FFI calls.
Using Haskell functions from Agda¶
Once a suitable mapping between Haskell types and Agda types has been set
up, Haskell functions whose types map to Agda types can be exposed to Agda
code with a COMPILE
pragma:
open import Agda.Builtin.IO
open import Agda.Builtin.String
open import Agda.Builtin.Unit
{-# FOREIGN GHC
import qualified Data.Text.IO as Text
import qualified System.IO as IO
#-}
postulate
stdout : FileHandle
hPutStrLn : FileHandle → String → IO ⊤
{-# COMPILE GHC stdout = IO.stdout #-}
{-# COMPILE GHC hPutStrLn = Text.hPutStrLn #-}
The compiler checks that the type of the given Haskell code matches the
type of the Agda function. Note that the COMPILE
pragma only affects
the run-time behaviour; at type-checking time the functions are treated as
postulates.
Warning
It is possible to give Haskell definitions to defined (non-postulate) Agda functions. In this case the Agda definition will be used at type-checking time and the Haskell definition at run-time. However, there are no checks to ensure that the Agda code and the Haskell code behave the same, and discrepancies may lead to undefined behaviour.
This feature can be used to let you reason about code involving calls to Haskell functions under the assumption that you have a correct Agda model of the behaviour of the Haskell code.
Using Agda functions from Haskell¶
Since Agda 2.3.4, Agda functions can be exposed to Haskell code using
the as
form of the COMPILE
pragma:
module IdAgda where
idAgda : ∀ {A : Set} → A → A
idAgda x = x
{-# COMPILE GHC idAgda as idAgdaFromHs #-}
This tells the compiler that the Agda function idAgda
should be compiled
to a Haskell function called idAgdaFromHs
. Without this pragma, functions
are compiled to Haskell functions with unpredictable names and, as a result,
cannot be invoked from Haskell. The type of idAgdaFromHs
will be the translated
type of idAgda
.
The compiled and exported function idAgdaFromHs
can then be imported and
invoked from Haskell like this:
-- file UseIdAgda.hs
module UseIdAgda where
import MAlonzo.Code.IdAgda (idAgdaFromHs)
-- idAgdaFromHs :: () -> a -> a
idAgdaApplied :: a > a
idAgdaApplied = idAgdaFromHs ()
Polymorphic functions¶
Agda is a monomorphic language, so polymorphic functions are modeled as functions taking types as arguments. These arguments will be present in the compiled code as well, so when calling polymorphic Haskell functions they have to be discarded explicitly. For instance,
postulate
ioReturn : {A : Set} → A → IO A
{-# COMPILE GHC ioReturn = \ _ x -> return x #-}
In this case compiled calls to ioReturn
will still have A
as an
argument, so the compiled definition ignores its first argument
and then calls the polymorphic Haskell return
function.
Level-polymorphic types¶
Level-polymorphic types face a similar problem to polymorphic functions. Since Haskell does not have universe levels, the Agda type will have more arguments than the corresponding Haskell type. This can be solved by defining a Haskell type synonym with the appropriate number of phantom arguments. For instance:
data Either {a b} (A : Set a) (B : Set b) : Set (a ⊔ b) where
left : A → Either A B
right : B → Either A B
{-# FOREIGN GHC type AgdaEither a b = Either #-}
{-# COMPILE GHC Either = data AgdaEither (Left | Right) #-}
Handling typeclass constraints¶
There is (currently) no way to map a Haskell type with type class constraints to an Agda type. This means that functions with class constraints cannot be used from Agda. However, this can be worked around by wrapping class constraints in Haskell data types, and providing Haskell functions using explicit dictionary passing.
For instance, suppose we have a simple GUI library in Haskell:
module GUILib where
class Widget w
setVisible :: Widget w => w -> Bool -> IO ()
data Window
instance Widget Window
newWindow :: IO Window
To use this library from Agda we first define a Haskell type for widget dictionaries and map this
to an Agda type Widget
:
{-# FOREIGN GHC import GUILib #-}
{-# FOREIGN GHC data WidgetDict w = Widget w => WidgetDict #-}
postulate
Widget : Set → Set
{-# COMPILE GHC Widget = type WidgetDict #-}
We can then expose setVisible
as an Agda function taking a Widget
instance argument:
postulate
setVisible : {w : Set} {{_ : Widget w}} → w → Bool → IO ⊤
{-# COMPILE GHC setVisible = \ _ WidgetDict -> setVisible #-}
Note that the Agda Widget
argument corresponds to a WidgetDict
argument
on the Haskell side. When we match on the WidgetDict
constructor in the Haskell
code, the packed up dictionary will become available for the call to setVisible
.
The window type and functions are mapped as expected and we also add an Agda instance
packing up the Widget Window
Haskell instance into a WidgetDict
:
postulate
Window : Set
newWindow : IO Window
instance WidgetWindow : Widget Window
{-# COMPILE GHC Window = type Window #-}
{-# COMPILE GHC newWindow = newWindow #-}
{-# COMPILE GHC WidgetWindow = WidgetDict #-}
We can then write code like this:
openWindow : IO Window
openWindow = newWindow >>= λ w →
setVisible w true >>= λ _ →
return w
JavaScript FFI¶
The JavaScript backend recognizes COMPILE
pragmas of the following form:
{-# COMPILE JS <Name> = <JsCode> #-}
where <Name>
is a postulate, constructor, or data type. The code for a data type is used to compile
pattern matching and should be a function taking a value of the data type and a table of functions
(corresponding to case branches) indexed by the constructor names. For instance, this is the compiled
code for the List
type, compiling lists to JavaScript arrays:
data List {a} (A : Set a) : Set a where
[] : List A
_∷_ : (x : A) (xs : List A) → List A
{-# COMPILE JS List = function(x,v) {
if (x.length < 1) {
return v["[]"]();
} else {
return v["_∷_"](x[0], x.slice(1));
}
} #-}
{-# COMPILE JS [] = Array() #-}
{-# COMPILE JS _∷_ = function (x) { return function(y) { return Array(x).concat(y); }; } #-}
Function Definitions¶
Introduction¶
A function is defined by first declaring its type followed by a number of
equations called clauses. Each clause consists of the function being defined
applied to a number of patterns, followed by =
and a term called the
right-hand side. For example:
not : Bool → Bool
not true = false
not false = true
Functions are allowed to call themselves recursively, for example:
twice : Nat → Nat
twice zero = zero
twice (suc n) = suc (suc (twice n))
General form¶
The general form for defining a function is
f : (x₁ : A₁) → … → (xₙ : Aₙ) → B
f p₁ … pₙ = d
…
f q₁ … qₙ = e
where f
is a new identifier, pᵢ
and qᵢ
are patterns of type Aᵢ
,
and d
and e
are expressions.
The declaration above gives the identifier f
the type
(x₁ : A₁) → … → (xₙ : Aₙ) → B
and f
is defined by the defining
equations. Patterns are matched from top to bottom, i.e., the first pattern
that matches the actual parameters is the one that is used.
By default, Agda checks the following properties of a function definition:
 The patterns in the left-hand side of each clause should consist only of constructors and variables.
 No variable should occur more than once on the left-hand side of a single clause.
 The patterns of all clauses should together cover all possible inputs of the function, see Coverage Checking.
 The function should be terminating on all possible inputs, see Termination Checking.
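As a hypothetical illustration of the second property: a clause may not mention the same variable twice, so an equality test must match each argument separately instead of writing a non-linear pattern.

```agda
open import Agda.Builtin.Nat
open import Agda.Builtin.Bool

-- Rejected: `same n n = true` would use the variable n twice
-- on one left-hand side. Instead, match the arguments separately:
same : Nat → Nat → Bool
same zero    zero    = true
same (suc m) (suc n) = same m n
same _       _       = false
```
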
Special patterns¶
In addition to patterns consisting of constructors and variables, Agda supports two special kinds of patterns: dot patterns and absurd patterns.
Dot patterns¶
A dot pattern (also called inaccessible pattern) can be used when
the only typecorrect value of the argument is determined by the
patterns given for the other arguments.
The syntax for a dot pattern is .t
.
As an example, consider the datatype Square
defined as follows
data Square : Nat → Set where
sq : (m : Nat) → Square (m * m)
Suppose we want to define a function root : (n : Nat) → Square n → Nat
that
takes as its arguments a number n
and a proof that it is a square, and
returns the square root of that number. We can do so as follows:
root : (n : Nat) → Square n → Nat
root .(m * m) (sq m) = m
Notice that by matching on the argument of type Square n
with the constructor
sq : (m : Nat) → Square (m * m)
, n
is forced to be equal to m * m
.
In general, when matching on an argument of type D i₁ … iₙ
with a constructor
c : (x₁ : A₁) → … → (xₘ : Aₘ) → D j₁ … jₙ
, Agda will attempt to unify
i₁ … iₙ
with j₁ … jₙ
. When the unification algorithm instantiates a
variable x
with value t
, the corresponding argument of the function
can be replaced by a dot pattern .t
. Using a dot pattern is optional, but
can help readability. The following are also legal definitions of
root
:
Since Agda 2.4.2.4:
root₁ : (n : Nat) → Square n → Nat
root₁ _ (sq m) = m
Since Agda 2.5.2:
root₂ : (n : Nat) → Square n → Nat
root₂ n (sq m) = m
In the case of root₂
, n
evaluates to m * m
in the body of the
function and is thus equivalent to
root₃ : (n : Nat) → Square n → Nat
root₃ _ (sq m) = let n = m * m in m
Absurd patterns¶
Absurd patterns can be used when none of the constructors for a particular
argument would be valid. The syntax for an absurd pattern is ()
.
As an example, if we have a datatype Even
defined as follows
data Even : Nat → Set where
evenzero : Even zero
evenplus2 : {n : Nat} → Even n → Even (suc (suc n))
then we can define a function one-not-even : Even 1 → ⊥
by using an absurd
pattern:
one-not-even : Even 1 → ⊥
one-not-even ()
Note that if the left-hand side of a clause contains an absurd pattern, its right-hand side must be omitted.
In general, when matching on an argument of type D i₁ … iₙ
with an absurd
pattern, Agda will attempt for each constructor
c : (x₁ : A₁) → … → (xₘ : Aₘ) → D j₁ … jₙ
of the datatype D
to unify
i₁ … iₙ
with j₁ … jₙ
. The absurd pattern will only be accepted if all
of these unifications end in a conflict.
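An absurd pattern may also appear under a constructor. Continuing the example above (a hedged sketch, assuming the NAT builtin so that the literal 3 is available): matching an argument of type Even 3 with evenplus2 leaves a hypothesis of type Even 1, for which every unification again ends in a conflict:

```agda
three-not-even : Even 3 → ⊥
three-not-even (evenplus2 ())
```
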
As-patterns¶
As-patterns (or @-patterns
) can be used to name a pattern. The name has the
same scope as normal pattern variables (i.e. the right-hand side, where clause,
and dot patterns). The name reduces to the value of the named pattern. For example:
module _ {A : Set} (_<_ : A → A → Bool) where
merge : List A → List A → List A
merge xs [] = xs
merge [] ys = ys
merge xs@(x ∷ xs₁) ys@(y ∷ ys₁) =
if x < y then x ∷ merge xs₁ ys
else y ∷ merge xs ys₁
As-patterns are properly supported since Agda 2.5.2.
Case trees¶
Internally, Agda represents function definitions as case trees. For example, a function definition
max : Nat → Nat → Nat
max zero n = n
max m zero = m
max (suc m) (suc n) = suc (max m n)
will be represented internally as a case tree that looks like this:
max m n = case m of
zero → n
suc m' → case n of
zero → suc m'
suc n' → suc (max m' n')
Note that because Agda uses this representation of the function
max
, the clause max m zero = m
does not hold definitionally
(i.e. as a reduction rule). If you would try to prove that this
equation holds, you would not be able to write refl
:
data _≡_ {A : Set} (x : A) : A → Set where
refl : x ≡ x
-- Does not work!
lemma : (m : Nat) → max m zero ≡ m
lemma = refl
Clauses which do not hold definitionally are usually (but not always) the result of writing clauses by hand instead of using Agda’s case split tactic. These clauses are highlighted by Emacs.
The --exact-split
flag causes Agda to raise an error whenever a
clause in a definition by pattern matching cannot be made to hold
definitionally. Specific clauses can be excluded from this check by
means of the {-# CATCHALL #-}
pragma.
For instance, the above definition of max
will be rejected when
using the --exact-split
flag because its second clause does not
hold definitionally.
When using the --exact-split
flag, catch-all clauses have to
be marked as such, for instance:
eq : Nat → Nat → Bool
eq zero zero = true
eq (suc m) (suc n) = eq m n
{-# CATCHALL #-}
eq _ _ = false
The --no-exact-split
flag can be used to override a global
--exact-split
in a file, by adding a pragma
{-# OPTIONS --no-exact-split #-}
. This option is enabled by
default.
Function Types¶
Function types are written (x : A) → B
, or in the case of nondependent functions simply A → B
. For instance, the type of the addition function for natural numbers is:
Nat → Nat → Nat
and the type of the addition function for vectors is:
(A : Set) → (n : Nat) → (u : Vec A n) → (v : Vec A n) → Vec A n
where Set
is the type of sets and Vec A n
is the type of vectors with n
elements of type A
. Arrows between consecutive hypotheses of the form (x : A)
may also be omitted, and (x : A) (y : A)
may be shortened to (x y : A)
:
(A : Set) (n : Nat)(u v : Vec A n) → Vec A n
Functions are constructed by lambda abstractions, which can be either typed or untyped. For instance, both expressions below have type (A : Set) → A → A
(the second expression checks against other types as well):
example₁ = \ (A : Set)(x : A) → x
example₂ = \ A x → x
You can also use the Unicode symbol λ
(type “\lambda” or “\Gl” in the Emacs Agda mode) instead of \
(type “\” in the Emacs Agda mode).
The application of a function f : (x : A) → B
to an argument a : A
is written f a
and the type of this is B[x := a]
.
Notational conventions¶
Function types:
prop₁ : ((x : A) (y : B) → C) is-the-same-as ((x : A) → (y : B) → C)
prop₂ : ((x y : A) → C) is-the-same-as ((x : A)(y : A) → C)
prop₃ : (forall (x : A) → C) is-the-same-as ((x : A) → C)
prop₄ : (forall x → C) is-the-same-as ((x : _) → C)
prop₅ : (forall x y → C) is-the-same-as (forall x → forall y → C)
You can also use the Unicode symbol ∀
(type “\all” in the Emacs Agda mode) instead of forall
.
Functional abstraction:
(\x y → e) is-the-same-as (\x → (\y → e))
Functional application:
(f a b) is-the-same-as ((f a) b)
Generalization of Declared Variables¶
Overview¶
Since version 2.6.0, Agda supports implicit generalization over variables in types.
Variables to be generalized over must be declared with their types in a variable
block. For example:
variable
ℓ : Level
n m : Nat
data Vec (A : Set ℓ) : Nat → Set ℓ where
[] : Vec A 0
_∷_ : A → Vec A n → Vec A (suc n)
Here the parameter ℓ
and the n
in the type of _∷_
are not bound explicitly,
but since they are declared as generalizable variables, bindings for them are inserted
automatically. The level ℓ
is added as a parameter to the datatype and n
is added
as an argument to _∷_
. The resulting declaration is
data Vec {ℓ : Level} (A : Set ℓ) : Nat → Set ℓ where
[] : Vec A 0
_∷_ : {n : Nat} → A → Vec A n → Vec A (suc n)
See Placement of generalized bindings below for more details on where bindings are inserted.
Variables are generalized in top-level type signatures, module telescopes, and record and datatype parameter telescopes.
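For example, a hedged sketch of generalization in a module telescope, reusing the variable declarations above (the module name VecOps is hypothetical):

```agda
-- n is generalized in the module telescope, as if we had written
-- module VecOps {n : Nat} (xs : Vec Nat n) where ...
module VecOps (xs : Vec Nat n) where
  len : Nat
  len = n
```
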
Issues related to this feature are marked with generalize in the issue tracker.
Nested generalization¶
When generalizing a variable, any generalizable variables in its type are also generalized
over. For instance, you can declare A
to be a type at some level ℓ
as
variable
A : Set ℓ
Now if A
is mentioned in a type, the level ℓ
will also be generalized over:
-- id : {A.ℓ : Level} {A : Set ℓ} → A → A
id : A → A
id x = x
The nesting can be arbitrarily deep, so
variable
x : A
refl′ : x ≡ x
refl′ = refl
expands to
refl′ : {x.A.ℓ : Level} {x.A : Set x.A.ℓ} {x : x.A} → x ≡ x
See Naming of nested variables below for how the names are chosen.
Nested variables are not necessarily generalized over. In this example, if the universe
level of A
is fixed there is nothing to generalize:
postulate
-- pure : {A : Set} {F : Set → Set} → A → F A
pure : {F : Set → Set} → A → F A
See Generalization over unsolved metavariables for more details.
Note
Nested generalized variables are local to each variable, so if you declare
variable
B : Set ℓ
then A
and B
can still be generalized at different levels. For instance,
-- _$_ : {A.ℓ : Level} {A : Set A.ℓ} {B.ℓ : Level} {B : Set B.ℓ} → (A → B) → A → B
_$_ : (A → B) → A → B
f $ x = f x
Generalization over unsolved metavariables¶
Generalization over nested variables is implemented by creating a metavariable for each
nested variable and generalize over any such meta that is still unsolved after type
checking. This is what makes the pure
example from the previous section work: the
metavariable created for ℓ
is solved to level 0 and is thus not generalized over.
A typical case where this happens is when you have dependencies between different nested variables. For instance:
postulate
Con : Set
variable
Γ Δ Θ : Con
postulate
Sub : Con → Con → Set
idS : Sub Γ Γ
_∘_ : Sub Γ Δ → Sub Δ Θ → Sub Γ Θ
variable
δ σ γ : Sub Γ Δ
postulate
assoc : δ ∘ (σ ∘ γ) ≡ (δ ∘ σ) ∘ γ
In the type of assoc
each substitution gets two nested variable metas for their contexts,
but the type of _∘_
requires the contexts of its arguments to match up, so some of
these metavariables are solved. The resulting type is
assoc : {δ.Γ δ.Δ : Con} {δ : Sub δ.Γ δ.Δ} {σ.Δ : Con} {σ : Sub δ.Δ σ.Δ}
{γ.Δ : Con} {γ : Sub σ.Δ γ.Δ} → (δ ∘ (σ ∘ γ)) ≡ ((δ ∘ σ) ∘ γ)
where we can see from the names that σ.Γ
was unified with δ.Δ
and γ.Γ
with
σ.Δ
. In general, when unifying two metavariables the “youngest” one is eliminated, which
is why δ.Δ
and σ.Δ
are the ones that remain in the type.
If a metavariable for a nested generalizable variable is partially solved, the leftover metas are generalized over. For instance,
variable
xs : Vec A n
head : Vec A (suc n) → A
head (x ∷ _) = x
-- lemma : {xs.n.1 : Nat} {xs : Vec Nat (suc xs.n.1)} → head xs ≡ 1 → (0 < sum xs) ≡ true
lemma : head xs ≡ 1 → (0 < sum xs) ≡ true
In the type of lemma
a metavariable is created for the length of xs
, which
the application head xs
refines to suc _n
, for some new metavariable _n
.
Since there are no further constraints on _n
, it’s generalized over, creating the
type given in the comment. See Naming of nested variables below for how the name xs.n.1
is chosen.
Note
Only metavariables originating from nested variables are generalized over. An exception
to this is in variable
blocks where all unsolved metas are turned into nested variables.
This means writing
variable
A : Set _
is equivalent to A : Set ℓ
up to naming of the nested variable (see below).
Naming of nested variables¶
The general naming scheme for nested generalized variables is
parentVar.nestedVar
. So, in the case of the identity function
id : A → A
expanding to
id : {A.ℓ : Level} {A : Set ℓ} → A → A
the name of the level variable is A.ℓ
since the name of the nested variable is
ℓ
and its parent is the named variable A
. For multiple levels of nesting the
parent can be another nested variable as in the refl′
case above
refl′ : {x.A.ℓ : Level} {x.A : Set x.A.ℓ} {x : x.A} → x ≡ x
If a nested generalizable variable is solved with a term containing
further metas, these are generalized over as explained in the lemma
example
above. The names of the new variables are of the form parentName.i
where
parentName
is the name of the solved variable and i
numbers the metas,
starting from 1, in the order they appear in the solution.
If a variable comes from a free unsolved metavariable in a variable
block
(see this note), its name is chosen as follows:
 If it is a labelled argument to a function, the label is used as the name,
 otherwise the name is its lefttoright index (starting at 1) in the list of unnamed variables in the type.
It is then given a hierarchical name based on the named variable whose type it occurs in. For example,
postulate
V : (A : Set) → Nat → Set
P : V A n → Set
variable
v : V _ _
postulate
thm : P v
Here there are two unnamed variables in the type of v
, namely the two arguments to V
.
The first argument has the label A
in the definition of V
, so this variable gets the name
v.A
. The second argument has no label and thus gets the name v.2
since it is the second
unnamed variable in the type of v
.
If the variable comes from a partially instantiated nested variable the name of the metavariable is used unqualified.
Note
Currently it is not allowed to use hierarchical names when giving parameters to functions, see Issue #3208.
Placement of generalized bindings¶
The following rules are used to place generalized variables:
 Generalized variables are placed at the front of the type signature or telescope.
 Variables mentioned earlier are placed before variables mentioned later, where nested variables count as being mentioned together with their parent.
Note
This means that an implicitly quantified variable cannot depend on an explicitly quantified one. See Issue #3352 for the feature request to lift this restriction.
Indexed datatypes¶
When generalizing datatype parameters and indices, a variable is turned into an index if it is only mentioned in indices and into a parameter otherwise. For instance,
data All (P : A → Set) : Vec A n → Set where
[] : All P []
_∷_ : P x → All P xs → All P (x ∷ xs)
Here A
is generalized as a parameter and n
as an index. That is, the
resulting signature is
data All {A : Set} (P : A → Set) : {n : Nat} → Vec A n → Set where
Instance and irrelevant variables¶
Generalized variables are introduced as implicit arguments by default, but this can be changed to instance arguments or irrelevant arguments by annotating the declaration of the variable:
record Eq (A : Set) : Set where
field eq : A → A → Bool
variable
{{EqA}} : Eq A  -- generalized as an instance argument
.ignore : A  -- generalized as an irrelevant (implicit) argument
Variables are never generalized as explicit arguments.
Importing and exporting variables¶
Generalizable variables are treated in the same way as other declared symbols
(functions, datatypes, etc) and use the same mechanisms for importing and exporting
between modules. This means that unless marked private
they are exported from a
module.
Interaction¶
When developing types interactively, generalizable variables can be used in holes if they have already been generalized, but it is not possible to introduce new generalizations interactively. For instance,
works : (A → B) → Vec A n → Vec B {!n!}
fails : (A → B) → Vec A {!n!} → Vec B {!n!}
In works
you can give n
in the hole, since a binding for n
has been introduced
by its occurrence in the argument vector. In fails
on the other hand, there is no reference
to n
so neither hole can be filled interactively.
Modalities¶
One can give a modality when declaring a generalizable variable:
variable
@0 o : Nat
In the generalization process generalizable variables get the modality that they are declared with, whereas other variables always get the default modality.
Guarded Cubical¶
Note
This is a stub.
Cubical Agda is extended with Nakano’s later modality and guarded recursion based on Ticked Cubical Type Theory [2]. For its usage, see [1] or the example.
The implementation currently allows for something more general than in the above reference, in preparation for the ticks described in [3].
Given a type A in the primLockUniv universe, we can form function types annotated with @tick (or its synonym @lock): (@tick x : A) → B. Lambda abstraction at such a type introduces the variable into the context with a @tick annotation. An application t u for t : (@tick x : A) → B is restricted so that t must be typable in the prefix of the context that does not include any of the @tick variables in u. The only exception to that restriction, at the moment, are variables of the interval type I or of type IsOne _.
References¶
[1] Niccolò Veltri and Andrea Vezzosi. “Formalizing pi-calculus in guarded cubical Agda.” In CPP ’20. ACM, New York, NY, USA, 2020.
[2] Rasmus Ejlers Møgelberg and Niccolò Veltri. “Bisimulation as path type for guarded recursive types.” In POPL’19, 2019.
[3] Magnus Baunsgaard Kristensen, Rasmus Ejlers Møgelberg, Andrea Vezzosi. “Greatest HITs: Higher inductive types in coinductive definitions via induction under clocks.”
Implicit Arguments¶
It is possible to omit terms that the type checker can figure out for
itself, replacing them by _
.
If the type checker cannot infer the value of an _
it will report
an error.
For instance, for the polymorphic identity function
id : (A : Set) → A → A
the first argument can be inferred from the type of the second argument,
so we might write id _ zero
for the application of the identity function to zero
.
We can even write this function application without the first argument. In that case we declare an implicit function space:
id : {A : Set} → A → A
and then we can use the notation id zero
.
Another example:
_==_ : {A : Set} → A → A → Set
subst : {A : Set} (C : A → Set) {x y : A} → x == y → C x → C y
Note how the first argument to _==_
is left implicit.
Similarly, we may leave out the implicit arguments A
, x
, and y
in an
application of subst
.
To give an implicit argument explicitly, enclose it in curly braces.
The following two expressions are equivalent:
x1 = subst C eq cx
x2 = subst {_} C {_} {_} eq cx
It is worth noting that implicit arguments are also inserted at the end of an application,
if it is required by the type.
For example, in the following, y1
and y2
are equivalent.
y1 : a == b → C a → C b
y1 = subst C
y2 : a == b → C a → C b
y2 = subst C {_} {_}
Implicit arguments are inserted eagerly in lefthand sides so y3
and y4
are equivalent. An exception is when no type signature is given, in which case
no implicit argument insertion takes place. Thus in the definition of y5
the only implicit is the A
argument of subst
.
y3 : {x y : A} → x == y → C x → C y
y3 = subst C
y4 : {x y : A} → x == y → C x → C y
y4 {x} {y} = subst C {_} {_}
y5 = subst C
It is also possible to write lambda abstractions with implicit arguments. For
example, given id : (A : Set) → A → A
, we can define the identity function with
implicit type argument as
id′ = λ {A} → id A
Implicit arguments can also be referred to by name,
so if we want to give the expression e
explicitly for y
without giving a value for x
we can write
subst C {y = e} eq cx
In rare circumstances it can be useful to separate the name used to give an argument by name from the name of the bound variable, for instance if the desired name shadows an existing name. To do this you write
id₂ : {A = X : Set} → X → X  -- name of bound variable is X
id₂ x = x
use-id₂ : (Y : Set) → Y → Y
use-id₂ Y = id₂ {A = Y}  -- but the label is A
Labeled bindings must appear by themselves when typed, so the type Set
needs to
be repeated in this example:
const : {A = X : Set} {B = Y : Set} → A → B → A
const x y = x
When constructing implicit function spaces the implicit argument can be omitted,
so both expressions below are valid expressions of type {A : Set} → A → A
:
z1 = λ {A} x → x
z2 = λ x → x
The ∀
(or forall
) syntax for function types also has implicit variants:
① : (∀ {x : A} → B) is-the-same-as ({x : A} → B)
② : (∀ {x} → B) is-the-same-as ({x : _} → B)
③ : (∀ {x y} → B) is-the-same-as (∀ {x} → ∀ {y} → B)
In very special situations it makes sense to declare unnamed hidden arguments
{A} → B
. In the following example
, the hidden argument to scons
of type
zero ≤ zero
can be solved by η-expansion, since this type reduces to ⊤
.
data ⊥ : Set where
_≤_ : Nat → Nat → Set
zero ≤ _ = ⊤
suc m ≤ zero = ⊥
suc m ≤ suc n = m ≤ n
data SList (bound : Nat) : Set where
[] : SList bound
scons : (head : Nat) → {head ≤ bound} → (tail : SList head) → SList bound
example : SList zero
example = scons zero []
There are no restrictions on when a function space can be implicit. Internally, explicit and implicit function spaces are treated in the same way. This means that there are no guarantees that implicit arguments will be solved. When there are unsolved implicit arguments the type checker will give an error message indicating which application contains the unsolved arguments. The reason for this liberal approach to implicit arguments is that limiting the use of implicit argument to the cases where we guarantee that they are solved rules out many useful cases in practice.
Tactic arguments¶
You can declare tactics to be used to solve a particular implicit argument using
the @(tactic t)
attribute, where t : Term → TC ⊤
. For instance:
clever-search : Term → TC ⊤
clever-search hole = unify hole (lit (nat 17))
the-best-number : {@(tactic clever-search) n : Nat} → Nat
the-best-number {n} = n
check : the-best-number ≡ 17
check = refl
The tactic can be an arbitrary term of the right type and may depend on previous arguments to the function:
default : {A : Set} → A → Term → TC ⊤
default x hole = bindTC (quoteTC x) (unify hole)
search : (depth : Nat) → Term → TC ⊤
example : {@(tactic default 10) depth : Nat}
{@(tactic search depth) proof : Proof} →
Goal
Metavariables¶
Unification¶
Instance Arguments¶
Instance arguments are a special kind of implicit arguments that get solved by a special instance resolution algorithm, rather than by the unification algorithm used for normal implicit arguments. Instance arguments are the Agda equivalent of Haskell type class constraints and can be used for many of the same purposes.
An instance argument will be resolved if its type is a named type (i.e. a data type or record type) or a variable type (i.e. a previously bound variable of type Set ℓ), and a unique instance of the required type can be built from declared instances and the current context.
Usage¶
Instance arguments are enclosed in double curly braces {{ }}
, e.g. {{x : T}}
.
Alternatively they can be enclosed, with proper spacing, e.g. ⦃ x : T ⦄
, in the
unicode braces ⦃ ⦄
(U+2983
and U+2984
, which can be typed as
\{{
and \}}
in the Emacs mode).
For instance, given a function _==_
_==_ : {A : Set} {{eqA : Eq A}} → A → A → Bool
for some suitable type Eq
, you might define
elem : {A : Set} {{eqA : Eq A}} → A → List A → Bool
elem x (y ∷ xs) = x == y || elem x xs
elem x [] = false
Here the instance argument to _==_
is solved by the corresponding argument
to elem
. Just like ordinary implicit arguments, instance arguments can be
given explicitly. The above definition is equivalent to
elem : {A : Set} {{eqA : Eq A}} → A → List A → Bool
elem {{eqA}} x (y ∷ xs) = _==_ {{eqA}} x y || elem {{eqA}} x xs
elem x [] = false
A very useful function that exploits this is the function it
which lets you
apply instance resolution to solve an arbitrary goal:
it : ∀ {a} {A : Set a} → {{A}} → A
it {{x}} = x
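As a hypothetical sketch of its use: wherever an instance argument is in scope, it can be retrieved by instance resolution (Eq here is just a postulated stand-in for a suitable record type):

```agda
postulate Eq : Set → Set  -- stand-in for a suitable Eq record type

-- `it` finds eqA by instance resolution:
fromContext : {A : Set} {{eqA : Eq A}} → Eq A
fromContext = it
```
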
As the last example shows, the name of the instance argument can be omitted in the type signature:
_==_ : {A : Set} → {{Eq A}} → A → A → Bool
Defining type classes¶
The type of an instance argument should have the form {Γ} → C vs
,
where C
is a postulated name, a bound variable, or the name of a
data or record type, and {Γ}
denotes an arbitrary number of
implicit or instance arguments (see Dependent instances below
for an example where {Γ}
is nonempty).
Instances with explicit arguments are also accepted but will not be considered as instances because the value of the explicit arguments cannot be derived automatically. Having such an instance has no effect and thus raises a warning.
Instance arguments whose types end in any other type are currently also accepted but cannot be resolved by instance search, so they must be given by hand. For this reason it is not recommended to use such instance arguments. Doing so will also raise a warning.
Other than that there are no requirements on the type of an instance argument. In particular, there is no special declaration to say that a type is a “type class”. Instead, Haskell-style type classes are usually defined as record types. For instance,
record Monoid {a} (A : Set a) : Set a where
field
mempty : A
_<>_ : A → A → A
In order to make the fields of the record available as functions taking instance arguments you can use the special module application
open Monoid {{...}} public
This will bring into scope
mempty : ∀ {a} {A : Set a} → {{Monoid A}} → A
_<>_ : ∀ {a} {A : Set a} → {{Monoid A}} → A → A → A
Superclass dependencies can be implemented using Instance fields.
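For example, an Ord class with an Eq superclass can be sketched with an instance field; the record Ord, the field _<_, and the use of the overlap modifier here are illustrative (see the Instance fields documentation for the precise feature):

```agda
record Ord {a} (A : Set a) : Set a where
  field
    _<_ : A → A → Bool
    -- instance field: every Ord A instance also provides an Eq A
    -- instance, available to instance search
    overlap {{eqA}} : Eq A

open Ord {{...}} public
```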
See Module application and Record modules for details about how
the module application is desugared. If defined by hand, mempty
would be
mempty : ∀ {a} {A : Set a} → {{Monoid A}} → A
mempty {{mon}} = Monoid.mempty mon
Although record types are a natural fit for Haskell-style type classes, you can use instance arguments with data types to good effect. See the Examples below.
Declaring instances¶
As seen above, instance arguments in the context are available when solving
instance arguments, but you also need to be able to
define top-level instances for concrete types. This is done using the
instance
keyword, which starts a block in
which each definition is marked as an instance available for instance
resolution. For example, an instance Monoid (List A)
can be defined as
instance
ListMonoid : ∀ {a} {A : Set a} → Monoid (List A)
ListMonoid = record { mempty = []; _<>_ = _++_ }
Or equivalently, using copatterns:
instance
ListMonoid : ∀ {a} {A : Set a} → Monoid (List A)
mempty {{ListMonoid}} = []
_<>_ {{ListMonoid}} xs ys = xs ++ ys
Top-level instances must target a named type (Monoid
in this case), and
cannot be declared for types in the context.
You can define local instances in let-expressions in the same way as a top-level instance. For example:
mconcat : ∀ {a} {A : Set a} → {{Monoid A}} → List A → A
mconcat [] = mempty
mconcat (x ∷ xs) = x <> mconcat xs
sum : List Nat → Nat
sum xs =
let instance
NatMonoid : Monoid Nat
NatMonoid = record { mempty = 0; _<>_ = _+_ }
in mconcat xs
Instances can have instance arguments themselves, which will be filled in recursively during instance resolution. For instance,
record Eq {a} (A : Set a) : Set a where
field
_==_ : A → A → Bool
open Eq {{...}} public
instance
eqList : ∀ {a} {A : Set a} → {{Eq A}} → Eq (List A)
_==_ {{eqList}} [] [] = true
_==_ {{eqList}} (x ∷ xs) (y ∷ ys) = x == y && xs == ys
_==_ {{eqList}} _ _ = false
eqNat : Eq Nat
_==_ {{eqNat}} = natEquals
ex : Bool
ex = (1 ∷ 2 ∷ 3 ∷ []) == (1 ∷ 2 ∷ []) || false
Note the two calls to _==_
in the right-hand side of the second clause. The
first uses the Eq A
instance and the second uses a recursive call to
eqList
. In the example ex
, instance resolution, needing a value of type Eq
(List Nat)
, will try to use the eqList
instance and find that it needs an
instance argument of type Eq Nat
; it will then solve that with eqNat
and return the solution eqList {{eqNat}}
.
Note
At the moment there is no termination check on instances, so it is possible
to construct nonsensical instances like
loop : ∀ {a} {A : Set a} → {{Eq A}} → Eq A
.
To prevent looping in cases like this, the search depth of instance search
is limited, and once the maximum depth is reached, a type error will be
thrown. You can set the maximum depth using the --instance-search-depth
flag.
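For example, the maximum depth can be set per file with an OPTIONS pragma; the depth 20 below is an arbitrary illustrative value:

```agda
{-# OPTIONS --instance-search-depth 20 #-}
```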
Restricting instance search¶
To restrict an instance to the current module, you can mark it as private. For instance,
record Default (A : Set) : Set where
field default : A
open Default {{...}} public
module M where
private
instance
defaultNat : Default Nat
defaultNat .default = 6
test₁ : Nat
test₁ = default
_ : test₁ ≡ 6
_ = refl
open M
instance
defaultNat : Default Nat
defaultNat .default = 42
test₂ : Nat
test₂ = default
_ : test₂ ≡ 42
_ = refl
Constructor instances¶
Although instance arguments are most commonly used for record types,
mimicking Haskell-style type classes, they can also be used with data
types. In this case you often want the constructors to be instances,
which is achieved by declaring them inside an instance
block. Constructors can only be declared as instances if all their
arguments are implicit or instance arguments. See
Instance resolution below for the details.
A simple example of a constructor that can be made an instance is the reflexivity constructor of the equality type:
data _≡_ {a} {A : Set a} (x : A) : A → Set a where
instance refl : x ≡ x
This allows trivial equality proofs to be inferred by instance resolution,
which can make working with functions that have preconditions less of a burden.
As an example, here is how one could use this to define a function that takes a
natural number and gives back a Fin n
(the type of naturals smaller than
n
):
data Fin : Nat → Set where
zero : ∀ {n} → Fin (suc n)
suc : ∀ {n} → Fin n → Fin (suc n)
mkFin : ∀ {n} (m : Nat) → {{suc m - n ≡ 0}} → Fin n
mkFin {zero} m {{}}
mkFin {suc n} zero = zero
mkFin {suc n} (suc m) = suc (mkFin m)
five : Fin 6
five = mkFin 5 -- OK
In the first clause of mkFin
we use an absurd pattern to discharge the impossible assumption suc m ≡
0
. See the next section for
another example of constructor instances.
Record fields can also be declared instances, with the effect that the corresponding projection function is considered a top-level instance.
Overlapping instances¶
By default, Agda does not allow overlapping instances. Two instances are defined to overlap if they could both solve the instance goal when given appropriate solutions for their recursive (instance) arguments.
For example, in the code below, the instances zero and suc overlap for the goal ex₁: either one of them can be used to solve the goal when given appropriate arguments, hence instance search fails.
infix 4 _∈_
data _∈_ {A : Set} (x : A) : List A → Set where
instance
zero : ∀ {xs} → x ∈ x ∷ xs
suc : ∀ {y xs} → {{x ∈ xs}} → x ∈ y ∷ xs
ex₁ : 1 ∈ 1 ∷ 2 ∷ 3 ∷ 4 ∷ []
ex₁ = it -- overlapping instances
Overlapping instances can be enabled via the --overlapping-instances
flag. Be aware that enabling this flag might lead to an exponential
slowdown in instance resolution and possibly (apparent) looping
behaviour.
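For example, the flag can be enabled for a single file by placing this pragma at the top of the file:

```agda
{-# OPTIONS --overlapping-instances #-}
```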
Examples¶
Dependent instances¶
Consider a variant on the Eq
class where the equality function produces a
proof in the case that the arguments are equal:
record Eq {a} (A : Set a) : Set a where
field
_==_ : (x y : A) → Maybe (x ≡ y)
open Eq {{...}} public
A simple boolean-valued equality function is problematic for types with dependencies, like the Σ-type
data Σ {a b} (A : Set a) (B : A → Set b) : Set (a ⊔ b) where
_,_ : (x : A) → B x → Σ A B
since given two pairs x , y
and x₁ , y₁
, the types of the second
components y
and y₁
can be completely different and not admit an
equality test. Only when x
and x₁
are really equal can we hope to
compare y
and y₁
. Having the equality function return a proof means
that we are guaranteed that when x
and x₁
compare equal, they really
are equal, and comparing y
and y₁
makes sense.
An Eq
instance for Σ
can be defined as follows:
instance
eqΣ : ∀ {a b} {A : Set a} {B : A → Set b} → {{Eq A}} → {{∀ {x} → Eq (B x)}} → Eq (Σ A B)
_==_ {{eqΣ}} (x , y) (x₁ , y₁) with x == x₁
_==_ {{eqΣ}} (x , y) (x₁ , y₁) | nothing = nothing
_==_ {{eqΣ}} (x , y) (.x , y₁) | just refl with y == y₁
_==_ {{eqΣ}} (x , y) (.x , y₁) | just refl | nothing = nothing
_==_ {{eqΣ}} (x , y) (.x , .y) | just refl | just refl = just refl
Note that the instance argument for B
states that there should be
an Eq
instance for B x
, for any x : A
. The argument x
must be implicit, indicating that it needs to be inferred by
unification whenever the B
instance is used. See
Instance resolution below for more details.
Instance resolution¶
Given a goal that should be solved using instance resolution we proceed in the following four stages:
 Verify the goal
 First we check that the goal type has the right shape to be solved by instance resolution. It should be of the form {Γ} → C vs, where the target type C is a variable from the context or the name of a data or record type, and {Γ} denotes a telescope of implicit or instance arguments. If this is not the case instance resolution fails with an error message [1].
 Find candidates
 In the second stage we compute a set of candidates. Let-bound variables and top-level definitions in scope are candidates if they are defined in an instance block. Lambda-bound variables, i.e. variables bound in lambdas, function types, left-hand sides, or module parameters, are candidates if they are bound as instance arguments using {{ }}. Only candidates of type {Δ} → C us, where C is the target type computed in the previous stage and {Δ} only contains implicit or instance arguments, are considered.
 Check the candidates
 We attempt to use each candidate in turn to build an instance of the goal type {Γ} → C vs. First we extend the current context by {Γ}. Then, given a candidate c : {Δ} → A, we generate fresh metavariables αs : {Δ} for the arguments of c, with ordinary metavariables for implicit arguments, and instance metavariables, solved by a recursive call to instance resolution, for instance arguments.
 Next we unify A[Δ := αs] with C vs and apply instance resolution to the instance metavariables in αs. Both unification and instance resolution have three possible outcomes: yes, no, or maybe. In case we get a no answer from any of them, the current candidate is discarded; otherwise we return the potential solution λ {Γ} → c αs.
 Compute the result
 From the previous stage we get a list of potential solutions. If the list is empty we fail with an error saying that no instance for C vs could be found (no). If there is a single solution we use it to solve the goal (yes), and if there are multiple solutions we check if they are all equal. If they are, we solve the goal with one of them (yes), but if they are not, we postpone instance resolution (maybe), hoping that some of the maybes will turn into nos once we know more about the involved metavariables.
 If there are leftover instance problems at the end of type checking, the corresponding metavariables are printed in the Emacs status buffer together with their types and source location. The candidates that gave rise to potential solutions can be printed with the show constraints command (C-c C-=).
 [1] Instance goal verification is buggy at the moment. See issue #1322.
Irrelevance¶
Since version 2.2.8 Agda supports irrelevancy annotations. The general rule is that anything prepended by a dot (.) is marked irrelevant, which means that it will only be typechecked but never evaluated.
Note
This section is about compiletime irrelevance. See Runtime Irrelevance for the section on runtime irrelevance.
Motivating example¶
One intended use case of irrelevance is data structures with embedded proofs, like sorted lists.
data _≤_ : Nat → Nat → Set where
zero≤ : {n : Nat} → zero ≤ n
suc≤suc : {m n : Nat} → m ≤ n → suc m ≤ suc n
postulate
p₁ : 0 ≤ 1
p₂ : 0 ≤ 1
module NoIrrelevance where
data SList (bound : Nat) : Set where
[] : SList bound
scons : (head : Nat)
→ (head ≤ bound)
→ (tail : SList head)
→ SList bound
Usually, when we define datatypes with embedded proofs we are forced to reason about the values of these proofs. For example, suppose we have two lists l₁
and l₂
with the same elements but different proofs:
l₁ : SList 1
l₁ = scons 0 p₁ []
l₂ : SList 1
l₂ = scons 0 p₂ []
Now suppose we want to prove that l₁
and l₂
are equal:
l₁≡l₂ : l₁ ≡ l₂
l₁≡l₂ = refl
It’s not so easy! Agda gives us an error:
p₁ != p₂ of type 0 ≤ 1
when checking that the expression refl has type l₁ ≡ l₂
We can’t show that l₁ ≡ l₂
by refl
when p₁
and p₂
are relevant. Instead, we need to reason about proofs of 0 ≤ 1
.
postulate
  proof-equality : p₁ ≡ p₂
Now we can prove l₁ ≡ l₂
by rewriting with this equality:
l₁≡l₂ : l₁ ≡ l₂
l₁≡l₂ rewrite proof-equality = refl
Reasoning about equality of proofs becomes annoying quickly. We would like to avoid this kind of reasoning about proofs here: in this case we only care that a proof of head ≤ bound
exists, i.e. any proof suffices. We can use irrelevance annotations to tell Agda we don’t care about the values of the proofs:
data SList (bound : Nat) : Set where
[] : SList bound
scons : (head : Nat)
→ .(head ≤ bound) -- note the dot!
→ (tail : SList head)
→ SList bound
The effect of the irrelevant type in the signature of scons is that scons’s second argument is never inspected after Agda has ensured that it has the right type. The typechecker ignores irrelevant arguments when checking equality, so two lists can be equal even if they contain different proofs:
l₁ : SList 1
l₁ = scons 0 p₁ []
l₂ : SList 1
l₂ = scons 0 p₂ []
l₁≡l₂ : l₁ ≡ l₂
l₁≡l₂ = refl
Irrelevant function types¶
For starters, consider irrelevant non-dependent function types:
f : .A → B
This type implies that f
does not depend computationally on its argument.
What can be done to irrelevant arguments¶
Example 1. We can prove that two applications of an unknown irrelevant function to two different arguments are equal.
-- an unknown function that does not use its second argument
postulate
  f : {A B : Set} -> A -> .B -> A
-- the second argument is irrelevant for equality
proofIrr : {A : Set}{x y z : A} -> f x y ≡ f x z
proofIrr = refl
Example 2. We can use irrelevant arguments as arguments to other irrelevant functions.
id : {A B : Set} -> (.A -> B) -> .A -> B
id g x = g x
Example 3. We can match on an irrelevant argument of an empty type with an absurd pattern ()
.
data ⊥ : Set where

zero-not-one : .(0 ≡ 1) → ⊥
zero-not-one ()
What can’t be done to irrelevant arguments¶
Example 1. You can’t use an irrelevant value in a non-irrelevant context.
bad-plus : Nat → .Nat → Nat
bad-plus n m = m + n
Variable m is declared irrelevant, so it cannot be used here
when checking that the expression m has type Nat
Example 2. You can’t declare the function’s return type as irrelevant.
bad : Nat → .Nat
bad n = 1
Invalid dotted expression
when checking that the expression .Nat has type Set _47
Example 3. You can’t pattern match on an irrelevant value.
badMatching : Nat → .Nat → Nat
badMatching n zero = n
badMatching n (suc m) = n
Cannot pattern match against irrelevant argument of type Nat
when checking that the pattern zero has type Nat
Example 4. We also can’t match on an irrelevant record (see Record Types).
record Σ (A : Set) (B : A → Set) : Set where
constructor _,_
field
fst : A
snd : B fst
irrElim : {A : Set} {B : A → Set} → .(Σ A B) → _
irrElim (a , b) = ?
Cannot pattern match against irrelevant argument of type Σ A B
when checking that the pattern a , b has type Σ A B
If this were allowed, b would have type B a, but this type is not even well-formed because a is irrelevant!
Irrelevant declarations¶
Postulates and functions can be marked as irrelevant by prefixing the name with a dot when the name is declared. Irrelevant definitions can only be used as arguments of functions of an irrelevant function type .A → B
.
Examples:
.irrFunction : Nat → Nat
irrFunction zero = zero
irrFunction (suc n) = suc (suc (irrFunction n))
postulate
  .assume-false : (A : Set) → A
An important example is the irrelevance axiom irrAx
:
postulate
  .irrAx : ∀ {ℓ} {A : Set ℓ} -> .A -> A
This axiom is not provable inside Agda, but it is often very useful when working with irrelevance.
Irrelevant record fields¶
Record fields (see Record Types) can be marked as irrelevant by
prefixing their name with a dot in the definition of the record type.
Projections for irrelevant fields are only created if the option --irrelevant-projections is supplied (since Agda 2.5.4).
Example 1. A record type containing pairs of numbers satisfying certain properties.
record InterestingNumbers : Set where
field
n : Nat
m : Nat
.prop1 : n + m ≡ n * m + 2
.prop2 : suc m ≤ n
Example 2. For any type A
, we can define a ‘squashed’ version Squash A
where all elements are equal.
record Squash (A : Set) : Set where
constructor squash
field
.proof : A
open Squash
.unsquash : ∀ {A} → Squash A → A
unsquash x = proof x
Example 3. We can define the subset of x : A
satisfying P x
with irrelevant membership certificates.
record Subset (A : Set) (P : A -> Set) : Set where
constructor _#_
field
elem : A
.certificate : P elem
.certificate : {A : Set}{P : A -> Set} -> (x : Subset A P) -> P (Subset.elem x)
certificate (a # p) = irrAx p
Dependent irrelevant function types¶
Just like non-dependent functions, we can also make dependent functions irrelevant. The basic syntax is as in the following examples:
f : .(x y : A) → B
f : .{x y z : A} → B
f : .(xs {ys zs} : A) → B
f : ∀ x .y → B
f : ∀ x .{y} {z} .v → B
f : .{{x : A}} → B
The declaration
f : .(x : A) → B[x]
f x = t[x]
requires that x
is irrelevant both in t[x]
and in B[x]
. This is possible if, for instance, B[x] = C x
, with C : .A → Set
.
Dependent irrelevance allows us to define the eliminator for the Squash type:
elimSquash : {A : Set} (P : Squash A → Set)
(ih : .(a : A) → P (squash a)) →
(a⁻ : Squash A) → P a⁻
elimSquash P ih (squash a) = ih a
Note that this would not typecheck with (ih : (a : A) → P (squash a))
.
Irrelevant instance arguments¶
Contrary to normal instance arguments, irrelevant instance arguments (see Instance Arguments) are not required to have a unique solution.
record ⊤ : Set where
instance constructor tt
NonZero : Nat → Set
NonZero zero = ⊥
NonZero (suc _) = ⊤
pred′ : (n : Nat) .{{_ : NonZero n}} → Nat
pred′ zero {{}}
pred′ (suc n) = n
find-nonzero : (n : Nat) {{x y : NonZero n}} → Nat
find-nonzero n = pred′ n
Subtyping of irrelevant function spaces¶
Normally, if f : .(x : A) → B
then we have λ x → f x : (x : A) →
B
but not f : (x : A) → B
. When the option --subtyping is
enabled, Agda will make use of the subtyping rule .(x : A) → B <: (x
: A) → B
, so there is no need for eta-expanding the function f
.
Lambda Abstraction¶
Pattern matching lambda¶
Anonymous pattern matching functions can be defined using one of the two following syntaxes:
\ { p11 .. p1n -> e1 ; … ; pm1 .. pmn -> em }
\ where
p11 .. p1n -> e1
…
pm1 .. pmn -> em
(where, as usual, \
and ->
can be replaced by λ
and →
).
Note that the where
keyword introduces an indented block of clauses;
if there is only one clause then it may be used inline.
Internally this is translated into a function definition of the following form:
extlam p11 .. p1n = e1
…
extlam pm1 .. pmn = em
where extlam is a fresh name. This means that anonymous pattern matching functions are generative. For instance, refl
will not be accepted as an inhabitant of the type
(λ { true → true ; false → false }) ≡
(λ { true → true ; false → false })
because this is equivalent to extlam1 ≡ extlam2
for some distinct fresh names extlam1
and extlam2
.
Currently the where
and with
constructions are not allowed in (the top-level clauses of) anonymous pattern matching functions.
Examples:
and : Bool → Bool → Bool
and = λ { true x → x ; false _ → false }
xor : Bool → Bool → Bool
xor = λ { true true → false
; false false → false
; _ _ → true
}
eq : Bool → Bool → Bool
eq = λ where
true true → true
false false → true
_ _ → false
fst : {A : Set} {B : A → Set} → Σ A B → A
fst = λ { (a , b) → a }
snd : {A : Set} {B : A → Set} (p : Σ A B) → B (fst p)
snd = λ { (a , b) → b }
swap : {A B : Set} → Σ A (λ _ → B) → Σ B (λ _ → A)
swap = λ where (a , b) → (b , a)
Regular pattern-matching lambdas are treated as non-erased function
definitions. One can make a pattern-matching lambda erased by writing
@0
or @erased
after the lambda:
@0 _ : @0 Set → Set
_ = λ @0 { A → A }
@0 _ : @0 Set → Set
_ = λ @erased where
A → A
Local Definitions: let and where¶
There are two ways of declaring local definitions in Agda:
 let-expressions
 where-blocks
let-expressions¶
A let-expression defines an abbreviation. In other words, the expression that we define in a let-expression can neither be recursive, nor can let-bound functions be defined by pattern matching.
Example:
f : Nat
f = let h : Nat → Nat
h m = suc (suc m)
in h zero + h (suc zero)
let-expressions have the general form
let f₁ : A₁₁ → … → A₁ₙ → A₁
f₁ x₁ … xₙ = e₁
…
fₘ : Aₘ₁ → … → Aₘₖ → Aₘ
fₘ x₁ … xₖ = eₘ
in e’
where previous definitions are in scope in later definitions. The
type signatures can be left out if Agda can infer them.
After typechecking, the meaning of this is simply the substitution
e’[f₁ := λ x₁ … xₙ → e₁; …; fₘ := λ x₁ … xₖ → eₘ]
. Since Agda
substitutes away let-bindings, they do not show up in terms Agda
prints, nor in the goal display in interactive mode.
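To illustrate the substitution semantics, consider this small sketch (the names double and twice are illustrative):

```agda
double : Nat → Nat
double n = let twice = n + n
           in twice + twice
-- After type checking, this is the same as
-- double n = (n + n) + (n + n)
```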
Let binding record patterns¶
For a record
record R : Set where
constructor c
field
f : X
g : Y
h : Z
a let expression of the form
let (c x y z) = t
in u
will internally be translated to
let x = f t
y = g t
z = h t
in u
This is not allowed if R
is declared coinductive
.
where-blocks¶
where-blocks are much more powerful than let-expressions, as they
support arbitrary local definitions.
A where
can be attached to any function clause.
where-blocks have the general form
clause
where
decls
or
clause
module M where
decls
A simple instance is
g ps = e
where
f : A₁ → … → Aₙ → A
f p₁₁ … p₁ₙ = e₁
…
f pₘ₁ … pₘₙ = eₘ
Here, the pᵢⱼ
are patterns of the corresponding types and eᵢ
is an expression that can contain occurrences of f
.
Functions defined with a where-expression must follow the rules for general definitions by pattern matching.
Example:
reverse : {A : Set} → List A → List A
reverse {A} xs = rev-append xs []
  where
    rev-append : List A → List A → List A
    rev-append [] ys = ys
    rev-append (x ∷ xs) ys = rev-append xs (x ∷ ys)
Variable scope¶
The pattern variables of the parent clause of the where-block are in
scope; in the previous example, these are A
and xs
. The
variables bound by the type signature of the parent clause are not in
scope. This is why we added the hidden binder {A}
.
Scope of the local declarations¶
The where
definitions are not visible outside of the clause that
owns these definitions (the parent clause). If the where
block is
given a name (form module M where
), then the definitions are
available as qualified by M
, since module M
is visible even
outside of the parent clause. The special form of an anonymous module
(module _ where
) makes the definitions visible outside of the
parent clause without qualification.
If the parent function of a named where
block
(form module M where
) is private
,
then module M
is also private
.
However, the declarations inside M
are not private unless declared
so explicitly. Thus, the following example scope checks fine:
module Parent₁ where
private
parent = local
module Private where
local = Set
module Public = Private
test₁ = Parent₁.Public.local
Likewise, a private
declaration for a parent function does not
affect the privacy of local functions defined under a
module _ where
block:
module Parent₂ where
private
parent = local
module _ where
local = Set
test₂ = Parent₂.local
They can be declared private
explicitly, though:
module Parent₃ where
parent = local
module _ where
private
local = Set
Now, Parent₃.local
is not in scope.
A private
declaration for the parent of an ordinary
where
block has no effect on the local definitions, of course.
They are not even in scope.
Proving properties¶
Sometimes one needs to refer to local definitions in proofs about the
parent function. In this case, the module ⋯ where
variant is preferable.
reverse : {A : Set} → List A → List A
reverse {A} xs = rev-append xs []
  module Rev where
    rev-append : List A → List A → List A
    rev-append [] ys = ys
    rev-append (x ∷ xs) ys = rev-append xs (x ∷ ys)
This gives us access to the local function as
Rev.rev-append : {A : Set} (xs : List A) → List A → List A → List A
Alternatively, we can define local functions as private to the module we are working in; they will then not be visible in any module that imports this module, but we can still prove properties about them.
private
  rev-append : {A : Set} → List A → List A → List A
  rev-append [] ys = ys
  rev-append (x ∷ xs) ys = rev-append xs (x ∷ ys)

reverse' : {A : Set} → List A → List A
reverse' xs = rev-append xs []
More Examples (for Beginners)¶
Using a let-expression:
twmap : {A : Set} → List A → List (List A)
twmap {A} xs = let twice : List A → List A
twice xs = xs ++ xs
in map (\ x → twice [ x ]) xs
Same definition but with less type information:
twmap' : {A : Set} → List A → List (List A)
twmap' {A} xs = let twice : _
twice xs = xs ++ xs
in map (\ x → twice [ x ]) xs
Same definition but with a where-expression:
twmap'' : {A : Set} → List A → List (List A)
twmap'' {A} xs = map (\ x → twice [ x ]) xs
where twice : List A → List A
twice xs = xs ++ xs
Even less type information using let:
g : Nat → List Nat
g zero = [ zero ]
g (suc n) = let sing = [ suc n ]
in sing ++ g n
Same definition using where:
g' : Nat → List Nat
g' zero = [ zero ]
g' (suc n) = sing ++ g' n
where sing = [ suc n ]
More than one definition in a let:
h : Nat → Nat
h n = let add2 : Nat
add2 = suc (suc n)
twice : Nat → Nat
twice m = m * m
in twice add2
More than one definition in a where:
fibfact : Nat → Nat
fibfact n = fib n + fact n
where fib : Nat → Nat
fib zero = suc zero
fib (suc zero) = suc zero
fib (suc (suc n)) = fib (suc n) + fib n
fact : Nat → Nat
fact zero = suc zero
fact (suc n) = suc n * fact n
Combining let and where:
k : Nat → Nat
k n = let aux : Nat → Nat
aux m = pred (h m) + fibfact m
in aux (pred n)
where pred : Nat → Nat
pred zero = zero
pred (suc m) = m
Lexical Structure¶
Agda code is written in UTF-8 encoded plain text files with the extension .agda. Most unicode characters can be used in identifiers, and whitespace is significant; see Names and Layout below.
Tokens¶
Keywords and special symbols¶
Most non-whitespace unicode characters can be used as part of an Agda name, but there are two kinds of exceptions:
 special symbols
 Characters with special meaning that cannot appear at all in a name. These are
.;{}()@".
 keywords
Reserved words that cannot appear as a name part, but can appear in a name together with other characters.
= | -> → : ? \ λ ∀ .. ...
abstract coinductive constructor data do eta-equality
field forall import in inductive infix infixl infixr
instance interleaved let macro module mutual
no-eta-equality open overlap pattern postulate primitive
private quote quoteTerm record rewrite Set syntax tactic
unquote unquoteDecl unquoteDef variable where
 The Set keyword can appear with a natural number suffix, optionally subscripted (see Sort System). For instance Set42 and Set₄₂ are both keywords.
 keywords in import directives
 The following words are only reserved in import directives (in connection with import or open): public, using, hiding, renaming, to.
Names¶
A qualified name is a non-empty sequence of names separated by
dots (.
). A name is an alternating sequence of name parts and
underscores (_
), containing at least one name part. A name part
is a nonempty sequence of unicode characters, excluding whitespace,
_
, and special symbols. A
name part cannot be one of the
keywords above, and cannot start
with a single quote ' (which is used for character literals, see
Literals below).
 Examples
 Valid: data?, ::, if_then_else_, 0b, _⊢_∈_, x=y
 Invalid: data_?, foo__bar, _, a;b, [_.._]
The underscores in a name indicate where the arguments go when the name is used
as an operator. For instance, the application _+_ 1 2
can be written as 1
+ 2
. See Mixfix Operators for more information. Since most sequences
of characters are valid names, whitespace is more important than in other
languages. In the example above the whitespace around +
is required, since
1+2
is a valid name.
Qualified names are used to refer to entities defined in other modules. For
instance Prelude.Bool.true
refers to the name true
defined in the
module Prelude.Bool
. See Module System for more information.
Literals¶
There are four types of literal values: integers, floats, characters, and strings. See Builtins for the corresponding types, and Literal Overloading for how to support literals for userdefined types.
 Integers
Integer values in decimal, hexadecimal (prefixed by
0x
), or binary (prefixed by0b
) notation. The character _ can be used to separate groups of digits. Nonnegative numbers map by default to builtin natural numbers, but can be overloaded. Negative numbers have no default interpretation and can only be used through overloading.Examples:
123, 0xF0F080, -42, 0xF, 0b11001001, 1_000_000_000, 0b01001000_01001001.
 Floats
Floating point numbers in the standard notation (with square brackets denoting optional parts):
float    ::= [-] decimal . decimal [exponent]
           | [-] decimal exponent
exponent ::= (e | E) [+ | -] decimal
These map to builtin floats and cannot be overloaded.
Examples:
1.0, -5.0e+12, 1.01e-16, 4.2E9, 50e3.
 Characters
Character literals are enclosed in single quotes ('). They can be a single (unicode) character, other than ' or \, or an escaped character. Escaped characters start with a backslash \ followed by an escape code. Escape codes are natural numbers in decimal or hexadecimal (prefixed by x) between 0 and 0x10ffff (1114111), or one of the following special escape codes:

Code  ASCII    Code  ASCII    Code  ASCII    Code  ASCII
a     7        b     8        t     9        n     10
v     11       f     12       \     \        '     '
"     "        NUL   0        SOH   1        STX   2
ETX   3        EOT   4        ENQ   5        ACK   6
BEL   7        BS    8        HT    9        LF    10
VT    11       FF    12       CR    13       SO    14
SI    15       DLE   16       DC1   17       DC2   18
DC3   19       DC4   20       NAK   21       SYN   22
ETB   23       CAN   24       EM    25       SUB   26
ESC   27       FS    28       GS    29       RS    30
US    31       SP    32       DEL   127

Character literals map to the builtin character type and cannot be overloaded.
Examples:
'A', '∀', '\x2200', '\ESC', '\32', '\n', '\'', '"'.
 Strings
String literals are sequences of, possibly escaped, characters enclosed in double quotes
"
. They follow the same rules as character literals except that double quotes"
need to be escaped rather than single quotes'
. String literals map to the builtin string type by default, but can be overloaded.Example:
"Привет \"мир\"\n"
.
Holes¶
Holes are an integral part of the interactive development supported by the
Emacs mode. Any text enclosed in {!
and !}
is a
hole and may contain nested holes. A hole with no contents can be written
?
. There are a number of Emacs commands that operate on the contents of a
hole. The type checker ignores the contents of a hole and treats it as an
unknown (see Implicit Arguments).
Example: {! f {!x!} 5 !}
Comments¶
Single-line comments are written with a double dash -- followed by arbitrary text. Multi-line comments are enclosed in {- and -} and can be nested. Comments cannot appear in string literals.
Example:
{- Here is a {- nested -}
   comment -}
s : String -- line comment {-
s = "{- not a comment -}"
Layout¶
Agda is layout sensitive using similar rules as Haskell, with the exception
that layout is mandatory: you cannot use explicit {
, }
and ;
to
avoid it.
A layout block contains a sequence of statements and is started by one of the layout keywords:
abstract
constructor
do
field
instance
let
macro
mutual
postulate
primitive
private
variable
where
The first token after the layout keyword decides the indentation of the block. Any token indented more than this is part of the previous statement, a token at the same level starts a new statement, and a token indented less lies outside the block.
data Nat : Set where  -- starts a layout block
  -- comments are not tokens
  zero : Nat          -- statement 1
  suc  : Nat →        -- statement 2
         Nat          -- also statement 2

one : Nat  -- outside the layout block
one = suc zero
Note that the indentation of the layout keyword does not matter.
If several layout blocks are started by layout keywords without line break in between (where line breaks inside block comments do not count), then those blocks indented more than the last block go passive, meaning they cannot be further extended by new statements:
private module M where postulate
          A : Set     -- module-block goes passive
        B : Set       -- postulate-block can still be extended
  module N where      -- private-block can still be extended
An Agda file contains one top-level layout block, with the special rule that the contents of the top-level module need not be indented.
module Example where
NotIndented : Set₁
NotIndented = Set
Literate Agda¶
Agda supports literate programming with multiple typesetting tools, such as LaTeX, Markdown and reStructuredText. For instance, with LaTeX, everything in a file is a comment unless enclosed between \begin{code} and \end{code}. Literate Agda files use special file extensions instead of .agda: .lagda and .lagda.tex for LaTeX, .lagda.md for Markdown, and .lagda.rst for reStructuredText. The main use case for literate Agda is to generate LaTeX documents from Agda code. See Generating HTML and Generating LaTeX for more information.
\documentclass{article}
% some preamble stuff
\begin{document}
Introduction usually goes here
\begin{code}
module MyPaper where
open import Prelude
five : Nat
five = 2 + 3
\end{code}
Now, conclusions!
\end{document}
Literal Overloading¶
Natural numbers¶
By default natural number literals are
mapped to the builtin natural number type. This can be
changed with the FROMNAT
builtin, which binds to a function accepting a
natural number:
{-# BUILTIN FROMNAT fromNat #-}
This causes natural number literals n
to be desugared to fromNat n
,
whenever fromNat
is in scope unqualified (renamed or not).
Note that the desugaring happens before implicit arguments are inserted, so fromNat
can have any number of
implicit or instance arguments. This can be
exploited to support overloaded literals by defining a type class containing fromNat
:
module number-simple where

  record Number {a} (A : Set a) : Set a where
    field fromNat : Nat → A

  open Number {{...}} public

  {-# BUILTIN FROMNAT fromNat #-}
This definition requires that any natural number can be mapped into the given
type, so it won’t work for types like Fin n
. This can be solved by refining
the Number
class with an additional constraint:
record Number {a} (A : Set a) : Set (lsuc a) where
  field
    Constraint : Nat → Set a
    fromNat : (n : Nat) {{_ : Constraint n}} → A

open Number {{...}} public using (fromNat)

{-# BUILTIN FROMNAT fromNat #-}
This is the definition used in Agda.Builtin.FromNat
.
A Number
instance for Nat
is simply this:
instance
  NumNat : Number Nat
  NumNat .Number.Constraint _ = ⊤
  NumNat .Number.fromNat    m = m
A Number
instance for Fin n
can be defined as follows:
_≤_ : (m n : Nat) → Set
zero  ≤ n     = ⊤
suc m ≤ zero  = ⊥
suc m ≤ suc n = m ≤ n

fromN≤ : ∀ m n → m ≤ n → Fin (suc n)
fromN≤ zero    _       _ = zero
fromN≤ (suc _) zero    ()
fromN≤ (suc m) (suc n) p = suc (fromN≤ m n p)

instance
  NumFin : ∀ {n} → Number (Fin (suc n))
  NumFin {n} .Number.Constraint m         = m ≤ n
  NumFin {n} .Number.fromNat    m {{m≤n}} = fromN≤ m n m≤n

test : Fin 5
test = 3
It is important that the constraint for literals is trivial. Here,
3 ≤ 5
evaluates to ⊤
whose inhabitant is found by unification.
Using predefined functions from the standard library and the instance NumNat
,
the NumFin
instance can be simply:
open import Data.Fin using (Fin; #_)
open import Data.Nat using (suc; _≤?_)
open import Relation.Nullary.Decidable using (True)

instance
  NumFin : ∀ {n} → Number (Fin n)
  NumFin {n} .Number.Constraint m         = True (suc m ≤? n)
  NumFin {n} .Number.fromNat    m {{m<n}} = #_ m {m<n = m<n}
Negative numbers¶
Negative integer literals have no default mapping and can only be used through
the FROMNEG
builtin. Binding this to a function fromNeg
causes
negative integer literals -n to be desugared to fromNeg n, where n
is a builtin natural number. From Agda.Builtin.FromNeg
:
record Negative {a} (A : Set a) : Set (lsuc a) where
  field
    Constraint : Nat → Set a
    fromNeg : (n : Nat) {{_ : Constraint n}} → A

open Negative {{...}} public using (fromNeg)

{-# BUILTIN FROMNEG fromNeg #-}
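As an illustration, a Negative instance for the builtin integer type might look as follows. This is a sketch: it assumes the pos/negsuc representation of Agda.Builtin.Int, and the names NegInt and minusFive are ours, not part of any library.

```agda
open import Agda.Builtin.Int  using (Int; pos; negsuc)
open import Agda.Builtin.Nat  using (Nat; zero; suc)
open import Agda.Builtin.Unit using (⊤)

instance
  NegInt : Negative Int
  NegInt .Negative.Constraint _ = ⊤            -- any natural number may be negated
  NegInt .Negative.fromNeg zero    = pos zero  -- -0 is just 0
  NegInt .Negative.fromNeg (suc n) = negsuc n  -- -(suc n) is negsuc n

minusFive : Int
minusFive = -5   -- desugars to fromNeg 5
```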
Strings¶
String literals are overloaded with
the FROMSTRING
builtin, which works just like FROMNAT
. If it is not
bound, string literals map to builtin strings. From
Agda.Builtin.FromString
:
record IsString {a} (A : Set a) : Set (lsuc a) where
  field
    Constraint : String → Set a
    fromString : (s : String) {{_ : Constraint s}} → A

open IsString {{...}} public using (fromString)

{-# BUILTIN FROMSTRING fromString #-}
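Mirroring the NumNat instance above, a trivially constrained IsString instance for String itself can be given as follows (a sketch; the instance name StrString is ours):

```agda
instance
  StrString : IsString String
  StrString .IsString.Constraint _ = ⊤  -- every literal is accepted
  StrString .IsString.fromString s = s  -- the literal maps to itself
```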
Restrictions¶
Currently only integer and string literals can be overloaded.
Overloading does not work in patterns yet.
Mixfix Operators¶
A type name, function name, or constructor name can comprise one or more name
parts if we separate them with underscore characters _
, and the
resulting name can be used as an operator. From left to right, each argument
goes in the place of each underscore _
.
For instance, we can join with underscores the name parts if
, then
,
and else
into a single name if_then_else_
. The application of the
function name if_then_else_
to some arguments named x
, y
, and z
can still be written as:
- a standard application, by using the full name: if_then_else_ x y z
- an operator application, by placing the arguments between the name parts (leaving a space between arguments and name parts): if x then y else z
- other sections of the full name, for instance leaving one or two underscores:
  (if_then y else z) x
  (if x then_else z) y
  if x then y else_ z
  if x then_else_ y z
  if_then y else_ x z
  (if_then_else z) x y
Examples of type names, function names, and constructor names as mixfix operators:
-- Example type name _⇒_
_⇒_ : Bool → Bool → Bool
true  ⇒ b = b
false ⇒ _ = true

-- Example function name _and_
_and_ : Bool → Bool → Bool
true  and x = x
false and _ = false

-- Example function name if_then_else_
if_then_else_ : {A : Set} → Bool → A → A → A
if true  then x else y = x
if false then x else y = y

-- Example constructor name _∷_
data List (A : Set) : Set where
  nil : List A
  _∷_ : A → List A → List A
Precedence¶
Consider the expression false and true ⇒ false. Depending on which of _and_ and _⇒_ has higher precedence, it can either be read as (false and true) ⇒ false = true, or as false and (true ⇒ false) = false.
Each operator is associated to a precedence, which is a floating point number (can be negative and fractional!). The default precedence for an operator is 20.
Note
Please note that the function space arrow -> is directly handled in the parser. As a result, the precedence of -> is lower than any precedence you may declare with infixl and infixr.
If we give _and_
more precedence than _⇒_
, then we will get the first result:
infix 30 _and_
-- infix 20 _⇒_ (default)

p-and : {x y z : Bool} → x and y ⇒ z ≡ (x and y) ⇒ z
p-and = refl

e-and : false and true ⇒ false ≡ true
e-and = refl
But, if we declare a new operator _and'_ and give it less precedence than _⇒_, then we will get the second result:
_and'_ : Bool → Bool → Bool
_and'_ = _and_
infix 15 _and'_
-- infix 20 _⇒_ (default)

p-⇒ : {x y z : Bool} → x and' y ⇒ z ≡ x and' (y ⇒ z)
p-⇒ = refl

e-⇒ : false and' true ⇒ false ≡ false
e-⇒ = refl
Fixities can be changed when importing with a renaming
directive:
open M using (_∙_)
open M renaming (_∙_ to infixl 10 _*_)
This code brings two instances of the operator _∙_ into scope:
- the first, named _∙_, with its original fixity;
- the second, named _*_, with the fixity changed to act like a left associative operator of precedence 10.
Associativity¶
Consider the expression false ⇒ true ⇒ false. Depending on whether _⇒_ associates to the left or to the right, it can be read as (false ⇒ true) ⇒ false = false, or false ⇒ (true ⇒ false) = true, respectively.
If we declare an operator _⇒_
as infixr
, it will associate to the right:
infixr 20 _⇒_

p-right : {x y z : Bool} → x ⇒ y ⇒ z ≡ x ⇒ (y ⇒ z)
p-right = refl

e-right : false ⇒ true ⇒ false ≡ true
e-right = refl
If we declare an operator _⇒'_ as infixl, it will associate to the left:
infixl 20 _⇒'_

_⇒'_ : Bool → Bool → Bool
_⇒'_ = _⇒_

p-left : {x y z : Bool} → x ⇒' y ⇒' z ≡ (x ⇒' y) ⇒' z
p-left = refl

e-left : false ⇒' true ⇒' false ≡ false
e-left = refl
Ambiguity and Scope¶
If you have not yet declared the fixity of an operator, Agda will complain if you try to use it ambiguously:
e-ambiguous : Bool
e-ambiguous = true ⇒ true ⇒ true
Could not parse the application true ⇒ true ⇒ true
Operators used in the grammar:
⇒ (infix operator, level 20)
Fixity declarations may appear anywhere in a module that other declarations may appear. They then apply to the entire scope in which they appear (i.e. before and after, but not outside).
Operators in telescopes¶
Agda does not yet support declaring the fixity of operators declared in telescopes; see Issue #1235 (https://github.com/agda/agda/issues/1235).
However, the following hack currently works:
module _ {A : Set} (_+_ : A → A → A) (let infixl 5 _+_; _+_ = _+_) where
Module System¶
Module application¶
Anonymous modules¶
Basics¶
First let us introduce some terminology. A definition is a syntactic construction defining an entity such as a function or a datatype. A name is a string used to identify definitions. The same definition can have many names and at different points in the program it will have different names. It may also be the case that two definitions have the same name. In this case there will be an error if the name is used.
The main purpose of the module system is to structure the way names are used in a program. This is done by organising the program in a hierarchical structure of modules where each module contains a number of definitions and submodules. For instance,
module Main where
  module B where
    f : Nat → Nat
    f n = suc n

  g : Nat → Nat → Nat
  g n m = m
Note that we use indentation to indicate which definitions are part of a module. In the example f
is in the module Main.B
and g
is in Main
. How to refer to a particular definition is determined by where it is located in the module hierarchy. Definitions from an enclosing module are referred to by their given names as seen in the type of f above. To access a definition from outside its defining module a qualified name has to be used.
module Main₂ where
  module B where
    f : Nat → Nat
    f n = suc n

  ff : Nat → Nat
  ff x = B.f (B.f x)
To be able to use the short names for definitions in a module the module has to be opened.
module Main₃ where
  module B where
    f : Nat → Nat
    f n = suc n

  open B

  ff : Nat → Nat
  ff x = f (f x)
If A.qname
refers to a definition d
, then after open A
, qname
will also refer to d
. Note that qname
can itself be a qualified name. Opening a module only introduces new names for a definition; it never removes the old names. The policy is to allow the introduction of ambiguous names, but to give an error if an ambiguous name is used.
Modules can also be opened within a local scope by putting the open B
within a where
clause:
ff₁ : Nat → Nat
ff₁ x = f (f x) where open B
Private definitions¶
To make a definition inaccessible outside its defining module it can be declared private
. A private definition is treated as a normal definition inside the module that defines it, but outside the module the definition has no name. In a dependently type setting there are some problems with private definitions—since the type checker performs computations, private names might show up in goals and error messages. Consider the following (contrived) example
module Main₄ where
  module A where
    private
      IsZero' : Nat → Set
      IsZero' zero    = ⊤
      IsZero' (suc n) = ⊥

    IsZero : Nat → Set
    IsZero n = IsZero' n

  open A

  prf : (n : Nat) → IsZero n
  prf n = ?
The type of the goal ?0 is IsZero n, which normalises to IsZero' n. The question is how to display this normal form to the user. At the point of ?0 there is no name for IsZero'. One option could be to try to fold the term and print IsZero n. This is a very hard problem in general, so rather than trying to do this we make it clear to the user that IsZero' is something that is not in scope and print the goal as ;Main₄.A.IsZero' n. The leading semicolon indicates that the entity is not in scope. The same technique is used for definitions that only have ambiguous names.
In effect using private definitions means that, from the user’s perspective, we do not have subject reduction. This is just an illusion, however—the type checker has full access to all definitions.
Name modifiers¶
An alternative to making definitions private is to exert finer control over what names are introduced when opening a module. This is done by qualifying an open
statement with one or more of the modifiers using
, hiding
, or renaming
. You can combine both using
and hiding
with renaming
, but not with each other. The effect of
open A using (xs) renaming (ys to zs)
is to introduce the names xs
and zs
where xs
refers to the same definition as A.xs
and zs
refers to A.ys
. We do not permit xs
, ys
and zs
to overlap. The other forms of opening are defined in terms of this one.
An omitted renaming
modifier is equivalent to an empty renaming.
To refer to a module M
inside A
you write module M
. For instance,
open A using (module M)
Since 2.6.1: The fixity of an operator can be set or changed in a renaming
directive:
module ExampleRenamingFixity where

  module ArithFoo where
    postulate
      A : Set
      _&_ _^_ : A → A → A
    infixr 10 _&_

  open ArithFoo renaming (_&_ to infixl 8 _+_; _^_ to infixl 10 _^_)
Here, we change the fixity of _&_
while renaming it to _+_
, and assign a new fixity to _^_
which has the default fixity in module ArithFoo
.
Re-exporting names¶
A useful feature is the ability to re-export names from another module. For instance, one may want to create a module to collect the definitions from several other modules. This is achieved by qualifying the open statement with the public keyword:
module Example where
  module Nat₁ where

    data Nat₁ : Set where
      zero : Nat₁
      suc  : Nat₁ → Nat₁

  module Bool₁ where

    data Bool₁ : Set where
      true false : Bool₁

  module Prelude where

    open Nat₁  public
    open Bool₁ public

    isZero : Nat₁ → Bool₁
    isZero zero    = true
    isZero (suc _) = false
The module Prelude above exports the names Nat₁, zero, suc, Bool₁, true and false, in addition to isZero.
Parameterised modules¶
So far, the module system features discussed have dealt solely with scope manipulation. We now turn our attention to some more advanced features.
It is sometimes useful to be able to work temporarily in a given signature. For instance, when defining functions for sorting lists it is convenient to assume a set of list elements A
and an ordering over A
. In Coq this can be done in two ways: using a functor, which is essentially a function between modules, or using a section. A section allows you to abstract some arguments from several definitions at once. We introduce parameterised modules analogous to sections in Coq. When declaring a module you can give a telescope of module parameters which are abstracted from all the definitions in the module. For instance, a simple implementation of a sorting function looks like this:
module Sort (A : Set)(_≤_ : A → A → Bool) where
  insert : A → List A → List A
  insert x [] = x ∷ []
  insert x (y ∷ ys) with x ≤ y
  insert x (y ∷ ys)    | true  = x ∷ y ∷ ys
  insert x (y ∷ ys)    | false = y ∷ insert x ys

  sort : List A → List A
  sort []       = []
  sort (x ∷ xs) = insert x (sort xs)
As mentioned parametrising a module has the effect of abstracting the parameters over the definitions in the module, so outside the Sort module we have
Sort.insert : (A : Set)(_≤_ : A → A → Bool) →
              A → List A → List A
Sort.sort   : (A : Set)(_≤_ : A → A → Bool) →
              List A → List A
For function definitions, explicit module parameters become explicit arguments to the abstracted function, and implicit parameters become implicit arguments. For constructors, however, the parameters are always implicit arguments. This is a consequence of the fact that module parameters are turned into datatype parameters, and the datatype parameters are implicit arguments to the constructors. It also happens to be the reasonable thing to do.
Something which you cannot do in Coq is to apply a section to its arguments. We allow this through the module application statement. In our example:
module SortNat = Sort Nat leqNat
This will define a new module SortNat as follows
module SortNat where
  insert : Nat → List Nat → List Nat
  insert = Sort.insert Nat leqNat

  sort : List Nat → List Nat
  sort = Sort.sort Nat leqNat
The new module can also be parameterised, and you can use name modifiers to control what definitions from the original module are applied and what names they have in the new module. The general form of a module application is
module M1 Δ = M2 terms modifiers
A common pattern is to apply a module to its arguments and then open the resulting module. To simplify this we introduce the shorthand
open module M1 Δ = M2 terms [public] mods
for
module M1 Δ = M2 terms mods
open M1 [public]
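For the Sort example above, this shorthand looks as follows:

```agda
open module SortNat = Sort Nat leqNat
-- which abbreviates:
--   module SortNat = Sort Nat leqNat
--   open SortNat
```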
Splitting a program over multiple files¶
When building large programs it is crucial to be able to split the program over multiple files and to not have to type check and compile all the files for every change. The module system offers a structured way to do this. We define a program to be a collection of modules, each module being defined in a separate file. To gain access to a module defined in a different file you can import the module:
import M
In order to implement this we must be able to find the file in which a module is defined. To do this we require that the top-level module A.B.C is defined in the file C.agda in the directory A/B/. One could imagine instead giving a file name to the import statement, but this would mean cluttering the program with details about the file system, which is not very nice.
When importing a module M
, the module and its contents are brought into scope as if the module had been defined in the current file. In order to get access to the unqualified names of the module contents it has to be opened. Similarly to module application we introduce the shorthand
open import M
for
import M
open M
Sometimes the name of an imported module clashes with a local module. In this case it is possible to import the module under a different name.
import M as M’
It is also possible to attach modifiers to import statements, limiting or changing what names are visible from inside the module.
Note that modifiers attached to open import
statements apply to the open
statement and not the import
statement.
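For instance (with a hypothetical module M exporting x and y, for illustration only):

```agda
open import M using (x) renaming (y to z)
-- abbreviates:
--   import M
--   open M using (x) renaming (y to z)
```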
Datatype modules and record modules¶
When you define a datatype it also defines a module so constructors can now be referred to qualified by their data type. For instance, given:
module DatatypeModules where

  data Nat₂ : Set where
    zero : Nat₂
    suc  : Nat₂ → Nat₂

  data Fin : Nat₂ → Set where
    zero : ∀ {n} → Fin (suc n)
    suc  : ∀ {n} → Fin n → Fin (suc n)
you can refer to the constructors unambiguously as Nat₂.zero
, Nat₂.suc
, Fin.zero
, and Fin.suc
(Nat₂
and Fin
are modules containing the respective constructors). Example:
inj : (n m : Nat₂) → Nat₂.suc n ≡ suc m → n ≡ m
inj .m m refl = refl
Previously you had to write something like
inj₁ : (n m : Nat₂) → _≡_ {A = Nat₂} (suc n) (suc m) → n ≡ m
inj₁ .m m refl = refl
to make the type checker able to figure out that you wanted the natural number suc in this case.
Also record declarations define a corresponding module, see Record modules.
Mutual Recursion¶
Agda offers multiple ways to write mutually defined data types, record types and functions.
The first two are more expressive than old-style mutual blocks, as they allow the interleaving of declarations and definitions, making it possible for some types to refer to the constructors of a mutually defined datatype.
Interleaved mutual blocks¶
Mutually recursive functions can be written by placing them inside an interleaved mutual block. The type signature of each function must come before its defining clauses and its usage sites in the right-hand sides of other functions.
The clauses for different functions can be interleaved e.g. for pedagogical purposes:
interleaved mutual
  -- Declarations:
  even : Nat → Bool
  odd  : Nat → Bool

  -- zero is even, not odd
  even zero = true
  odd  zero = false

  -- suc case: switch evenness on the predecessor
  even (suc n) = odd n
  odd  (suc n) = even n
You can mix arbitrary declarations, such as modules and postulates, with mutually recursive
definitions. For data types and records the following syntax is used to separate the
declaration from the introduction of constructors in one or many data ... where
blocks:
interleaved mutual
  -- Declaration of a product record, a universe of codes, and a decoding function
  record _×_ (A B : Set) : Set
  data U : Set
  El : U → Set

  -- We have a code for the type of natural numbers in our universe
  data U where `Nat : U
  El `Nat = Nat

  -- Btw we know how to pair values in a record
  record _×_ A B where
    inductive; constructor _,_
    field fst : A; snd : B

  -- And we have a code for pairs in our universe
  data _ where
    _`×_ : (A B : U) → U
  El (A `× B) = El A × El B

  -- we can now build types of nested pairs of natural numbers
  ty-example : U
  ty-example = `Nat `× ((`Nat `× `Nat) `× `Nat)

  -- and their values
  val-example : El ty-example
  val-example = 0 , ((1 , 2) , 3)
You can mix constructors for different data types in a data _ where
block
(underscore instead of name).
The interleaved mutual
blocks get desugared into the
Forward declaration blocks described below by:
- leaving the signatures where they are,
- grouping the clauses for a function together with the first of them, and
- grouping the constructors for a datatype together with the first of them.
Forward declaration¶
Mutually recursive functions can be written by placing the type signatures of all mutually recursive functions before their definitions. The span of the mutual block will be automatically inferred by Agda:
f : A
g : B[f]
f = a[f, g]
g = b[f, g]
You can mix arbitrary declarations, such as modules and postulates, with mutually recursive definitions. For data types and records the following syntax is used to separate the declaration from the definition:
-- Declaration.
data Vec (A : Set) : Nat → Set  -- Note the absence of ‘where’.

-- Definition.
data Vec A where  -- Note the absence of a type signature.
  []   : Vec A zero
  _::_ : {n : Nat} → A → Vec A n → Vec A (suc n)

-- Declaration.
record Sigma (A : Set) (B : A → Set) : Set

-- Definition.
record Sigma A B where
  constructor _,_
  field fst : A
        snd : B fst
The parameter lists in the second part of a data or record declaration behave like variable left-hand sides (although infix syntax is not supported). That is, they should have no type signatures, but implicit parameters can be omitted or bound by name.
Such a separation of declaration and definition is for instance needed when defining a set of codes for types and their interpretation as actual types (a so-called universe):
-- Declarations.
data TypeCode : Set
Interpretation : TypeCode → Set

-- Definitions.
data TypeCode where
  nat : TypeCode
  pi  : (a : TypeCode) (b : Interpretation a → TypeCode) → TypeCode

Interpretation nat      = Nat
Interpretation (pi a b) = (x : Interpretation a) → Interpretation (b x)
Note
In contrast to Interleaved mutual blocks,
in forwarddeclaration style we can only have one data ... where
block per data type.
When making separated declarations/definitions private or abstract you should attach the private
keyword to the declaration and the abstract
keyword to the definition. For instance, a private, abstract function can be defined as
private
  f : A

abstract
  f = e
Old-style mutual
blocks¶
Mutually recursive functions can be written by placing the type signatures of all mutually recursive functions before their definitions:
mutual
  f : A
  f = a[f, g]

  g : B[f]
  g = b[f, g]
Using the mutual
keyword,
the universe example from above is expressed as follows:
mutual
  data TypeCode : Set where
    nat : TypeCode
    pi  : (a : TypeCode) (b : Interpretation a → TypeCode) → TypeCode

  Interpretation : TypeCode → Set
  Interpretation nat      = Nat
  Interpretation (pi a b) = (x : Interpretation a) → Interpretation (b x)
This alternative syntax desugars into the new syntax by sorting the content of the mutual block into a declaration and a definition part and placing the declarations before the definitions.
Declarations comprise:
- Type signatures of functions, data and record declarations, unquoteDecl. (Function includes here postulate, primitive etc.)
- Module statements, such as module aliases, import and open statements.
- Pragmas that only need the name, but not the definition, of the thing they affect (e.g. INJECTIVE).
Definitions comprise:
- Function clauses, data constructors and record definitions, unquoteDef.
- pattern synonym definitions.
- Pragmas that need the definition, e.g. INLINE, ETA, etc.
- Pragmas that are not needed for type checking, like compiler pragmas.
Module definitions with module ... where
are not supported in old-style mutual
blocks.
Pattern Synonyms¶
A pattern synonym is a declaration that can be used on the left hand side (when pattern matching) as well as the right hand side (in expressions). For example:
data Nat : Set where
  zero : Nat
  suc  : Nat → Nat

pattern z    = zero
pattern ss x = suc (suc x)

f : Nat → Nat
f z       = z
f (suc z) = ss z
f (ss n)  = n
Pattern synonyms are implemented by substitution on the abstract syntax, so definitions are scopechecked but not typechecked. They are particularly useful for universe constructions.
Overloading¶
Pattern synonyms can be overloaded as long as all candidates have the same shape. Two pattern synonym definitions have the same shape if they are equal up to variable and constructor names. Shapes are checked at resolution time and after expansion of nested pattern synonyms.
For example:
data List (A : Set) : Set where
  lnil  : List A
  lcons : A → List A → List A

data Vec (A : Set) : Nat → Set where
  vnil  : Vec A zero
  vcons : ∀ {n} → A → Vec A n → Vec A (suc n)

pattern []       = lnil
pattern []       = vnil
pattern _∷_ x xs = lcons x xs
pattern _∷_ y ys = vcons y ys

lmap : ∀ {A B} → (A → B) → List A → List B
lmap f []       = []
lmap f (x ∷ xs) = f x ∷ lmap f xs

vmap : ∀ {A B n} → (A → B) → Vec A n → Vec B n
vmap f []       = []
vmap f (x ∷ xs) = f x ∷ vmap f xs
Flipping the arguments in the synonym for vcons
, changing it to pattern
_∷_ ys y = vcons y ys
, results in the following error when trying to use the
synonym:
Cannot resolve overloaded pattern synonym _∷_, since candidates
have different shapes:
pattern _∷_ x xs = lcons x xs
at pattern-synonyms.lagda.rst:51,13-16
pattern _∷_ ys y = vcons y ys
at pattern-synonyms.lagda.rst:52,13-16
(hint: overloaded pattern synonyms must be equal up to variable and
constructor names)
when checking that the clause lmap f (x ∷ xs) = f x ∷ lmap f xs has
type {A B : Set} → (A → B) → List A → List B
Refolding¶
For each pattern pattern lhs = rhs
, Agda declares a DISPLAY
pragma refolding rhs
to lhs
(see The DISPLAY pragma for
more details).
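For instance, the ss synonym from the example above behaves as if the user had also declared the corresponding DISPLAY pragma by hand:

```agda
pattern ss x = suc (suc x)
-- implicitly declares:
{-# DISPLAY suc (suc x) = ss x #-}
```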
Positivity Checking¶
Note
This is a stub.
The NO_POSITIVITY_CHECK
pragma¶
The pragma switches off the positivity checker for data/record definitions and mutual blocks. This pragma was added in Agda 2.5.1.
The pragma must precede a data/record definition or a mutual block. The pragma cannot be used in --safe mode.
Examples:
Skipping a single data definition:
{-# NO_POSITIVITY_CHECK #-}
data D : Set where
  lam : (D → D) → D
Skipping a single record definition:
{-# NO_POSITIVITY_CHECK #-}
record U : Set where
  field ap : U → U
Skipping an old-style mutual block. Somewhere within a mutual block before a data/record definition:
mutual
  data D : Set where
    lam : (D → D) → D

  {-# NO_POSITIVITY_CHECK #-}
  record U : Set where
    field ap : U → U
Skipping an old-style mutual block. Before the mutual keyword:
{-# NO_POSITIVITY_CHECK #-}
mutual
  data D : Set where
    lam : (D → D) → D

  record U : Set where
    field ap : U → U
Skipping a new-style mutual block. Anywhere before the declaration or the definition of a data/record in the block:
record U : Set
data D : Set

record U where
  field ap : U → U

{-# NO_POSITIVITY_CHECK #-}
data D where
  lam : (D → D) → D
POLARITY pragmas¶
Polarity pragmas can be attached to postulates. The polarities express how the postulate’s arguments are used. The following polarities are available:
- _ : Unused.
- ++ : Strictly positive.
- + : Positive.
- - : Negative.
- * : Unknown/mixed.
Polarity pragmas have the form {-# POLARITY name <zero or more polarities> #-}
, and can be given wherever fixity declarations can
be given. The listed polarities apply to the given postulate’s
arguments (explicit/implicit/instance), from left to right. Polarities
currently cannot be given for module parameters. If the postulate
takes n arguments (excluding module parameters), then the number of
polarities given must be between 0 and n (inclusive).
Polarity pragmas make it possible to use postulated type formers in recursive types in the following way:
postulate
  ∥_∥ : Set → Set

{-# POLARITY ∥_∥ ++ #-}

data D : Set where
  c : ∥ D ∥ → D
Note that one can use postulates that may seem benign, together with polarity pragmas, to prove that the empty type is inhabited:
postulate
  _⇒_    : Set → Set → Set
  lambda : {A B : Set} → (A → B) → A ⇒ B
  apply  : {A B : Set} → A ⇒ B → A → B

{-# POLARITY _⇒_ ++ #-}

data ⊥ : Set where

data D : Set where
  c : D ⇒ ⊥ → D

not-inhabited : D → ⊥
not-inhabited (c f) = apply f (c f)

d : D
d = c (lambda not-inhabited)

bad : ⊥
bad = not-inhabited d
Polarity pragmas are not allowed in --safe mode.
Postulates¶
A postulate is a declaration of an element of some type without an accompanying definition. With postulates we can introduce elements in a type without actually giving the definition of the element itself.
The general form of a postulate declaration is as follows:
postulate
  c11 ... c1i : <Type>
  ...
  cn1 ... cnj : <Type>
Example:
postulate
  A B    : Set
  a      : A
  b      : B
  _=AB=_ : A -> B -> Set
  a==b   : a =AB= b
Introducing postulates is in general not recommended. Once postulates are introduced the consistency of the whole development is at risk, because there is nothing that prevents us from introducing an element in the empty set.
data False : Set where
postulate bottom : False
A preferable way to work is to define a module parametrised by the elements we need:
module Absurd (bt : False) where
  -- ...

module M (A B : Set) (a : A) (b : B)
         (_=AB=_ : A -> B -> Set) (a==b : a =AB= b) where
  -- ...
Pragmas¶
Pragmas are comments that are not ignored by Agda but have some special meaning. The general format is:
{-# <PRAGMA_NAME> <arguments> #-}
Index of pragmas¶
 BUILTIN
 CATCHALL
 COMPILE
 DISPLAY
 FOREIGN
 INJECTIVE
 INLINE
 NO_POSITIVITY_CHECK
 NO_TERMINATION_CHECK
 NO_UNIVERSE_CHECK
 NOINLINE
 NON_COVERING
 NON_TERMINATING
 OPTIONS
 POLARITY
 REWRITE
 STATIC
 TERMINATING
 WARNING_ON_USAGE
 WARNING_ON_IMPORT
See also Commandline and pragma options.
The DISPLAY
pragma¶
Users can declare a DISPLAY
pragma:
{-# DISPLAY f e1 .. en = e #-}
This causes f e1 .. en
to be printed in the same way as e
, where
ei
can bind variables used in e
. The expressions ei
and e
are scope checked, but not type checked.
For example this can be used to print overloaded (instance) functions with the overloaded name:
instance
  NumNat : Num Nat
  NumNat = record { ..; _+_ = natPlus }

{-# DISPLAY natPlus a b = a + b #-}
Limitations
- Left-hand sides are restricted to variables, constructors, defined functions or types, and literals. In particular, lambdas are not allowed in left-hand sides.
- Since DISPLAY pragmas are not type checked, implicit argument insertion may not work properly if the type of f computes to an implicit function space after pattern matching.
The INJECTIVE
pragma¶
Injective pragmas can be used to mark a definition as injective for the pattern matching unifier. This can be used as a version of --injective-type-constructors that only applies to specific datatypes.
Example:
open import Agda.Builtin.Equality
open import Agda.Builtin.Nat

data Fin : Nat → Set where
  zero : {n : Nat} → Fin (suc n)
  suc  : {n : Nat} → Fin n → Fin (suc n)

{-# INJECTIVE Fin #-}

Fin-injective : {m n : Nat} → Fin m ≡ Fin n → m ≡ n
Fin-injective refl = refl
Aside from datatypes, this pragma can also be used to mark other definitions as being injective (for example postulates).
The INLINE
and NOINLINE
pragmas¶
A definition marked with an INLINE
pragma is inlined during compilation. If it is a simple
definition that does no pattern matching, it is also inlined in function bodies at type-checking time.
When the --auto-inline command-line option is enabled, definitions are automatically marked INLINE if they satisfy the following criteria:
- No pattern matching.
- Uses each argument at most once.
- Does not use all its arguments.
Automatic inlining can be prevented using the NOINLINE
pragma.
Example:
 Would be autoinlined since it doesn't use the type arguments.
_∘_ : {A B C : Set} → (B → C) → (A → B) → A → C
(f ∘ g) x = f (g x)
{-# NOINLINE _∘_ #-} -- prevents auto-inlining
-- Would not be auto-inlined since it's using all its arguments
_o_ : (Set → Set) → (Set → Set) → Set → Set
(F o G) X = F (G X)
{-# INLINE _o_ #-} -- force inlining
The NON_COVERING pragma¶
New in version 2.6.1.
The NON_COVERING pragma can be placed before a function (or a block of mutually defined functions) which the user knows to be partial. It can be used as a version of --allow-incomplete-matches that only applies to specific functions.
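For example, a knowingly partial function can be marked directly (a minimal sketch, assuming List and _∷_ from Agda.Builtin.List are in scope):
{-# NON_COVERING #-}
head : {A : Set} → List A → A
head (x ∷ _) = x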
The OPTIONS pragma¶
Some options can be given at the top of .agda files in the form
{-# OPTIONS --{opt₁} --{opt₂} ... #-}
The possible options are listed in Command-line and pragma options.
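For example, a file could begin with a pragma enabling the --safe and --without-K options (both are existing Agda options; the module name is illustrative):
{-# OPTIONS --safe --without-K #-}
module Example where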
The WARNING_ON_* pragmas¶
A library author can use a WARNING_ON_USAGE pragma to attach to a defined name a warning to be raised whenever this name is used (since Agda 2.5.4). Similarly, they can use a WARNING_ON_IMPORT pragma to attach to a module a warning to be raised whenever this module is imported (since Agda 2.6.1).
This would typically be used to declare a name or a module ‘DEPRECATED’ and advise the end-user to port their code before the feature is dropped.
Users can turn these warnings off by using the --warn=noUserWarning option.
For more information about the warning machinery, see Warnings.
Example:
-- The new name for the identity
id : {A : Set} → A → A
id x = x
-- The deprecated name
λx→x = id
-- The warning
{-# WARNING_ON_USAGE λx→x "DEPRECATED: Use `id` instead of `λx→x`" #-}
{-# WARNING_ON_IMPORT "DEPRECATED: Use module `Function.Identity` rather than `Identity`" #-}
Prop¶
Prop is Agda’s builtin sort of definitionally proof-irrelevant propositions. It is similar to the sort Set, but all elements of a type in Prop are considered to be (definitionally) equal.
The implementation of Prop is based on the POPL 2019 paper Definitional Proof-Irrelevance without K by Gaëtan Gilbert, Jesper Cockx, Matthieu Sozeau, and Nicolas Tabareau.
Usage¶
Just as for Set, we can define new types in Prop as data or record types:
data ⊥ : Prop where
record ⊤ : Prop where
constructor tt
When defining a function from a data type in Prop to a type in Set, pattern matching is restricted to the absurd pattern ():
absurd : (A : Set) → ⊥ → A
absurd A ()
Unlike for Set, all elements of a type in Prop are definitionally equal. This implies that all applications of absurd are the same:
only-one-absurdity : {A : Set} → (p q : ⊥) → absurd A p ≡ absurd A q
only-one-absurdity p q = refl
Since pattern matching on datatypes in Prop is limited, it is recommended to define types in Prop as recursive functions rather than inductive datatypes. For example, the relation _≤_ on natural numbers can be defined as follows:
_≤_ : Nat → Nat → Prop
zero ≤ y = ⊤
suc x ≤ zero = ⊥
suc x ≤ suc y = x ≤ y
The induction principle for _≤_ can then be defined by matching on the arguments of type Nat:
module _ (P : (m n : Nat) → Set)
(pzy : (y : Nat) → P zero y)
(pss : (x y : Nat) → P x y → P (suc x) (suc y)) where
≤-ind : (m n : Nat) → m ≤ n → P m n
≤-ind zero y pf = pzy y
≤-ind (suc x) (suc y) pf = pss x y (≤-ind x y pf)
≤-ind (suc _) zero ()
Note that while it is also possible to define _≤_ as a datatype in Prop, it is hard to use that version because of the limitations to matching.
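For comparison, here is a sketch of such a datatype version (the names _≤d_, zero≤ and suc≤suc are illustrative, not from the manual); because of the matching restrictions above, its eliminator into Set cannot be defined:
data _≤d_ : Nat → Nat → Prop where
zero≤ : {n : Nat} → zero ≤d n
suc≤suc : {m n : Nat} → m ≤d n → suc m ≤d suc n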
When defining a record type in Set, the types of the fields can be both in Set and Prop. For example:
record Fin (n : Nat) : Set where
constructor _[_]
field
⟦_⟧ : Nat
proof : suc ⟦_⟧ ≤ n
open Fin
Fin≡ : ∀ {n} (x y : Fin n) → ⟦ x ⟧ ≡ ⟦ y ⟧ → x ≡ y
Fin≡ x y refl = refl
The predicative hierarchy of Prop¶
Just as for Set, Agda has a predicative hierarchy of sorts Prop₀ (= Prop), Prop₁, Prop₂, … where Prop₀ : Set₁, Prop₁ : Set₂, Prop₂ : Set₃, etc. Like Set, Prop also supports universe polymorphism (see universe levels), so for each ℓ : Level we have the sort Prop ℓ. For example:
True : ∀ {ℓ} → Prop (lsuc ℓ)
True {ℓ} = ∀ (P : Prop ℓ) → P → P
The propositional squash type¶
When defining a datatype in Prop ℓ, it is allowed to have constructors that take arguments in Set ℓ′ for any ℓ′ ≤ ℓ. For example, this allows us to define the propositional squash type and its eliminator:
data Squash {ℓ} (A : Set ℓ) : Prop ℓ where
squash : A → Squash A
squash-elim : ∀ {ℓ₁ ℓ₂} (A : Set ℓ₁) (P : Prop ℓ₂) → (A → P) → Squash A → P
squash-elim A P f (squash x) = f x
This type allows us to simulate Agda’s existing irrelevant arguments (see irrelevance) by replacing .A with Squash A.
Limitations¶
It is possible to define an equality type in Prop as follows:
data _≐_ {ℓ} {A : Set ℓ} (x : A) : A → Prop ℓ where
refl : x ≐ x
However, the corresponding eliminator cannot be defined because of the limitations on pattern matching. As a consequence, this equality type is only useful for refuting impossible equations:
0≢1 : 0 ≐ 1 → ⊥
0≢1 ()
Record Types¶
Records are types for grouping values together. They generalise the dependent product type by providing named fields and (optional) further components.
Example: the Pair type constructor¶
Record types can be declared using the record keyword
record Pair (A B : Set) : Set where
field
fst : A
snd : B
This defines a new type constructor Pair : Set → Set → Set and two projection functions
Pair.fst : {A B : Set} → Pair A B → A
Pair.snd : {A B : Set} → Pair A B → B
Elements of record types can be defined using a record expression
p23 : Pair Nat Nat
p23 = record { fst = 2; snd = 3 }
or using copatterns. Copatterns may be used prefix
p34 : Pair Nat Nat
Pair.fst p34 = 3
Pair.snd p34 = 4
suffix (in which case they are written prefixed with a dot)
p56 : Pair Nat Nat
p56 .Pair.fst = 5
p56 .Pair.snd = 6
or using an anonymous copattern-matching lambda (you may only use the suffix form of copatterns in this case)
p78 : Pair Nat Nat
p78 = λ where
.Pair.fst → 7
.Pair.snd → 8
If you use the constructor keyword, you can also use the named constructor to define elements of the record type:
record Pair (A B : Set) : Set where
constructor _,_
field
fst : A
snd : B
p45 : Pair Nat Nat
p45 = 4 , 5
In this sense, record types behave much like single-constructor datatypes (but see Eta-expansion below).
Declaring, constructing and decomposing records¶
Declaring record types¶
The general form of a record declaration is as follows:
record <recordname> <parameters> : Set <level> where
<directives>
constructor <constructorname>
field
<fieldname1> : <type1>
<fieldname2> : <type2>
 ...
<declarations>
All the components are optional, and can be given in any order. In particular, fields can be given in more than one block, interspersed with other declarations. Each field is a component of the record. Types of later fields can depend on earlier fields.
The directives available are eta-equality, no-eta-equality, pattern (see Eta-expansion), inductive and coinductive (see Recursive records).
Constructing record values¶
Record values are constructed by giving a value for each record field:
record { <fieldname1> = <term1> ; <fieldname2> = <term2> ; ... }
where the types of the terms match the types of the fields. If a constructor <constructorname> has been declared for the record, this can also be written
<constructorname> <term1> <term2> ...
For named definitions, this can also be expressed using copatterns:
<nameddef> : <recordname> <parameters>
<recordname>.<fieldname1> <nameddef> = <term1>
<recordname>.<fieldname2> <nameddef> = <term2>
...
Records can also be constructed by updating other records.
Building records from modules¶
The record { <fields> } syntax also accepts module names. Fields are defined using the corresponding definitions from the given module.
For instance, assuming this record type R and module M:
record R : Set where
field
x : X
y : Y
z : Z
module M where
x = ...
y = ...
r : R
r = record { M; z = ... }
This construction supports any combination of explicit field definitions and applied modules. If a field is both given explicitly and available in one of the modules, then the explicit one takes precedence. If a field is available in more than one module then this is ambiguous and therefore rejected. As a consequence the order of assignments does not matter.
The modules can be both applied to arguments and have import directives such as hiding, using, and renaming. Here is a contrived example building on the example above:
module M2 (a : A) where
w = ...
z = ...
r2 : A → R
r2 a = record { M hiding (y); M2 a renaming (w to y) }
Decomposing record values¶
With the field name, we can project the corresponding component out of a record value. It is also possible to pattern match against inductive records:
sum : Pair Nat Nat → Nat
sum (x , y) = x + y
Or, using a let binding with a record pattern:
sum' : Pair Nat Nat → Nat
sum' p = let (x , y) = p in x + y
Note
Naming the constructor is not required to enable pattern matching against record values. Record expressions can appear as patterns.
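For example, the sum function above can be rewritten with a record expression as the pattern, without referring to the _,_ constructor (the name sum″ is illustrative):
sum″ : Pair Nat Nat → Nat
sum″ record { fst = x ; snd = y } = x + y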
Record update¶
Assume that we have a record type and a corresponding value:
record MyRecord : Set where
field
a b c : Nat
old : MyRecord
old = record { a = 1; b = 2; c = 3 }
Then we can update (some of) the record value’s fields in the following way:
new : MyRecord
new = record old { a = 0; c = 5 }
Here new normalises to record { a = 0; b = 2; c = 5 }. Any expression yielding a value of type MyRecord can be used instead of old. Using the fact that records can be built from module names, together with the fact that all records define a module, this can also be written as
new' : MyRecord
new' = record { MyRecord old; a = 0; c = 5 }
Record updating is not allowed to change types: the resulting value must have the same type as the original one, including the record parameters. Thus, the type of a record update can be inferred if the type of the original record can be inferred.
The record update syntax is expanded before type checking. When the expression
record old { upd-fields }
is checked against a record type R, it is expanded to
let r = old in record { new-fields }
where old is required to have type R and new-fields is defined as follows: for each field x in R,
- if x = e is contained in upd-fields then x = e is included in new-fields, and otherwise
- if x is an explicit field then x = R.x r is included in new-fields, and
- if x is an implicit or instance field, then it is omitted from new-fields.
The reason for treating implicit and instance fields specially is to allow code like the following:
data Vec (A : Set) : Nat → Set where
[] : Vec A zero
_∷_ : ∀{n} → A → Vec A n → Vec A (suc n)
record R : Set where
field
{length} : Nat
vec : Vec Nat length
-- More fields ...
xs : R
xs = record { vec = 0 ∷ 1 ∷ 2 ∷ [] }
ys = record xs { vec = 0 ∷ [] }
Without the special treatment the last expression would need to include a new binding for length (for instance length = _).
Record modules¶
Along with a new type, a record declaration also defines a module with the same name, parameterised over an element of the record type containing the projection functions. This allows records to be “opened”, bringing the fields into scope. For instance
swap : {A B : Set} → Pair A B → Pair B A
swap p = snd , fst
where open Pair p
In the example, the record module Pair has the shape
module Pair {A B : Set} (p : Pair A B) where
fst : A
snd : B
It’s possible to add arbitrary definitions to the record module, by defining them inside the record declaration
record Functor (F : Set → Set) : Set₁ where
field
fmap : ∀ {A B} → (A → B) → F A → F B
_<$_ : ∀ {A B} → A → F B → F A
x <$ fb = fmap (λ _ → x) fb
Note
In general new definitions need to appear after the field declarations, but simple non-recursive function definitions without pattern matching can be interleaved with the fields. The reason for this restriction is that the type of the record constructor needs to be expressible using let-expressions. In the example below D₁ can only contain declarations for which the generated type of mkR is well-formed.
record R Γ : Setᵢ where
constructor mkR
field f₁ : A₁
D₁
field f₂ : A₂
mkR : ∀ {Γ} (f₁ : A₁) (let D₁) (f₂ : A₂) → R Γ
Eta-expansion¶
The eta (η) rule for a record type
record R : Set where
field
a : A
b : B
c : C
states that every x : R is definitionally equal to record { a = R.a x ; b = R.b x ; c = R.c x }. By default, all (inductive) record types enjoy η-equality if the positivity checker has confirmed it is safe to have it. The keywords eta-equality/no-eta-equality enable/disable η rules for the record type being declared.
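As a sketch, such a directive is written at the top of the record declaration (the record name Wrap is illustrative, not from the manual):
record Wrap (A : Set) : Set where
no-eta-equality
field unwrap : A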
Recursive records¶
Recursive records need to be declared as either inductive or coinductive.
record Tree (A : Set) : Set where
inductive
constructor tree
field
elem : A
subtrees : List (Tree A)
record Stream (A : Set) : Set where
coinductive
constructor _::_
field
head : A
tail : Stream A
Inductive records have eta-equality on by default, while no-eta-equality is the default for coinductive records. In fact, the eta-equality and coinductive directives are not allowed together, since this can easily make Agda loop. This can be overridden at your own risk by using the pragma ETA instead.
It is possible to pattern match on inductive records, but not on coinductive ones. However, inductive records without η-equality do not support both matching on the record constructor and construction of record elements by copattern matching. It has been discovered that the combination of both leads to loss of subject reduction, i.e., reduction does not preserve typing. For records without η, matching on the record constructor is off by default and construction by copattern matching is on. If you want the converse, you can add the record directive pattern:
record HereditaryList : Set where
inductive
noetaequality
pattern
field sublists : List HereditaryList
pred : HereditaryList → List HereditaryList
pred record{ sublists = ts } = ts
Instance fields¶
Instance fields, that is, record fields marked with {{ }}, can be used to model “superclass” dependencies. For example:
record Eq (A : Set) : Set where
field
_==_ : A → A → Bool
open Eq {{...}}
record Ord (A : Set) : Set where
field
_<_ : A → A → Bool
{{eqA}} : Eq A
open Ord {{...}} hiding (eqA)
Now anytime you have a function taking an Ord A argument the Eq A instance is also available by virtue of η-expansion. So this works as you would expect:
_≤_ : {A : Set} {{OrdA : Ord A}} → A → A → Bool
x ≤ y = (x == y) || (x < y)
There is a problem, however, if you have multiple record arguments with conflicting instance fields. For instance, suppose we also have a Num record with an Eq field
record Num (A : Set) : Set where
field
fromNat : Nat → A
{{eqA}} : Eq A
open Num {{...}} hiding (eqA)
_≤3 : {A : Set} {{OrdA : Ord A}} {{NumA : Num A}} → A → Bool
x ≤3 = (x == fromNat 3) || (x < fromNat 3)
Here the Eq A argument to _==_ is not resolved since there are two conflicting candidates: Ord.eqA OrdA and Num.eqA NumA. To solve this problem you can declare instance fields as overlappable using the overlap keyword:
record Ord (A : Set) : Set where
field
_<_ : A → A → Bool
overlap {{eqA}} : Eq A
open Ord {{...}} hiding (eqA)
record Num (A : Set) : Set where
field
fromNat : Nat → A
overlap {{eqA}} : Eq A
open Num {{...}} hiding (eqA)
_≤3 : {A : Set} {{OrdA : Ord A}} {{NumA : Num A}} → A → Bool
x ≤3 = (x == fromNat 3) || (x < fromNat 3)
Whenever there are multiple valid candidates for an instance goal, if all candidates are overlappable, the goal is solved by the leftmost candidate. In the example above that means that the Eq A goal is solved by the instance from the Ord argument.
Clauses for instance fields can be omitted when defining values of record types. For instance, we can define Nat instances for Eq, Ord and Num as follows, leaving out cases for the eqA fields:
instance
EqNat : Eq Nat
_==_ {{EqNat}} = Agda.Builtin.Nat._==_
OrdNat : Ord Nat
_<_ {{OrdNat}} = Agda.Builtin.Nat._<_
NumNat : Num Nat
fromNat {{NumNat}} n = n
Reflection¶
Builtin types¶
Names¶
The builtin QNAME type represents quoted names and comes equipped with equality, ordering, and a show function.
postulate Name : Set
{-# BUILTIN QNAME Name #-}
primitive
primQNameEquality : Name → Name → Bool
primQNameLess : Name → Name → Bool
primShowQName : Name → String
The fixity of a name can also be retrieved.
primitive
primQNameFixity : Name → Fixity
To define a decidable propositional equality with the option --safe, one can use the conversion to a pair of builtin 64-bit machine words
primitive
primQNameToWord64s : Name → Σ Word64 (λ _ → Word64)
with the injectivity proof in the Properties module:
primitive
primQNameToWord64sInjective : ∀ a b → primQNameToWord64s a ≡ primQNameToWord64s b → a ≡ b
Name literals are created using the quote keyword and can appear both in terms and in patterns
nameOfNat : Name
nameOfNat = quote Nat
isNat : Name → Bool
isNat (quote Nat) = true
isNat _ = false
Note that the name being quoted must be in scope.
Metavariables¶
Metavariables are represented by the builtin AGDAMETA type. They have primitive equality, ordering, show, and conversion to Nat:
postulate Meta : Set
{-# BUILTIN AGDAMETA Meta #-}
primitive
primMetaEquality : Meta → Meta → Bool
primMetaLess : Meta → Meta → Bool
primShowMeta : Meta → String
primMetaToNat : Meta → Nat
Builtin metavariables show up in reflected terms. In Properties, there is a proof of injectivity of primMetaToNat
primitive
primMetaToNatInjective : ∀ a b → primMetaToNat a ≡ primMetaToNat b → a ≡ b
which can be used to define a decidable propositional equality with the option --safe.
Literals¶
Literals are mapped to the builtin AGDALITERAL datatype. Given the appropriate builtin bindings for the types Nat, Float, etc., the AGDALITERAL datatype has the following shape:
data Literal : Set where
nat : (n : Nat) → Literal
word64 : (n : Word64) → Literal
float : (x : Float) → Literal
char : (c : Char) → Literal
string : (s : String) → Literal
name : (x : Name) → Literal
meta : (x : Meta) → Literal
{-# BUILTIN AGDALITERAL Literal #-}
{-# BUILTIN AGDALITNAT nat #-}
{-# BUILTIN AGDALITWORD64 word64 #-}
{-# BUILTIN AGDALITFLOAT float #-}
{-# BUILTIN AGDALITCHAR char #-}
{-# BUILTIN AGDALITSTRING string #-}
{-# BUILTIN AGDALITQNAME name #-}
{-# BUILTIN AGDALITMETA meta #-}
Arguments¶
Arguments can be (visible), {hidden}, or {{instance}}:
data Visibility : Set where
visible hidden instance′ : Visibility
{-# BUILTIN HIDING Visibility #-}
{-# BUILTIN VISIBLE visible #-}
{-# BUILTIN HIDDEN hidden #-}
{-# BUILTIN INSTANCE instance′ #-}
Arguments can be relevant or irrelevant:
data Relevance : Set where
relevant irrelevant : Relevance
{-# BUILTIN RELEVANCE Relevance #-}
{-# BUILTIN RELEVANT relevant #-}
{-# BUILTIN IRRELEVANT irrelevant #-}
Arguments also have a quantity:
data Quantity : Set where
quantity-0 quantity-ω : Quantity
{-# BUILTIN QUANTITY Quantity #-}
{-# BUILTIN QUANTITY-0 quantity-0 #-}
{-# BUILTIN QUANTITY-ω quantity-ω #-}
Relevance and quantity are combined into a modality:
data Modality : Set where
modality : (r : Relevance) (q : Quantity) → Modality
{-# BUILTIN MODALITY Modality #-}
{-# BUILTIN MODALITYCONSTRUCTOR modality #-}
The visibility and the modality characterise the behaviour of an argument:
data ArgInfo : Set where
arg-info : (v : Visibility) (m : Modality) → ArgInfo
data Arg (A : Set) : Set where
arg : (i : ArgInfo) (x : A) → Arg A
{-# BUILTIN ARGINFO ArgInfo #-}
{-# BUILTIN ARGARGINFO arg-info #-}
{-# BUILTIN ARG Arg #-}
{-# BUILTIN ARGARG arg #-}
Name abstraction¶
data Abs (A : Set) : Set where
abs : (s : String) (x : A) → Abs A
{-# BUILTIN ABS Abs #-}
{-# BUILTIN ABSABS abs #-}
Terms¶
Terms, sorts, patterns, and clauses are mutually recursive and mapped to the AGDATERM, AGDASORT, AGDAPATTERN and AGDACLAUSE builtins respectively. Types are simply terms. Terms and patterns use de Bruijn indices to represent variables.
data Term : Set
data Sort : Set
data Pattern : Set
data Clause : Set
Type = Term
Telescope = List (Σ String λ _ → Arg Type)
data Term where
var : (x : Nat) (args : List (Arg Term)) → Term
con : (c : Name) (args : List (Arg Term)) → Term
def : (f : Name) (args : List (Arg Term)) → Term
lam : (v : Visibility) (t : Abs Term) → Term
pat-lam : (cs : List Clause) (args : List (Arg Term)) → Term
pi : (a : Arg Type) (b : Abs Type) → Term
agda-sort : (s : Sort) → Term
lit : (l : Literal) → Term
meta : (x : Meta) → List (Arg Term) → Term
unknown : Term -- Treated as '_' when unquoting.
data Sort where
set : (t : Term) → Sort -- A Set of a given (possibly neutral) level.
lit : (n : Nat) → Sort -- A Set of a given concrete level.
prop : (t : Term) → Sort -- A Prop of a given (possibly neutral) level.
propLit : (n : Nat) → Sort -- A Prop of a given concrete level.
inf : (n : Nat) → Sort -- Setωi of a given concrete level i.
unknown : Sort
data Pattern where
con : (c : Name) (ps : List (Arg Pattern)) → Pattern
dot : (t : Term) → Pattern
var : (x : Nat ) → Pattern
lit : (l : Literal) → Pattern
proj : (f : Name) → Pattern
absurd : (x : Nat) → Pattern -- Absurd patterns have de Bruijn indices
data Clause where
clause : (tel : Telescope) (ps : List (Arg Pattern)) (t : Term) → Clause
absurd-clause : (tel : Telescope) (ps : List (Arg Pattern)) → Clause
{-# BUILTIN AGDATERM Term #-}
{-# BUILTIN AGDASORT Sort #-}
{-# BUILTIN AGDAPATTERN Pattern #-}
{-# BUILTIN AGDACLAUSE Clause #-}
{-# BUILTIN AGDATERMVAR var #-}
{-# BUILTIN AGDATERMCON con #-}
{-# BUILTIN AGDATERMDEF def #-}
{-# BUILTIN AGDATERMMETA meta #-}
{-# BUILTIN AGDATERMLAM lam #-}
{-# BUILTIN AGDATERMEXTLAM pat-lam #-}
{-# BUILTIN AGDATERMPI pi #-}
{-# BUILTIN AGDATERMSORT agda-sort #-}
{-# BUILTIN AGDATERMLIT lit #-}
{-# BUILTIN AGDATERMUNSUPPORTED unknown #-}
{-# BUILTIN AGDASORTSET set #-}
{-# BUILTIN AGDASORTLIT lit #-}
{-# BUILTIN AGDASORTPROP prop #-}
{-# BUILTIN AGDASORTPROPLIT propLit #-}
{-# BUILTIN AGDASORTINF inf #-}
{-# BUILTIN AGDASORTUNSUPPORTED unknown #-}
{-# BUILTIN AGDAPATCON con #-}
{-# BUILTIN AGDAPATDOT dot #-}
{-# BUILTIN AGDAPATVAR var #-}
{-# BUILTIN AGDAPATLIT lit #-}
{-# BUILTIN AGDAPATPROJ proj #-}
{-# BUILTIN AGDAPATABSURD absurd #-}
{-# BUILTIN AGDACLAUSECLAUSE clause #-}
{-# BUILTIN AGDACLAUSEABSURD absurd-clause #-}
Absurd lambdas λ () are quoted to extended lambdas with an absurd clause.
The builtin constructors AGDATERMUNSUPPORTED and AGDASORTUNSUPPORTED are translated to metavariables when unquoting.
Declarations¶
There is a builtin type AGDADEFINITION representing definitions. Values of this type are returned by the AGDATCMGETDEFINITION builtin described below.
data Definition : Set where
function : (cs : List Clause) → Definition
data-type : (pars : Nat) (cs : List Name) → Definition -- parameters and constructors
record-type : (c : Name) (fs : List (Arg Name)) → Definition -- c: name of record constructor, fs: fields
data-cons : (d : Name) → Definition -- d: name of data type
axiom : Definition
prim-fun : Definition
{-# BUILTIN AGDADEFINITION Definition #-}
{-# BUILTIN AGDADEFINITIONFUNDEF function #-}
{-# BUILTIN AGDADEFINITIONDATADEF data-type #-}
{-# BUILTIN AGDADEFINITIONRECORDDEF record-type #-}
{-# BUILTIN AGDADEFINITIONDATACONSTRUCTOR data-cons #-}
{-# BUILTIN AGDADEFINITIONPOSTULATE axiom #-}
{-# BUILTIN AGDADEFINITIONPRIMITIVE prim-fun #-}
Type errors¶
Type checking computations (see below) can fail with an error, which is a list of ErrorParts. This allows metaprograms to generate nice errors without having to implement pretty printing for reflected terms.
-- Error messages can contain embedded names and terms.
data ErrorPart : Set where
strErr : String → ErrorPart
termErr : Term → ErrorPart
nameErr : Name → ErrorPart
{-# BUILTIN AGDAERRORPART ErrorPart #-}
{-# BUILTIN AGDAERRORPARTSTRING strErr #-}
{-# BUILTIN AGDAERRORPARTTERM termErr #-}
{-# BUILTIN AGDAERRORPARTNAME nameErr #-}
Type checking computations¶
Metaprograms, i.e. programs that create other programs, run in a builtin type checking monad TC:
postulate
TC : ∀ {a} → Set a → Set a
returnTC : ∀ {a} {A : Set a} → A → TC A
bindTC : ∀ {a b} {A : Set a} {B : Set b} → TC A → (A → TC B) → TC B
{-# BUILTIN AGDATCM TC #-}
{-# BUILTIN AGDATCMRETURN returnTC #-}
{-# BUILTIN AGDATCMBIND bindTC #-}
The TC monad provides an interface to the Agda type checker using the following primitive operations:
postulate
-- Unify two terms, potentially solving metavariables in the process.
unify : Term → Term → TC ⊤
-- Throw a type error. Can be caught by catchTC.
typeError : ∀ {a} {A : Set a} → List ErrorPart → TC A
-- Block a type checking computation on a metavariable. This will abort
-- the computation and restart it (from the beginning) when the
-- metavariable is solved.
blockOnMeta : ∀ {a} {A : Set a} → Meta → TC A
-- Prevent current solutions of metavariables from being rolled back in
-- case 'blockOnMeta' is called.
commitTC : TC ⊤
-- Backtrack and try the second argument if the first argument throws a
-- type error.
catchTC : ∀ {a} {A : Set a} → TC A → TC A → TC A
-- Infer the type of a given term.
inferType : Term → TC Type
-- Check a term against a given type. This may resolve implicit arguments
-- in the term, so a new refined term is returned. Can be used to create
-- new metavariables: newMeta t = checkType unknown t
checkType : Term → Type → TC Term
-- Compute the normal form of a term.
normalise : Term → TC Term
-- Compute the weak head normal form of a term.
reduce : Term → TC Term
-- Get the current context. Returns the context in reverse order, so that
-- it is indexable by de Bruijn index. Note that the types in the context are
-- valid in the rest of the context. To use them in the current context they
-- need to be weakened by 1 + their position in the list.
getContext : TC (List (Arg Type))
-- Extend the current context with a variable of the given type.
extendContext : ∀ {a} {A : Set a} → Arg Type → TC A → TC A
-- Set the current context. Takes the context's telescope entries in
-- reverse order, as given by `getContext`. Each type should be valid
-- in the context formed by the remaining elements in the list.
inContext : ∀ {a} {A : Set a} → List (Arg Type) → TC A → TC A
-- Quote a value, returning the corresponding Term.
quoteTC : ∀ {a} {A : Set a} → A → TC Term
-- Unquote a Term, returning the corresponding value.
unquoteTC : ∀ {a} {A : Set a} → Term → TC A
-- Quote a value in Setω, returning the corresponding Term.
quoteωTC : ∀ {A : Setω} → A → TC Term
-- Create a fresh name.
freshName : String → TC Name
-- Declare a new function of the given type. The function must be defined
-- later using 'defineFun'. Takes an Arg Name to allow declaring instances
-- and irrelevant functions. The Visibility of the Arg must not be hidden.
declareDef : Arg Name → Type → TC ⊤
-- Declare a new postulate of the given type. The Visibility of the Arg
-- must not be hidden. It fails when executed from the command line with
-- the --safe option.
declarePostulate : Arg Name → Type → TC ⊤
-- Define a declared function. The function may have been declared using
-- 'declareDef' or with an explicit type signature in the program.
defineFun : Name → List Clause → TC ⊤
-- Get the type of a defined name. Replaces 'primNameType'.
getType : Name → TC Type
-- Get the definition of a defined name. Replaces 'primNameDefinition'.
getDefinition : Name → TC Definition
-- Check if a name refers to a macro.
isMacro : Name → TC Bool
-- Change the behaviour of inferType, checkType, quoteTC, getContext
-- to normalise (or not) their results. The default behaviour is no
-- normalisation.
withNormalisation : ∀ {a} {A : Set a} → Bool → TC A → TC A
-- Prints the third argument to the debug buffer in Emacs
-- if the verbosity level (set by the -v flag to Agda)
-- is higher than the second argument. Note that levels 0 and 1 are printed
-- to the info buffer instead. For instance, giving -v a.b.c:10 enables
-- printing from debugPrint "a.b.c.d" 10 msg.
debugPrint : String → Nat → List ErrorPart → TC ⊤
-- Only allow reduction of specific definitions while executing the TC computation.
onlyReduceDefs : ∀ {a} {A : Set a} → List Name → TC A → TC A
-- Don't allow reduction of specific definitions while executing the TC computation.
dontReduceDefs : ∀ {a} {A : Set a} → List Name → TC A → TC A
-- Makes the following primitives reconstruct hidden parameters:
-- getDefinition, normalise, reduce, inferType, checkType and getContext
withReconstructed : ∀ {a} {A : Set a} → TC A → TC A
-- Fail if the given computation gives rise to new, unsolved
-- "blocking" constraints.
noConstraints : ∀ {a} {A : Set a} → TC A → TC A
-- Run the given TC action and return the first component. Resets to
-- the old TC state if the second component is 'false', or keeps the
-- new TC state if it is 'true'.
runSpeculative : ∀ {a} {A : Set a} → TC (Σ A λ _ → Bool) → TC A
{-# BUILTIN AGDATCMUNIFY unify #-}
{-# BUILTIN AGDATCMTYPEERROR typeError #-}
{-# BUILTIN AGDATCMBLOCKONMETA blockOnMeta #-}
{-# BUILTIN AGDATCMCATCHERROR catchTC #-}
{-# BUILTIN AGDATCMINFERTYPE inferType #-}
{-# BUILTIN AGDATCMCHECKTYPE checkType #-}
{-# BUILTIN AGDATCMNORMALISE normalise #-}
{-# BUILTIN AGDATCMREDUCE reduce #-}
{-# BUILTIN AGDATCMGETCONTEXT getContext #-}
{-# BUILTIN AGDATCMEXTENDCONTEXT extendContext #-}
{-# BUILTIN AGDATCMINCONTEXT inContext #-}
{-# BUILTIN AGDATCMQUOTETERM quoteTC #-}
{-# BUILTIN AGDATCMUNQUOTETERM unquoteTC #-}
{-# BUILTIN AGDATCMQUOTEOMEGATERM quoteωTC #-}
{-# BUILTIN AGDATCMFRESHNAME freshName #-}
{-# BUILTIN AGDATCMDECLAREDEF declareDef #-}
{-# BUILTIN AGDATCMDECLAREPOSTULATE declarePostulate #-}
{-# BUILTIN AGDATCMDEFINEFUN defineFun #-}
{-# BUILTIN AGDATCMGETTYPE getType #-}
{-# BUILTIN AGDATCMGETDEFINITION getDefinition #-}
{-# BUILTIN AGDATCMCOMMIT commitTC #-}
{-# BUILTIN AGDATCMISMACRO isMacro #-}
{-# BUILTIN AGDATCMWITHNORMALISATION withNormalisation #-}
{-# BUILTIN AGDATCMDEBUGPRINT debugPrint #-}
{-# BUILTIN AGDATCMONLYREDUCEDEFS onlyReduceDefs #-}
{-# BUILTIN AGDATCMDONTREDUCEDEFS dontReduceDefs #-}
{-# BUILTIN AGDATCMNOCONSTRAINTS noConstraints #-}
{-# BUILTIN AGDATCMRUNSPECULATIVE runSpeculative #-}
Metaprogramming¶
There are three ways to run a metaprogram (TC computation). To run a metaprogram in a term position you use a macro. To run metaprograms to create top-level definitions you can use the unquoteDecl and unquoteDef primitives (see Unquoting Declarations).
Macros¶
Macros are functions of type t₁ → t₂ → .. → Term → TC ⊤ that are defined in a macro block. The last argument is supplied by the type checker and will be the representation of a metavariable that should be instantiated with the result of the macro.
Macro application is guided by the type of the macro, where Term and Name arguments are quoted before being passed to the macro. Arguments of any other type are preserved as-is.
For example, the macro application f u v w where
f : Term → Name → Bool → Term → TC ⊤
desugars into:
unquote (f (quoteTerm u) (quote v) w)
where quoteTerm u takes a u of arbitrary type and returns its representation in the Term data type, and unquote m runs a computation in the TC monad. Specifically, when checking unquote m : A for some type A the type checker proceeds as follows:
1. Check m : Term → TC ⊤.
2. Create a fresh metavariable hole : A.
3. Let qhole : Term be the quoted representation of hole.
4. Execute m qhole.
5. Return (the now hopefully instantiated) hole.
Reflected macro calls are constructed using the def constructor, so given a macro g : Term → TC ⊤ the term def (quote g) [] unquotes to a macro call to g.
Note
The quoteTerm and unquote primitives are available in the language, but it is recommended to avoid using them in favour of macros.
Limitations:
- Macros cannot be recursive. This can be worked around by defining the recursive function outside the macro block and having the macro call the recursive function.
Silly example:
macro
plus-to-times : Term → Term → TC ⊤
plus-to-times (def (quote _+_) (a ∷ b ∷ [])) hole =
unify hole (def (quote _*_) (a ∷ b ∷ []))
plus-to-times v hole = unify hole v
thm : (a b : Nat) → plus-to-times (a + b) ≡ a * b
thm a b = refl
Macros let you write tactics that can be applied without any syntactic overhead. For instance, suppose you have a solver:
magic : Type → Term
that takes a reflected goal and outputs a proof (when successful). You can then define the following macro:
macro
by-magic : Term → TC ⊤
by-magic hole =
bindTC (inferType hole) λ goal →
unify hole (magic goal)
This lets you apply the magic tactic as a normal function:
thm : ¬ P ≡ NP
thm = by-magic
Tactic Arguments¶
You can declare tactics to be used to solve a particular implicit argument using a @(tactic t) annotation. The provided tactic should be a term t : Term → TC ⊤. For instance,
defaultTo : {A : Set} (x : A) → Term → TC ⊤
defaultTo x hole = bindTC (quoteTC x) (unify hole)

f : {@(tactic defaultTo true) x : Bool} → Bool
f {x} = x

test-f : f ≡ true
test-f = refl
At calls to f, defaultTo true is called on the metavariable inserted for x if it is not given explicitly. The tactic can depend on previous arguments to the function. For instance,
g : (x : Nat) {@(tactic defaultTo x) y : Nat} → Nat
g x {y} = x + y

test-g : g 4 ≡ 8
test-g = refl
Record fields can also be annotated with a tactic, allowing them to be omitted in constructor applications, record constructions and copattern matches:
record Bools : Set where
  constructor mkBools
  field fst : Bool
        @(tactic defaultTo fst) {snd} : Bool
open Bools

tt₀ tt₁ tt₂ tt₃ : Bools
tt₀ = mkBools true {true}
tt₁ = mkBools true
tt₂ = record{ fst = true }
tt₃ .fst = true

test-tt : tt₁ ∷ tt₂ ∷ tt₃ ∷ [] ≡ tt₀ ∷ tt₀ ∷ tt₀ ∷ []
test-tt = refl
Unquoting Declarations¶
While macros let you write metaprograms to create terms, it is also useful to
be able to create top-level definitions. You can do this from a macro using the
declareDef
and defineFun
primitives, but there is no way to bring such
definitions into scope. For this purpose there are two top-level primitives
unquoteDecl
and unquoteDef
that run a TC
computation in a
declaration position. They both have the same form:
unquoteDecl x₁ .. xₙ = m
unquoteDef x₁ .. xₙ = m
except that the list of names can be empty for unquoteDecl
, but not for
unquoteDef
. In both cases m
should have type TC ⊤
. The main
difference between the two is that unquoteDecl
requires m
to both
declare (with declareDef
) and define (with defineFun
) the xᵢ
whereas unquoteDef
expects the xᵢ
to be already declared. In other
words, unquoteDecl
brings the xᵢ
into scope, but unquoteDef
requires them to already be in scope.
In m
the xᵢ
stand for the names of the functions being defined (i.e.
xᵢ : Name
) rather than the actual functions.
One advantage of unquoteDef
over unquoteDecl
is that
unquoteDef
is allowed in mutual blocks, allowing mutual recursion between generated definitions and handwritten definitions.
Example usage:
arg′ : {A : Set} → Visibility → A → Arg A
arg′ v = arg (arg-info v (modality relevant quantity-ω))

-- Defining: id-name {A} x = x
defId : (id-name : Name) → TC ⊤
defId id-name = do
  defineFun id-name
    [ clause
        ( ("A" , arg′ visible (agda-sort (lit 0)))
        ∷ ("x" , arg′ visible (var 0 []))
        ∷ [])
        ( arg′ hidden (var 1)
        ∷ arg′ visible (var 0)
        ∷ [] )
        (var 0 [])
    ]

id : {A : Set} (x : A) → A
unquoteDef id = defId id

mkId : (id-name : Name) → TC ⊤
mkId id-name = do
  ty ← quoteTC ({A : Set} (x : A) → A)
  declareDef (arg′ visible id-name) ty
  defId id-name

unquoteDecl id′ = mkId id′
System Calls¶
It is possible to run system calls as part of a metaprogram, using the execTC
builtin. You can use this feature to implement type providers, or to call external solvers. For instance, the following example calls /bin/echo
from Agda:
postulate
  execTC : (exe : String) (args : List String) (stdIn : String)
         → TC (Σ Nat (λ _ → Σ String (λ _ → String)))
{-# BUILTIN AGDATCMEXEC execTC #-}
macro
  echo : List String → Term → TC ⊤
  echo args hole = do
    (exitCode , (stdOut , stdErr)) ← execTC "echo" args ""
    unify hole (lit (string stdOut))

_ : echo ("hello" ∷ "world" ∷ []) ≡ "hello world\n"
_ = refl
The execTC
builtin takes three arguments: the basename of the executable (e.g., "echo"
), a list of arguments, and the contents of the standard input. It returns a triple, consisting of the exit code (as a natural number), the contents of the standard output, and the contents of the standard error.
It would be ill-advised to allow Agda to make arbitrary system calls. Hence, the feature must be activated by passing the --allow-exec option, either on the command line or using a pragma. (Note that --allow-exec is incompatible with --safe.) Furthermore, Agda can only call executables which are listed in the list of trusted executables, ~/.agda/executables
. For instance, to run the example above, you must add /bin/echo
to this file:
# contents of ~/.agda/executables
/bin/echo
The executable can then be called by passing its basename to execTC (dropping the .exe extension on Windows).
Rewriting¶
Rewrite rules allow you to extend Agda’s evaluation relation with new computation rules.
Note
This page is about the --rewriting option and the associated REWRITE builtin. You might be looking for the documentation on the rewrite construct instead.
Rewrite rules by example¶
To enable rewrite rules, you should run Agda with the flag --rewriting and import the modules Agda.Builtin.Equality and Agda.Builtin.Equality.Rewrite:
{-# OPTIONS --rewriting #-}
module language.rewriting where
open import Agda.Builtin.Equality
open import Agda.Builtin.Equality.Rewrite
Overlapping pattern matching¶
To start, let’s look at an example where rewrite rules can solve a
problem that is encountered by almost every newcomer to Agda. This
problem usually pops up as the question why 0 + m
computes to
m
, but m + 0
does not (and similarly, (suc m) + n
computes
to suc (m + n)
but m + (suc n)
does not). This problem
manifests itself for example when trying to prove commutativity of _+_
:
+comm : m + n ≡ n + m
+comm {m = zero} = refl
+comm {m = suc m} = cong suc (+comm {m = m})
Here, Agda complains that n != n + zero of type Nat
. The usual way
to solve this problem is by proving the equations m + 0 ≡ m
and
m + (suc n) ≡ suc (m + n)
and using an explicit rewrite
statement in the main proof (N.B.: Agda’s rewrite
keyword should not
be confused with rewrite rules, which are added by a REWRITE
pragma.)
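For contrast, the explicit-rewrite solution might look like the following sketch (the lemma names +-identityʳ and +-suc are hypothetical, each provable by induction on their first argument):

```agda
-- Sketch of the explicit-rewrite approach; assumes:
--   +-identityʳ : ∀ m → m + 0 ≡ m
--   +-suc       : ∀ m n → m + suc n ≡ suc (m + n)
+comm′ : ∀ m n → m + n ≡ n + m
+comm′ zero    n rewrite +-identityʳ n = refl
+comm′ (suc m) n rewrite +-suc n m
                       | +comm′ m n    = refl
```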
By using rewrite rules, we can instead make both equations hold by definition. First, we need to prove that the equations we want hold as propositional equalities:
+zero : m + zero ≡ m
+zero {m = zero} = refl
+zero {m = suc m} = cong suc +zero
+suc : m + (suc n) ≡ suc (m + n)
+suc {m = zero} = refl
+suc {m = suc m} = cong suc +suc
Next we mark the equalities as rewrite rules with a REWRITE
pragma:
{-# REWRITE +zero +suc #-}
Now the proof of commutativity works exactly as we wrote it before:
+comm : m + n ≡ n + m
+comm {m = zero} = refl
+comm {m = suc m} = cong suc (+comm {m = m})
Note that there is no way to make this proof go through without
rewrite rules: it is essential that _+_
computes both on its first
and its second argument, but there’s no way to define _+_
in such a
way using Agda’s regular pattern matching.
More examples¶
Additional examples of how to use rewrite rules can be found in a blog post by Jesper Cockx.
General shape of rewrite rules¶
In general, an equality proof eq
may be registered as a rewrite
rule using the pragma {-# REWRITE eq #-}
, provided the following
requirements are met:
- The type of eq is of the form eq : (x₁ : A₁) ... (xₖ : Aₖ) → f p₁ ... pₙ ≡ v
- f is a postulate, a defined function symbol, or a constructor applied to fully general parameters (i.e. the parameters must be distinct variables)
- Each variable x₁, …, xₖ occurs at least once in a pattern position in p₁ ... pₙ (see below for the definition of pattern positions)
- The left-hand side f p₁ ... pₙ should be neutral, i.e. it should not reduce.
The following patterns are supported:
- x y₁ ... yₙ, where x is a pattern variable and y₁, …, yₙ are distinct variables that are bound locally in the pattern
- f p₁ ... pₙ, where f is a postulate, a defined function, a constructor, or a data/record type, and p₁, …, pₙ are again patterns
- λ x → p, where p is again a pattern
- (x : P) → Q, where P and Q are again patterns
- y p₁ ... pₙ, where y is a variable bound locally in the pattern and p₁, …, pₙ are again patterns
- Set p or Prop p, where p is again a pattern
- Any other term v (here the variables in v are not considered to be in a pattern position)
Once a rewrite rule has been added, Agda automatically rewrites all
instances of the lefthand side to the corresponding instance of the
righthand side during reduction. More precisely, a term
(definitionally equal to) f p₁σ ... pₙσ
is rewritten to vσ
,
where σ
is any substitution on the pattern variables x₁
,
… xₖ
.
Since rewriting happens after normal reduction, rewrite rules are only applied to terms that would otherwise be neutral.
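Concretely, with the +zero and +suc rules registered above, both equations now hold by refl, since the left-hand sides reduce:

```agda
-- These hold definitionally once the rewrite rules are registered:
_ : ∀ {m} → m + zero ≡ m
_ = refl

_ : ∀ {m n} → m + suc n ≡ suc (m + n)
_ = refl
```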
Confluence checking¶
Agda can optionally check confluence of rewrite rules by enabling the
--confluence-check
flag. Concretely, it does so by enforcing two
properties:
- For any two left-hand sides of the rewrite rules that overlap (either at the root position or at a subterm), the most general unifier of the two left-hand sides is again a left-hand side of a rewrite rule. For example, if there are two rules suc m + n = suc (m + n) and m + suc n = suc (m + n), then there should also be a rule suc m + suc n = suc (suc (m + n)).
- Each rewrite rule should satisfy the triangle property: for any rewrite rule u = w and any single-step parallel unfolding u => v, we should have another single-step parallel unfolding v => w.
There is also a flag --local-confluence-check that is less
restrictive but only checks local confluence of rewrite rules. In case
the rewrite rules are terminating (currently not checked), these two
properties are equivalent.
Advanced usage¶
Instead of importing Agda.Builtin.Equality.Rewrite
, a different
type may be chosen as the rewrite relation by registering it as the
REWRITE
builtin. For example, using the pragma {-# BUILTIN REWRITE _~_ #-}
registers the type _~_
as the rewrite
relation. To qualify as the rewrite relation, the type must take at
least two arguments, and the final two arguments should be visible.
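As a sketch, registering a custom (here postulated, hypothetical) relation could look like this:

```agda
-- Hypothetical rewrite relation; its final two arguments are visible,
-- as required of the REWRITE builtin.
postulate
  _~_ : {A : Set} → A → A → Set

{-# BUILTIN REWRITE _~_ #-}
```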
Runtime Irrelevance¶
From version 2.6.1 Agda supports runtime irrelevance (or erasure) annotations. Values marked as erased are not present at run time, and consequently the type checker enforces that no computations depend on erased values.
Syntax¶
A function or constructor argument is declared erased using the @0
or @erased
annotation.
For example, the following definition of vectors guarantees that the length argument to _∷_
is not
present at runtime:
data Vec (A : Set a) : @0 Nat → Set a where
  []  : Vec A 0
  _∷_ : ∀ {@0 n} → A → Vec A n → Vec A (suc n)
The GHC backend compiles this to a datatype where the cons constructor takes only two arguments.
Note
In this particular case, the compiler identifies that the length argument can be erased also without the annotation, using Brady et al.'s forcing analysis [1]. Marking it erased explicitly, however, ensures that it is erased without relying on the analysis.
Erasure annotations can also appear in function arguments (both first-order and higher-order). For instance, here is
an implementation of foldl
on vectors:
foldl : (B : @0 Nat → Set b)
      → (f : ∀ {@0 n} → B n → A → B (suc n))
      → (z : B 0)
      → ∀ {@0 n} → Vec A n → B n
foldl B f z []       = z
foldl B f z (x ∷ xs) = foldl (λ n → B (suc n)) (λ {n} → f {suc n}) (f z x) xs
Here the length arguments to foldl
and to f
have been marked erased. As a result it gets compiled to the following
Haskell code (modulo renaming):
foldl f z xs
  = case xs of
      []     -> z
      x ∷ xs -> foldl f (f _ z x) xs
In contrast to constructor arguments, erased arguments to higher-order functions are not removed completely, but
instead replaced by a placeholder value _
. The crucial optimization enabled by the erasure annotation is compiling
λ {n} → f {suc n}
to simply f
, removing a terrible space leak from the program. Compare to the result of
compiling without erasure:
foldl f z xs
  = case xs of
      []     -> z
      x ∷ xs -> foldl (\ n -> f (1 + n)) (f 0 z x) xs
It is also possible to mark top-level definitions as erased. This guarantees that they are only used in erased arguments and can be useful to ensure that code intended only for compile-time evaluation is not executed at run time. (One can also use erased things in the bodies of erased definitions.) For instance,
@0 spec : Nat → Nat  -- slow, but easy to verify
impl    : Nat → Nat  -- fast, but hard to understand
proof   : ∀ n → spec n ≡ impl n
Erased record fields become erased arguments to the record constructor and the projection functions are treated as erased definitions.
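As a small sketch (the record Packed is hypothetical), an erased field looks like this:

```agda
record Packed (A : Set) : Set where
  constructor pack
  field
    @0 size : Nat  -- erased: absent from the constructor at run time
    payload : A
open Packed        -- the projection `size` is treated as erased
```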
Constructors can also be marked as erased. Here is one example:
Is-proposition : Set a → Set a
Is-proposition A = (x y : A) → x ≡ y

data ∥_∥ (A : Set a) : Set a where
  ∣_∣        : A → ∥ A ∥
  @0 trivial : Is-proposition ∥ A ∥

rec : @0 Is-proposition B → (A → B) → ∥ A ∥ → B
rec p f ∣ x ∣ = f x
rec p f (trivial x y i) = p (rec p f x) (rec p f y) i
In the code above the constructor trivial is only available at compile time, whereas ∣_∣ is also available at run time. Clauses that match on erased constructors in non-erased positions are omitted by (at least some) compiler backends, so one can use erased names in the bodies of such clauses.
Rules¶
The typing rules are based on Conor McBride’s “I Got Plenty o’ Nuttin’” [2] and Bob Atkey’s “The Syntax and Semantics of Quantitative Type Theory” [3]. In essence the type checker keeps track of whether it is running in run-time mode, checking something that is needed at run time, or compile-time mode, checking something that will be erased. In compile-time mode everything to do with erasure can safely be ignored, but in run-time mode the following restrictions apply:
 Cannot use erased variables or definitions.
- Cannot pattern match on erased arguments, unless there is at most one valid case (not counting erased constructors). If --without-K is enabled and there is one valid case, then the datatype must also not be indexed.
Consider the function foo
taking an erased vector argument:
foo : (n : Nat) (@0 xs : Vec Nat n) → Nat
foo zero [] = 0
foo (suc n) (x ∷ xs) = foo n xs
This is okay (when the K rule is on), since after matching on the
length, the matching on the vector does not provide any computational
information, and any variables in the pattern (x
and xs
in
this case) are marked erased in turn. On the other hand, if we don’t
match on the length first, the type checker complains:
foo : (n : Nat) (@0 xs : Vec Nat n) → Nat
foo n [] = 0
foo n (x ∷ xs) = foo _ xs
 Error: Cannot branch on erased argument of datatype Vec Nat n
The type checker enters compile-time mode when
 checking erased arguments to a constructor or function,
 checking the body of an erased definition,
 checking the body of a clause that matches on an erased constructor,
 checking the domain of an erased Π type, or
- checking a type, i.e. when moving to the right of a :, with some exceptions:
  - Compile-time mode is not entered for the domains of non-erased Π types.
  - If the K rule is off, then compile-time mode is not entered for non-erased constructors (of fibrant type) or record fields.
Note that the type checker does not enter compiletime mode based on
the type a term is checked against (except that a distinction is
sometimes made between fibrant and non-fibrant types). In particular,
checking a term against Set
does not trigger compiletime mode.
Subtyping of runtimeirrelevant function spaces¶
Normally, if f : (@0 x : A) → B then we have λ x → f x : (x : A) → B but not f : (x : A) → B. When the option --subtyping is enabled, Agda will make use of the subtyping rule (@0 x : A) → B <: (x : A) → B, so there is no need to eta-expand the function f.
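A sketch of the difference (assuming a postulated f):

```agda
postulate f : (@0 x : Nat) → Nat

g : (x : Nat) → Nat
g = λ x → f x  -- eta-expansion: accepted without --subtyping
-- With --subtyping, `g = f` would also be accepted directly.
```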
References¶
[1] Brady, Edwin, Conor McBride, and James McKinna. “Inductive Families Need Not Store Their Indices.” International Workshop on Types for Proofs and Programs. Springer, Berlin, Heidelberg, 2003.
[2] McBride, Conor. “I Got Plenty o’Nuttin’.” A List of Successes That Can Change the World. Springer, Cham, 2016.
[3] Atkey, Robert. “The Syntax and Semantics of Quantitative Type Theory”. In LICS ‘18: Oxford, United Kingdom. 2018.
Safe Agda¶
By using the option --safe (as a pragma option, or on the command line), a user can specify that Agda should ensure that features leading to possible inconsistencies should be disabled.
Here is a list of the features --safe is incompatible with:
- postulate; can be used to assume any axiom.
- --allow-unsolved-metas; forces Agda to accept unfinished proofs.
- --allow-incomplete-matches; forces Agda to accept unfinished proofs.
- --no-positivity-check; makes it possible to write non-terminating programs by structural “induction” on non strictly positive datatypes.
- --no-termination-check; gives loopy programs any type.
- --type-in-type and --omega-in-omega; allow the user to encode the Girard-Hurkens paradox.
- --injective-type-constructors; together with excluded middle leads to an inconsistency via Chung-Kil Hur’s construction.
- --guardedness together with --sized-types; currently can be used to define a type which is both inductive and coinductive, which leads to an inconsistency. This might be fixed in the future.
- --experimental-irrelevance and --irrelevant-projections; enable potentially unsound irrelevance features (irrelevant levels, irrelevant data matching, and projection of irrelevant record fields, respectively).
- --rewriting; turns any equation into one that holds definitionally. It can at the very least break convergence.
- --cubical together with --with-K; the univalence axiom is provable using cubical constructions, which falsifies the K axiom.
- The primEraseEquality primitive together with --without-K; using primEraseEquality, one can derive the K axiom.
- --allow-exec; allows system calls during type checking.
The option safe
is coinfective (see
Consistency checking of options used); if a module is declared safe,
then all its imported modules must also be declared safe.
Note
The --guardedness and --sized-types options are both on by default. However, unless they have been set explicitly by the user, setting the --safe option will turn them both off. That is to say that
{-# OPTIONS --safe #-}
will correspond to --safe, --no-guardedness, and --no-sized-types, whereas both
{-# OPTIONS --safe --guardedness #-}
and
{-# OPTIONS --guardedness --safe #-}
will turn on --safe, --guardedness, and --no-sized-types.
Setting both --sized-types and --guardedness whilst demanding that the module is --safe will lead to an error, as combining these options is currently inconsistent.
Sized Types¶
Note
This is a stub.
Sizes help the termination checker by tracking the depth of data structures across definition boundaries.
The builtin combinators for sizes are described in Sized types.
Example for coinduction: finite languages¶
See Abel 2017 and Traytel 2017.
Decidable languages can be represented as infinite trees. Each node has as many
children as the number of characters in the alphabet A
. Each path from the root
of the tree to a node determines a possible word in the language. Each node
has a boolean label, which is true
if and only if the word corresponding
to that node is in the language. In particular, the root node of the tree
is labelled true
if and only if the word ε
belongs to the language.
These infinite trees can be represented as the following coinductive datatype:
record Lang (i : Size) (A : Set) : Set where
  coinductive
  field
    ν : Bool
    δ : ∀{j : Size< i} → A → Lang j A
open Lang
As we said before, given a language a : Lang A
, ν a ≡ true
iff
ε ∈ a
. On the other hand, the language δ a x : Lang A
is the
Brzozowski derivative of a
with respect to the character x
, that is,
w ∈ δ a x
iff xw ∈ a
.
With this data type, we can define some regular languages. The first one, the
empty language, contains no words; so all the nodes are labelled false
:
∅ : ∀ {i A} → Lang i A
ν ∅ = false
δ ∅ _ = ∅
The second one is the language containing a single word; the empty word. The
root node is labelled true
, and all the others are labelled false
:
ε : ∀ {i A} → Lang i A
ν ε = true
δ ε _ = ∅
To compute the union (or sum) of two languages, we do a pointwise or
operation on the labels of their nodes:
_+_ : ∀ {i A} → Lang i A → Lang i A → Lang i A
ν (a + b) = ν a ∨ ν b
δ (a + b) x = δ a x + δ b x
infixl 10 _+_
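Intersection is not covered in the text, but as a hypothetical addition for illustration it follows the same pattern with a pointwise ∧:

```agda
_∩_ : ∀ {i A} → Lang i A → Lang i A → Lang i A
ν (a ∩ b)   = ν a ∧ ν b      -- ε belongs to both languages
δ (a ∩ b) x = δ a x ∩ δ b x  -- derivatives are intersected pointwise
```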
Now, let's define concatenation.
The base case (ν
) is straightforward: ε ∈ a · b
iff ε ∈ a
and ε ∈ b
.
For the derivative (δ
), assume that we have a word w
, w ∈ δ (a · b)
x
. This means that xw = αβ
, with α ∈ a
and β ∈ b
.
We have to consider two cases:
- ε ∈ a. Then, either:
  - α = ε, and β = xw, where w ∈ δ b x.
  - α = xα’, with α’ ∈ δ a x, and w = α’β ∈ δ a x · b.
- ε ∉ a. Then, only the second case above is possible: α = xα’, with α’ ∈ δ a x, and w = α’β ∈ δ a x · b.
_·_ : ∀ {i A} → Lang i A → Lang i A → Lang i A
ν (a · b) = ν a ∧ ν b
δ (a · b) x = if ν a then δ a x · b + δ b x else δ a x · b
infixl 20 _·_
Here is where sized types really shine. Without sized types, the termination
checker would not be able to recognize that _+_
or if_then_else_ are not inspecting the tree, which could render the definition non-productive. By contrast,
with sized types, we know that a + b
is defined to the same
depth as a
and b
are.
In a similar spirit, we can define the Kleene star:
_* : ∀ {i A} → Lang i A → Lang i A
ν (a *) = true
δ (a *) x = δ a x · a *
infixl 30 _*
Again, because the types tell us that _·_ preserves the size of its
inputs, we can have the recursive call to a *
under a function
call to _·_
.
Testing¶
First, we want to give a precise notion of membership in a language.
We consider a word as a List
of characters.
_∈_ : ∀ {i} {A} → List i A → Lang i A → Bool
[] ∈ a = ν a
(x ∷ w) ∈ a = w ∈ δ a x
Note how the size of the word we test for membership cannot be larger than the depth to which the language tree is defined.
If we want to use regular, non-sized lists, we need to ask for the
language to have size ∞
.
_∈_ : ∀ {A} → List A → Lang ∞ A → Bool
[] ∈ a = ν a
(x ∷ w) ∈ a = w ∈ δ a x
Intuitively, ∞
is a Size
larger than the size of any term
that one could possibly define in Agda.
Now, let’s consider binary strings as words. First, we
define the languages ⟦ x ⟧
containing the single word “x” of length 1,
for alphabet A = Bool
:
⟦_⟧ : ∀ {i} → Bool → Lang i Bool
ν ⟦ _ ⟧ = false
δ ⟦ false ⟧ false = ε
δ ⟦ true ⟧ true = ε
δ ⟦ false ⟧ true = ∅
δ ⟦ true ⟧ false = ∅
Now we can define the bipbop language, consisting of strings of even length alternating letters “true” and “false”.
bipbop = (⟦ true ⟧ · ⟦ false ⟧)*
Let’s test a few words for membership in the language bipbop!
test₁ : (true ∷ false ∷ true ∷ false ∷ true ∷ false ∷ []) ∈ bipbop ≡ true
test₁ = refl
test₂ : (true ∷ false ∷ true ∷ false ∷ true ∷ []) ∈ bipbop ≡ false
test₂ = refl
test₃ : (true ∷ true ∷ false ∷ []) ∈ bipbop ≡ false
test₃ = refl
Sort System¶
Sorts (also known as universes) are types whose members themselves are
again types. The fundamental sort in Agda is named Set
and it
denotes the universe of small types. But for some applications, other
sorts are needed. This page explains the need for additional sorts and
describes all the sorts that are used by Agda.
Introduction to universes¶
Russell’s paradox implies that the collection of all sets is not
itself a set. Namely, if there were such a set U
, then one could
form the subset A ⊆ U
of all sets that do not contain
themselves. Then we would have A ∈ A
if and only if A ∉ A
, a
contradiction.
Likewise, Martin-Löf’s type theory originally had a rule Set : Set, but Girard showed that it is inconsistent. This result is known as Girard’s paradox. Hence, not every Agda type is a Set. For
. For
example, we have
Bool : Set
Nat : Set
but not Set : Set
. However, it is often convenient for Set
to
have a type of its own, and so in Agda, it is given the type Set₁
:
Set : Set₁
In many ways, expressions of type Set₁
behave just like
expressions of type Set
; for example, they can be used as types of
other things. However, the elements of Set₁
are potentially
larger; when A : Set₁
, then A
is sometimes called a large
set. In turn, we have
Set₁ : Set₂
Set₂ : Set₃
and so on. A type whose elements are types is called a sort or a
universe; Agda provides an infinite number of universes Set
,
Set₁
, Set₂
, Set₃
, …, each of which is an element of the
next one. In fact, Set
itself is just an abbreviation for
Set₀
. The subscript n
is called the level of the universe
Setₙ
.
Note
You can also write Set1
, Set2
, etc., instead of
Set₁
, Set₂
. To enter a subscript in the Emacs mode, type
“\_1
”.
Universe example¶
So why are universes useful? Because sometimes it is necessary to
define, and prove theorems about, functions that operate not just on
sets but on large sets. In fact, most Agda users sooner or later
experience an error message where Agda complains that Set₁ !=
Set
. These errors usually mean that a small set was used where a
large one was expected, or vice versa.
For example, suppose you have defined the usual datatypes for lists and cartesian products:
data List (A : Set) : Set where
  []   : List A
  _::_ : A → List A → List A

data _×_ (A B : Set) : Set where
  _,_ : A → B → A × B

infixr 5 _::_
infixr 4 _,_
infixr 2 _×_
Now suppose you would like to define an operator Prod
that inputs
a list of n
sets and takes their cartesian product, like this:
Prod (A :: B :: C :: []) = A × B × C
There is only one small problem with this definition. The type of
Prod
should be
Prod : List Set → Set
However, the definition of List A
specified that A
had to be a
Set
. Therefore, List Set
is not a valid type. The solution is
to define a special version of the List
operator that works for
large sets:
data List₁ (A : Set₁) : Set₁ where
  []   : List₁ A
  _::_ : A → List₁ A → List₁ A
With this, we can indeed define:
Prod : List₁ Set → Set
Prod [] = ⊤
Prod (A :: As) = A × Prod As
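A hypothetical usage example (assuming ⊤ and its inhabitant tt, e.g. from Agda.Builtin.Unit):

```agda
example : Prod (Nat :: Bool :: [])  -- i.e. Nat × (Bool × ⊤)
example = 2 , true , tt
```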
Universe polymorphism¶
To allow definitions of functions and datatypes that work for all
possible universes Setᵢ
, Agda provides a type Level
of
universe levels and levelpolymorphic universes Set ℓ
where ℓ :
Level
. For more information, see the page on universe levels.
Agda’s sort system¶
The implementation of Agda’s sort system is closely based on the theory of pure type systems. The full sort system of Agda consists of the following sorts:
- Setᵢ and its universe-polymorphic variant Set ℓ
- Propᵢ and its universe-polymorphic variant Prop ℓ
- Setωᵢ
Sorts Setᵢ
and Set ℓ
¶
As explained in the introduction, Agda has a hierarchy of sorts Setᵢ
: Setᵢ₊₁
, where i
is any concrete natural number, i.e. 0
,
1
, 2
, 3
, … The sort Set
is an abbreviation for
Set₀
.
You can also refer to these sorts with the alternative syntax
Seti
. That means that you can also write Set0
, Set1
,
Set2
, etc., instead of Set₀
, Set₁
, Set₂
.
In addition, Agda supports the universepolymorphic version Set ℓ
where ℓ : Level
(see universe levels).
Sorts Propᵢ
and Prop ℓ
¶
In addition to the hierarchy Setᵢ
, Agda also supports a second
hierarchy Propᵢ : Setᵢ₊₁
(or Propi
) of proofirrelevant
propositions. Like Set
, Prop
also has a
universepolymorphic version Prop ℓ
where ℓ : Level
.
Sorts Setωᵢ
¶
To assign a sort to types such as (ℓ : Level) → Set ℓ
, Agda
further supports an additional sort Setω
that stands above all
sorts Setᵢ
.
Just as for Set
and Prop
, Setω
is the lowest level of an
infinite hierarchy Setωᵢ : Setωᵢ₊₁
where Setω = Setω₀
. You can
also refer to these sorts with the alternative syntax Setωi
. That
means that you can also write Setω0
, Setω1
, Setω2
, etc.,
instead of Setω₀
, Setω₁
, Setω₂
.
Now it is allowed, for instance, to declare a datatype in Setω. Before the implementation of this hierarchy, Setω used to be a term without a type: there was no bigger sort than it in Agda. Now a type can be assigned to it, in this case Setω₁.
However, unlike the standard hierarchy of universes Setᵢ
, this
second hierarchy Setωᵢ
does not support universe
polymorphism. This means that it is not possible to quantify over
all Setωᵢ at once. For example, the expression ∀ {i} (A : Setω i)
→ A → A
would not be a well-formed Agda term. See the section
on Setω
on the page on universe levels for more
information.
Concerning other applications, it should not be necessary to refer to these sorts during normal usage of Agda, but they might be useful for defining reflection-based macros.
Note
When --omega-in-omega
is enabled, Setωᵢ
is
considered to be equal to Setω
for all i
(thus rendering
Agda inconsistent).
Sort metavariables and unknown sorts¶
Under universe polymorphism, levels can be arbitrary terms, e.g., a level that contains free variables. Sometimes, we will have to check that some expression has a valid type without knowing what sort it has. For this reason, Agda’s internal representation of sorts implements a constructor (sort metavariable) representing an unknown sort. The constraint solver can compute these sort metavariables, just like it does when computing regular term metavariables.
However, the presence of sort metavariables also means that sorts of
other types can sometimes not be computed directly. For this reason,
Agda’s internal representation of sorts includes three additional
constructors funSort
, univSort
, and piSort
. These
constructors compute to the proper sort once enough metavariables in
their arguments have been solved.
Note
funSort
, univSort
and piSort
are internal constructors
that may be printed when evaluating a term. The user cannot enter them, nor introduce them in Agda code. These constructors do not represent new sorts; instead, they compute to the right sort once their arguments are known.
funSort¶
The constructor funSort
computes the sort of a function type
even if the sort of the domain and the sort of the codomain are still
unknown.
To understand how funSort
works in general, let us assume the following
scenario:
- sA and sB are two (possibly different) sorts.
- A : sA, meaning that A is a type that has sort sA.
- B : sB, meaning that B is a (possibly different) type that has sort sB.
Under these conditions, we can build the function type
A → B : funSort sA sB
. This type signature means that the function type
A → B
has a (possibly unknown) but well-defined sort funSort sA sB,
specified in terms of the sorts of its domain and codomain.
If sA
and sB
happen to be known, then funSort sA sB
can be computed
to a sort value. We list below all the possible computations that funSort
can perform:
funSort Setωᵢ Setωⱼ = Setωₖ (where k = max(i,j))
funSort Setωᵢ (Set b) = Setωᵢ
funSort Setωᵢ (Prop b) = Setωᵢ
funSort (Set a) Setωⱼ = Setωⱼ
funSort (Prop a) Setωⱼ = Setωⱼ
funSort (Set a) (Set b) = Set (a ⊔ b)
funSort (Prop a) (Set b) = Set (a ⊔ b)
funSort (Set a) (Prop b) = Prop (a ⊔ b)
funSort (Prop a) (Prop b) = Prop (a ⊔ b)
Example: the sort of the function type ∀ {A} → A → A
with normal form
{A : _5} → A → A
evaluates to funSort (univSort _5) (funSort _5 _5)
where:
- _5 is a metavariable that represents the sort of A.
- funSort _5 _5 is the sort of A → A.
Note
funSort
takes just two arguments, so it will be
iterated when the function type has multiple arguments. E.g. the
function type ∀ {A} → A → A → A
evaluates to funSort (univSort
_5) (funSort _5 (funSort _5 _5))
univSort¶
univSort
returns the successor sort of a given sort.
Example: the sort of the function type ∀ {A} → A
with normal form
{A : _5} → A
evaluates to funSort (univSort _5) _5
where:
- univSort _5 is the sort where the sort of A lives, i.e. the successor of _5.
We list below all the possible computations that univSort
can perform:
univSort (Set a) = Set (lsuc a)
univSort (Prop a) = Set (lsuc a)
univSort Setωᵢ = Setωᵢ₊₁
piSort¶
Similarly, piSort s1 s2
is a constructor that computes the sort of
a Πtype given the sort s1
of its domain and the sort s2
of its
codomain as arguments.
To understand how piSort
works in general, we set the following scenario:
- sA and sB are two (possibly different) sorts.
- A : sA, meaning that A is a type that has sort sA.
- x : A, meaning that x has type A.
- B : sB, meaning that B is a type (possibly different from A) that has sort sB.
Under these conditions, we can build the dependent function type
(x : A) → B : piSort sA (λ x → sB)
. This type signature means that the
dependent function type (x : A) → B
has a (possibly unknown) but
well-defined sort piSort sA (λ x → sB)
, specified in terms of the element
x : A
and the sorts of its domain and codomain.
We list below all the possible computations that piSort
can perform:
piSort s1 (λ x → s2) = funSort s1 s2 (if x does not occur freely in s2)
piSort (Set ℓ) (λ x → s2) = Setω (if x occurs rigidly in s2)
piSort (Prop ℓ) (λ x → s2) = Setω (if x occurs rigidly in s2)
piSort Setωᵢ (λ x → s2) = Setωᵢ (if x occurs rigidly in s2)
With these rules, we can compute the sort of the function type ∀ {A}
→ ∀ {B} → B → A → B
(or more explicitly, {A : _9} {B : _7} → B → A
→ B
) to be piSort (univSort _9) (λ A → funSort (univSort _7)
(funSort _7 (funSort _9 _7)))
More examples:
- piSort Level (λ l → Set l) evaluates to Setω
- piSort (Set l) (λ _ → Set l') evaluates to Set (l ⊔ l')
- univSort (Set l) evaluates to Set (lsuc l)
- piSort s (λ x → Setωᵢ) evaluates to funSort s Setωᵢ
Syntactic Sugar¶
Do-notation¶
A do-block consists of the layout keyword do followed by a sequence of do-statements, where

do-stmt    ::= pat ← expr [where lam-clauses]
            |  let decls
            |  expr
lam-clause ::= pat → expr

The where clause of a bind is used to handle the cases not matched by the pattern left of the arrow. See details below.
Note
Arrows can use either unicode (←/→) or ASCII (<-/->) variants.
For example:
filter : {A : Set} → (A → Bool) → List A → List A
filter p xs = do
  x    ← xs
  true ← p x ∷ []
    where false → []
  x ∷ []
Do-notation is desugared before scope checking and is translated into calls to _>>=_ and _>>_, whatever those happen to be bound to in the context of the do-block. This means that do-blocks are not tied to any particular notion of monad. In fact, if there are no monadic statements in the do-block, it can be used as sugar for a let:
pure-do : Nat → Nat
pure-do n = do
  let p2 m = m * m
      p4 m = p2 (p2 m)
  p4 n

check-pure-do : pure-do 5 ≡ 625
check-pure-do = refl
Desugaring¶
Statement             | Sugar               | Desugars to
----------------------+---------------------+----------------
Simple bind           | do x ← m            | m >>= λ x →
                      |    m'               |   m'
Pattern bind          | do p ← m            | m >>= λ where
                      |      where pᵢ → mᵢ  |   p → m'
                      |    m'               |   pᵢ → mᵢ
Absurd match          | do () ← m           | m >>= λ ()
Non-binding statement | do m                | m >>
                      |    m'               |   m'
Let                   | do let ds           | let ds in
                      |    m'               |   m'
If the pattern in the bind is exhaustive, the where-clause can be omitted.
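As a small illustration, here is a sketch with a Maybe bind defined inline (the names addPair and _>>=_ here are ours, not from the manual). The pair pattern covers all values of the Σ type, so no where-clause is needed:

```agda
open import Agda.Builtin.Maybe
open import Agda.Builtin.Nat
open import Agda.Builtin.Sigma

_>>=_ : {A B : Set} → Maybe A → (A → Maybe B) → Maybe B
nothing >>= f = nothing
just x  >>= f = f x

-- (x , y) is an exhaustive pattern for Σ Nat (λ _ → Nat),
-- so the pattern bind needs no where-clause.
addPair : Maybe (Σ Nat (λ _ → Nat)) → Maybe Nat
addPair mp = do
  (x , y) ← mp
  just (x + y)
```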
Example¶
Do-notation becomes quite powerful together with pattern matching on indexed data. As an example, let us write a correct-by-construction type checker for simply typed λ-calculus.
First we define the raw terms, using de Bruijn indices for variables and explicit type annotations on the lambda:
infixr 6 _=>_

data Type : Set where
  nat  : Type
  _=>_ : (A B : Type) → Type

data Raw : Set where
  var : (x : Nat) → Raw
  lit : (n : Nat) → Raw
  suc : Raw
  app : (s t : Raw) → Raw
  lam : (A : Type) (t : Raw) → Raw
Next up, welltyped terms:
Context = List Type

-- A proof of x ∈ xs is the index into xs where x is located.
infix 2 _∈_
data _∈_ {A : Set} (x : A) : List A → Set where
  zero : ∀ {xs} → x ∈ x ∷ xs
  suc  : ∀ {y xs} → x ∈ xs → x ∈ y ∷ xs

data Term (Γ : Context) : Type → Set where
  var : ∀ {A} (x : A ∈ Γ) → Term Γ A
  lit : (n : Nat) → Term Γ nat
  suc : Term Γ (nat => nat)
  app : ∀ {A B} (s : Term Γ (A => B)) (t : Term Γ A) → Term Γ B
  lam : ∀ A {B} (t : Term (A ∷ Γ) B) → Term Γ (A => B)
Given a well-typed term we can mechanically erase all the type information (except the annotation on the lambda) to get the corresponding raw term:
rawIndex : ∀ {A} {x : A} {xs} → x ∈ xs → Nat
rawIndex zero = zero
rawIndex (suc i) = suc (rawIndex i)
eraseTypes : ∀ {Γ A} → Term Γ A → Raw
eraseTypes (var x) = var (rawIndex x)
eraseTypes (lit n) = lit n
eraseTypes suc = suc
eraseTypes (app s t) = app (eraseTypes s) (eraseTypes t)
eraseTypes (lam A t) = lam A (eraseTypes t)
Now we’re ready to write the type checker. The goal is to have a function that takes a raw term and either fails with a type error, or returns a well-typed term that erases to the raw term it started with. First, let’s define the return type. It’s parameterised by a context and the raw term to be checked:
data WellTyped Γ e : Set where
  ok : (A : Type) (t : Term Γ A) → eraseTypes t ≡ e → WellTyped Γ e
We’re going to need a corresponding type for variables:
data InScope Γ n : Set where
  ok : (A : Type) (i : A ∈ Γ) → rawIndex i ≡ n → InScope Γ n
Let’s also have a type synonym for the case when the erasure proof is refl:
infix 2 _ofType_
pattern _ofType_ x A = ok A x refl
Since this is a do-notation example we had better have a monad. Let’s use the either monad with string errors:
TC : Set → Set
TC A = Either String A
typeError : ∀ {A} → String → TC A
typeError = left
For the monad operations, we are using instance arguments to infer which monad is being used.
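For illustration only, a direct (non-instance-based) bind for TC could be sketched as follows, assuming an Either type with constructors left and right (not shown in this section):

```agda
-- Sketch only: the manual's actual setup resolves _>>=_ via
-- instance arguments; a direct definition for TC would be:
_>>=_ : {A B : Set} → TC A → (A → TC B) → TC B
left err >>= f = left err
right x  >>= f = f x
```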
We are going to need to compare types for equality. This is our first opportunity to take advantage of pattern matching binds:
_=?=_ : (A B : Type) → TC (A ≡ B)
nat      =?= nat      = pure refl
nat      =?= (_ => _) = typeError "type mismatch: nat ≠ _ => _"
(_ => _) =?= nat      = typeError "type mismatch: _ => _ ≠ nat"
(A => B) =?= (A₁ => B₁) = do
  refl ← A =?= A₁
  refl ← B =?= B₁
  pure refl
We will also need to look up variables in the context:
lookupVar : ∀ Γ n → TC (InScope Γ n)
lookupVar []      n       = typeError "variable out of scope"
lookupVar (A ∷ Γ) zero    = pure (zero ofType A)
lookupVar (A ∷ Γ) (suc n) = do
  i ofType B ← lookupVar Γ n
  pure (suc i ofType B)
Note how the proof obligation that the well-typed de Bruijn index erases to the given raw index is taken care of completely under the hood (in this case by the refl pattern in the ofType synonym).
Finally we are ready to implement the actual type checker:
infer : ∀ Γ e → TC (WellTyped Γ e)
infer Γ (var x) = do
  i ofType A ← lookupVar Γ x
  pure (var i ofType A)
infer Γ (lit n) = pure (lit n ofType nat)
infer Γ suc     = pure (suc ofType nat => nat)
infer Γ (app e e₁) = do
  s ofType A => B ← infer Γ e
    where _ ofType nat → typeError "numbers cannot be applied to arguments"
  t ofType A₁ ← infer Γ e₁
  refl ← A =?= A₁
  pure (app s t ofType B)
infer Γ (lam A e) = do
  t ofType B ← infer (A ∷ Γ) e
  pure (lam A t ofType A => B)
In the app case we use a where-clause to handle the error case when the function to be applied is well-typed, but does not have a function type.
Idiom brackets¶
Idiom brackets are a notation used to make it more convenient to work with applicative functors, i.e. functors F equipped with two operations

pure  : ∀ {A} → A → F A
_<*>_ : ∀ {A B} → F (A → B) → F A → F B

As with do-notation, idiom brackets desugar before scope checking, so whatever the names pure and _<*>_ are bound to gets used when desugaring the idiom brackets.
The syntax for idiom brackets is

(| e a₁ .. aₙ |)

or using unicode lens brackets ⦇ (U+2987) and ⦈ (U+2988):

⦇ e a₁ .. aₙ ⦈
This expands to (assuming left associative _<*>_
)
pure e <*> a₁ <*> .. <*> aₙ
Idiom brackets work well with operators, for instance
(| if a then b else c |)
desugars to
pure if_then_else_ <*> a <*> b <*> c
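To make this concrete, here is a sketch using Maybe as the applicative functor (the definitions of pure, _<*>_, and addM below are ours, not from the manual):

```agda
open import Agda.Builtin.Maybe
open import Agda.Builtin.Nat

pure : {A : Set} → A → Maybe A
pure = just

infixl 4 _<*>_
_<*>_ : {A B : Set} → Maybe (A → B) → Maybe A → Maybe B
just f <*> just x = just (f x)
_      <*> _      = nothing

-- (| _+_ mx my |) desugars to pure _+_ <*> mx <*> my
addM : Maybe Nat → Maybe Nat → Maybe Nat
addM mx my = (| _+_ mx my |)
```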
Idiom brackets also support none or multiple applications. If the applicative functor has an additional binary operation

_<|>_ : ∀ {A B} → F A → F A → F A

then idiom brackets support multiple applications separated by a vertical bar |, i.e.

(| e₁ a₁ .. aₙ | e₂ a₁ .. aₘ | .. | eₖ a₁ .. aₗ |)

which expands to (assuming right associative _<|>_)

(pure e₁ <*> a₁ <*> .. <*> aₙ) <|> ((pure e₂ <*> a₁ <*> .. <*> aₘ) <|> (pure eₖ <*> a₁ <*> .. <*> aₗ))
Idiom brackets without any application (|) or ⦇⦈ expand to empty if

empty : ∀ {A} → F A

is in scope. An applicative functor with empty and _<|>_ is typically called Alternative.

Note that pure, _<*>_, and _<|>_ need not be in scope to use (|).
Limitations:
- Binding syntax and operator sections cannot appear immediately inside idiom brackets.
- The top-level application inside idiom brackets cannot include implicit applications, so (| foo {x = e} a b |) is illegal. In case the e is pure you can write (| (foo {x = e}) a b |), which desugars to pure (foo {x = e}) <*> a <*> b.
Syntax Declarations¶
Note
This is a stub
It is now possible to declare user-defined syntax that binds identifiers. Example:

record Σ (A : Set) (B : A → Set) : Set where
  constructor _,_
  field fst : A
        snd : B fst

syntax Σ A (λ x → B) = [ x ∈ A ] × B

witness : ∀ {A B} → [ x ∈ A ] × B → A
witness (x , _) = x
The syntax declaration for Σ
implies that x
is in scope in
B
, but not in A
.
You can give fixity declarations along with syntax declarations:
infix 5 Σ
syntax Σ A (λ x → B) = [ x ∈ A ] × B
The fixity applies to the syntax, not the name; syntax declarations are also restricted to ordinary, non-operator names. The following declaration is disallowed:
syntax _==_ x y = x === y
Syntax declarations must also be linear; the following declaration is disallowed:
syntax wrong x = x + x
Syntax declarations can have implicit arguments. For example:
id : ∀ {a} {A : Set a} → A → A
id x = x

syntax id {A} x = x ∈ A
Unlike mixfix operators that can be used unapplied using the name including all the underscores, or partially applied by replacing only some of the underscores by arguments, syntax must be fully applied.
Telescopes¶
Note
This is a stub.
Irrefutable Patterns in Binding Positions¶
Since Agda 2.6.1, irrefutable patterns can be used at every binding site in a telescope to take a bound value of record type apart. The type of the second projection out of a dependent pair will for instance naturally mention the value of the first projection. Its type can be defined directly using an irrefutable pattern as follows:
proj₂ : ((a , _) : Σ A B) → B a
And this second projection can be implemented with a lambda abstraction using one of these irrefutable patterns taking the pair apart:
proj₂ = λ (_ , b) → b
Using an as-pattern makes it possible to name the argument and to take it apart at the same time. We can for instance prove that any pair is equal to the pairing of its first and second projections, a property commonly called eta-equality:
eta : (p@(a , b) : Σ A B) → p ≡ (a , b)
eta p = refl
Termination Checking¶
Not all recursive functions are permitted: Agda accepts only those recursive schemas that it can mechanically prove terminating.
Primitive recursion¶
In the simplest case, a given argument must be exactly one constructor smaller in each recursive call. We call this scheme primitive recursion. A few correct examples:
plus : Nat → Nat → Nat
plus zero m = m
plus (suc n) m = suc (plus n m)
natEq : Nat → Nat → Bool
natEq zero zero = true
natEq zero (suc m) = false
natEq (suc n) zero = false
natEq (suc n) (suc m) = natEq n m
Both plus
and natEq
are defined by primitive recursion.
The recursive call in plus
is OK because n
is a subexpression
of suc n
(so n
is structurally smaller than suc n
). So
every time plus is recursively called the first argument is getting
smaller and smaller. Since a natural number can only have a finite
number of suc constructors we know that plus will always terminate.
natEq
terminates for the same reason, but in this case we can say
that both the first and second arguments of natEq are decreasing.
Structural recursion¶
Agda’s termination checker allows more definitions than just primitive recursive ones – it allows structural recursion.
This means that we require recursive calls to be on a (strict) subexpression of the argument (see fib below); this is more general than just taking away one constructor at a time.
fib : Nat → Nat
fib zero = zero
fib (suc zero) = suc zero
fib (suc (suc n)) = plus (fib n) (fib (suc n))
It also means that arguments may decrease in a lexicographic order; this can be thought of as nested primitive recursion (see ack below).
ack : Nat → Nat → Nat
ack zero m = suc m
ack (suc n) zero = ack n (suc zero)
ack (suc n) (suc m) = ack n (ack (suc n) m)
In ack
either the first argument decreases or it stays the same and the second one decreases.
This is the same as a lexicographic ordering.
With-functions¶
Pragmas and Options¶
The NON_TERMINATING pragma
This is a safer version of TERMINATING which doesn’t treat the affected functions as terminating. This means that NON_TERMINATING functions do not reduce during type checking. They do reduce at run-time and when invoking C-c C-n at top-level (but not in a hole). The pragma was added in Agda 2.4.2.

The TERMINATING pragma
Switches off the termination checker for individual function definitions and mutual blocks and marks them as terminating. Since Agda 2.4.2.1 it replaces the NO_TERMINATION_CHECK pragma. The pragma must precede a function definition or a mutual block. The pragma cannot be used in --safe mode.
Examples:

Skipping a single definition: before type signature:

{-# TERMINATING #-}
a : A
a = a

Skipping a single definition: before first clause:

b : A
{-# TERMINATING #-}
b = b

Skipping an old-style mutual block: Before mutual keyword:

{-# TERMINATING #-}
mutual
  c : A
  c = d

  d : A
  d = c

Skipping an old-style mutual block: Somewhere within mutual block before a type signature or first function clause:

mutual
  {-# TERMINATING #-}
  e : A
  e = f

  f : A
  f = e

Skipping a new-style mutual block: Anywhere before a type signature or first function clause in the block:

g : A
h : A

g = h
{-# TERMINATING #-}
h = g
Universe Levels¶
Agda’s type system includes an infinite hierarchy of universes Setᵢ :
Setᵢ₊₁
. This hierarchy enables quantification over arbitrary types
without running into the inconsistency that follows from Set :
Set
. These universes are further detailed on the page on
Agda’s sort system.
However, when working with this hierarchy it can quickly get tiresome
to repeat the same definition at different universe levels. For
example, we might be forced to define new datatypes data List (A :
Set) : Set
, data List₁ (A : Set₁) : Set₁
, etc. Also every
function on lists (such as append
) must be redefined, and every
theorem about such functions must be reproved, for every possible
level.
The solution to this problem is universe polymorphism. Agda provides a
special primitive type Level
, whose elements are possible levels
of universes. In fact, the notation for the n
th universe,
Setₙ
, is just an abbreviation for Set n
, where n : Level
is a level. We can use this to write a polymorphic List
operator
that works at any level. The library Agda.Primitive
must be
imported to access the Level
type. The definition then looks like
this:
open import Agda.Primitive
data List {n : Level} (A : Set n) : Set n where
  []   : List A
  _::_ : A → List A → List A
This new operator works at all levels; for example, we have
List Nat : Set
List Set : Set₁
List Set₁ : Set₂
Level arithmetic¶
Even though we don’t have the number of levels specified, we know that
there is a lowest level lzero
, and for each level n
, there
exists some higher level lsuc n
; therefore, the set of levels is
infinite. In addition, we can also take the least upper bound n
⊔ m
of two levels. In summary, the following (and only the
following) operations on levels are provided:
lzero : Level
lsuc : (n : Level) → Level
_⊔_ : (n m : Level) → Level
This is sufficient for most purposes; for example, we can define the cartesian product of two types of arbitrary (and not necessarily equal) levels like this:
data _×_ {n m : Level} (A : Set n) (B : Set m) : Set (n ⊔ m) where
  _,_ : A → B → A × B
With this definition, we have, for example:
Nat × Nat : Set
Nat × Set : Set₁
Set × Set : Set₁
Intrinsic level properties¶
Levels and their associated operations have some properties which are internally and automatically solved by the compiler. This means that we can replace some expressions with others, without worrying about the expressions for their corresponding levels matching exactly.
For example, we can write:
_ : {F : (l : Level) → Set l} {l1 l2 : Level} → F (l1 ⊔ l2) → F (l2 ⊔ l1)
_ = λ x → x
and Agda does the conversion from F (l1 ⊔ l2)
to F (l2 ⊔ l1)
automatically.
Here is a list of the level properties:
- Idempotence: a ⊔ a is the same as a.
- Associativity: (a ⊔ b) ⊔ c is the same as a ⊔ (b ⊔ c).
- Commutativity: a ⊔ b is the same as b ⊔ a.
- Distributivity of lsuc over ⊔: lsuc (a ⊔ b) is the same as lsuc a ⊔ lsuc b.
- Neutrality of lzero: a ⊔ lzero is the same as a.
- Subsumption: a ⊔ lsuc a is the same as lsuc a. Notably, this also holds for arbitrarily many lsuc usages: a ⊔ lsuc (lsuc a) is also the same as lsuc (lsuc a).
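Because these identifications are definitional, they can be verified with refl (a sketch; the names ⊔-comm and ⊔-subsume are ours):

```agda
open import Agda.Primitive
open import Agda.Builtin.Equality

-- commutativity holds definitionally, so refl type-checks
⊔-comm : (a b : Level) → (a ⊔ b) ≡ (b ⊔ a)
⊔-comm a b = refl

-- so does subsumption
⊔-subsume : (a : Level) → (a ⊔ lsuc a) ≡ lsuc a
⊔-subsume a = refl
```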
forall
notation¶
From the fact that we write Set n
, it can always be inferred that
n
is a level. Therefore, when defining universepolymorphic
functions, it is common to use the ∀ (or forall) notation. For
example, the type of the universepolymorphic map
operator on
lists can be written
map : ∀ {n m} {A : Set n} {B : Set m} → (A → B) → List A → List B
which is equivalent to
map : {n m : Level} {A : Set n} {B : Set m} → (A → B) → List A → List B
Expressions of sort Setω
¶
In a sense, universes were introduced to ensure that every Agda
expression has a type, including expressions such as Set
,
Set₁
, etc. However, the introduction of universe polymorphism
inevitably breaks this property again, by creating some new terms that
have no type. Consider the polymorphic singleton set Unit n :
Setₙ
, defined by
data Unit (n : Level) : Set n where
  <> : Unit n
It is welltyped, and has type
Unit : (n : Level) → Set n
However, the type (n : Level) → Set n
, which is a valid Agda
expression, does not belong to any universe in the Set
hierarchy.
Indeed, the expression denotes a function mapping levels to sorts, so
if it had a type, it should be something like Level → Sort
, where
Sort
is the collection of all sorts. But if Agda were to support a
sort Sort
of all sorts, it would be a sort itself, so in
particular we would have Sort : Sort
. Just like Type : Type
,
this would lead to circularity and inconsistency.
Instead, Agda introduces a new sort Setω
that stands above all
sorts Set ℓ
, but is not itself part of the hierarchy. For example,
Agda assigns the expression (n : Level) → Set n
to be of type
Setω
.
Setω
is itself the first step in another infinite hierarchy
Setωᵢ : Setωᵢ₊₁
. However, this hierarchy does not support universe
polymorphism, i.e. there are no sorts Setω ℓ
for ℓ : Level
.
Allowing this would require a new universe Set2ω
, which would then
naturally lead to Set2ω₁
, and so on. Disallowing universe
polymorphism for Setωᵢ
avoids the need for such even larger
sorts. This is an intentional design decision.
Pragmas and options¶
- The option --type-in-type disables the checking of universe level consistency for the whole file.
- The option --omega-in-omega enables the typing rule Setω : Setω (thus making Agda inconsistent) but otherwise leaves universe checks intact.
- The pragma {-# NO_UNIVERSE_CHECK #-} can be put in front of a data or record type to disable universe consistency checking locally. Example:

{-# NO_UNIVERSE_CHECK #-}
data U : Set where
  el : Set → U

This pragma applies only to the check that the universe level of the type of each constructor argument is less than or equal to the universe level of the datatype, not to any other checks.
New in version 2.6.0.
The options --type-in-type and --omega-in-omega and the pragma {-# NO_UNIVERSE_CHECK #-} cannot be used with --safe.
With-Abstraction¶
- Usage
- Generalisation
- Nested with-abstractions
- Simultaneous abstraction
- Making with-abstractions hidden and/or irrelevant
- Using underscores and variables in pattern repetition
- Irrefutable With
- Rewrite
- With-abstraction equality
- Alternatives to with-abstraction
- Termination checking
- Performance considerations
- Technical details
With-abstraction was first introduced by Conor McBride [McBride2004] and lets you pattern match on the result of an intermediate computation by effectively adding an extra argument to the left-hand side of your function.
Usage¶
In the simplest case the with
construct can be used just to discriminate on
the result of an intermediate computation. For instance
filter : {A : Set} → (A → Bool) → List A → List A
filter p [] = []
filter p (x ∷ xs) with p x
filter p (x ∷ xs) | true  = x ∷ filter p xs
filter p (x ∷ xs) | false = filter p xs
The clause containing the with-abstraction has no right-hand side. Instead it is followed by a number of clauses with an extra argument on the left, separated from the original arguments by a vertical bar (|).
When the original arguments are the same in the new clauses you can use the
...
syntax:
filter : {A : Set} → (A → Bool) → List A → List A
filter p [] = []
filter p (x ∷ xs) with p x
... | true  = x ∷ filter p xs
... | false = filter p xs
In this case ... expands to filter p (x ∷ xs). There are three cases where you have to spell out the left-hand side:
- If you want to do further pattern matching on the original arguments.
- When the pattern matching on the intermediate result refines some of the other arguments (see Dot patterns).
- To disambiguate the clauses of nested with-abstractions (see Nested with-abstractions below).
Generalisation¶
The power of withabstraction comes from the fact that the goal type
and the type of the original arguments are generalised over the value
of the scrutinee. See Technical details below for the details.
This generalisation is important when you have to prove properties
about functions defined using with
. For instance, suppose we want
to prove that the filter
function above satisfies some property
P
. Starting out by pattern matching on the list we get the
following (with the goal types shown in the holes)
postulate P : ∀ {A} → List A → Set
postulate pnil : ∀ {A} → P {A} []
postulate Q : Set
postulate qnil : Q
proof : {A : Set} (p : A → Bool) (xs : List A) → P (filter p xs)
proof p [] = {! P [] !}
proof p (x ∷ xs) = {! P (filter p (x ∷ xs) | p x) !}
In the cons case we have to prove that P
holds for filter p (x ∷ xs) | p x.
This is the syntax for a stuck with-abstraction: filter cannot reduce
cannot reduce
since we don’t know the value of p x
. This syntax is used for printing, but
is not accepted as valid Agda code. Now if we with-abstract over p x, but
, but
don’t pattern match on the result we get:
proof : {A : Set} (p : A → Bool) (xs : List A) → P (filter p xs)
proof p [] = pnil
proof p (x ∷ xs) with p x
... | r = {! P (filter p (x ∷ xs) | r) !}
Here the p x
in the goal type has been replaced by the variable r
introduced for the result of p x
. If we pattern match on r
the
with-clauses can reduce, giving us:
proof : {A : Set} (p : A → Bool) (xs : List A) → P (filter p xs)
proof p [] = pnil
proof p (x ∷ xs) with p x
... | true  = {! P (x ∷ filter p xs) !}
... | false = {! P (filter p xs) !}
Both the goal type and the types of the other arguments are generalised, so it
works just as well if we have an argument whose type contains filter p xs
.
proof₂ : {A : Set} (p : A → Bool) (xs : List A) → P (filter p xs) → Q
proof₂ p [] _ = qnil
proof₂ p (x ∷ xs) H with p x
... | true  = {! H : P (x ∷ filter p xs) !}
... | false = {! H : P (filter p xs) !}
The generalisation is not limited to scrutinees in other with-abstractions. All occurrences of the term in the goal type and argument types will be generalised.
Note that this generalisation is not always type correct and may result in a (sometimes cryptic) type error. See Ill-typed with-abstractions below for more details.
Nested withabstractions¶
With-abstractions can be nested arbitrarily. The only thing to keep in mind in this case is that the ... syntax applies to the closest with-abstraction.
For example, suppose you want to use ...
in the definition below.
compare : Nat → Nat → Comparison
compare x y with x < y
compare x y | false with y < x
compare x y | false | false = equal
compare x y | false | true  = greater
compare x y | true  = less
You might be tempted to replace compare x y with ... in all the with-clauses as follows.
compare : Nat → Nat → Comparison
compare x y with x < y
... | false with y < x
... | false = equal
... | true  = greater
... | true  = less  -- WRONG
This, however, would be wrong. In the last clause the ... is interpreted as belonging to the inner with-abstraction (the whitespace is not taken into account) and thus expands to compare x y | false | true. In this case you have to spell out the left-hand side and write
compare : Nat → Nat → Comparison
compare x y with x < y
... | false with y < x
... | false = equal
... | true  = greater
compare x y | true = less
Simultaneous abstraction¶
You can abstract over multiple terms in a single with-abstraction. To do this you separate the terms with vertical bars (|).
compare : Nat → Nat → Comparison
compare x y with x < y | y < x
... | true  | _     = less
... | _     | true  = greater
... | false | false = equal
In this example the order of abstracted terms does not matter, but in general it does. Specifically, the types of later terms are generalised over the values of earlier terms. For instance
postulate pluscommute : (a b : Nat) → a + b ≡ b + a
postulate P : Nat → Set
thm : (a b : Nat) → P (a + b) → P (b + a)
thm a b t with a + b | pluscommute a b
thm a b t | ab | eq = {! t : P ab, eq : ab ≡ b + a !}
Note that both the type of t
and the type of the result eq
of
pluscommute a b
have been generalised over a + b
. If the terms in the
with-abstraction were flipped around, this would not be the case. If we now
pattern match on eq
we get
thm : (a b : Nat) → P (a + b) → P (b + a)
thm a b t with a + b | pluscommute a b
thm a b t | .(b + a) | refl = {! t : P (b + a) !}
and can thus fill the hole with t
. In effect we used the
commutativity proof to rewrite a + b
to b + a
in the type of
t
. This is such a useful thing to do that there is special syntax
for it. See Rewrite below.
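As a preview of that syntax, the same theorem can be proved with rewrite (a sketch reusing the postulates from this example; the name thm′ is ours):

```agda
-- rewrite generalises the goal and the type of t over a + b,
-- then instantiates the abstracted term to b + a
thm′ : (a b : Nat) → P (a + b) → P (b + a)
thm′ a b t rewrite pluscommute a b = t
```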
A limitation of generalisation is that only occurrences of the term that are
visible at the time of the abstraction are generalised over, but more instances
of the term may appear once you start filling in the righthand side or do
further matching on the left. For instance, consider the following contrived
example where we need to match on the value of f n
for the type of q
to
reduce, but we then want to apply q
to a lemma that talks about f n
:
postulate
  R : Set
  P : Nat → Set
  f : Nat → Nat
  lemma : ∀ n → P (f n) → R

Q : Nat → Set
Q zero    = ⊥
Q (suc n) = P (suc n)

proof : (n : Nat) → Q (f n)