1 Introduction
Almost since the beginning of computational complexity theory, we have had results about oracles and their effect on the running times of computations. For example, Baker et al. [3] showed that on the one hand there are oracles A such that
$\mathsf {P}^A=\mathsf {NP}^A$
and on the other hand there are oracles B such that
$\mathsf {P}^B \not = \mathsf {NP}^B$
, thus demonstrating that methods that relativize will not suffice to solve basic questions like
$\mathsf {P}$
vs.
$\mathsf {NP}$
. An underlying question is whether oracle results can say things about complexity questions in the unrelativized world. The answer seems to be yes. For example, Allender together with Buhrman and Koucký [1] and with Friedman and Gasarch [2] showed that oracle access to sets of random strings can give insight into basic complexity questions. In [2], Allender et al. showed that
$\bigcap _U \mathsf {P}^{R_{K_U}}\cap \mathsf {COMP}\subseteq \mathsf {PSPACE}$
where
$R_{K_U}$
denotes the strings whose prefix-free Kolmogorov complexity (relative to universal machine U) is at least their length, and
$\mathsf {COMP}$
denotes the collection of computable sets. Later the “
$\cap \mathsf {COMP}$
” was removed by Cai et al. [6]. Thus we conclude that reductions to very complex sets like the random strings somehow give insight into very simple things like computable sets.
One of the classical notions in computability theory is that of lowness. An oracle is low for a specific type of problem if that oracle does not help to solve that problem. A language A is low if the halting problem relative to A has the same Turing degree (and hence the same computational content) as the halting problem. Slaman and Solovay [7] characterized the languages L which, as oracles, are of no help in Gold-style learning theory:
$EX^L=EX$
if and only if L is low and 1-generic. Inspired by this and other lowness results in classical computability, Allender asked whether there were non-trivial sets which were “low for speed” in that, as oracles, they did not accelerate running times of computations by more than a polynomial amount. Of course, as stated this makes little sense since using any X as oracle, we can decide membership in X in linear time, while without an oracle X may not even be computable at all! Thus, what we are really interested in is the set of oracles which do not speed up the computation of computable sets by more than a polynomial amount. More precisely, an oracle X is low for speed if for any computable language L, if some Turing machine M with access to oracle X decides L in time f, then there is a Turing machine
$M'$
without any oracle and a polynomial p such that
$M'$
decides
L
in time
$p \circ f$
. (Here the computation time of an oracle computation is counted in the usual complexity-theoretic fashion: we have a query tape on which we can write strings, and once a string x is written on this tape, we get to ask the oracle whether x belongs to it in time
$O(1)$
.)
There are trivial examples of such sets, namely oracles that belong to
$\mathsf {P}$
, because any query to such an oracle can be replaced by a polynomial-time computation. Allender’s precise question was therefore:
Question (Allender). Is there a non-computable set which is low for speed?
Such an X, if it exists, has to be non-computable, for the same reason as above: if X is computable and low for speed, then X is decidable in linear time using oracle X, thus—by lowness—decidable in polynomial time without oracle, i.e.,
$X \in \mathsf {P}$
.
A partial answer was given by Lance Fortnow (unpublished), who observed the following.
Theorem 1.1 (Fortnow).
If X is a hypersimple and computably enumerable oracle, then X is low for polynomial time, in that if
$L \in \mathsf {P}^X$
is computable, then
$L \in \mathsf {P}$
.
Allender’s question was finally solved by Bayer and Slaman, who showed the following.
Theorem 1.2 (Bayer–Slaman [4]).
There are non-computable, computably enumerable sets X which are low for speed.
Bayer showed that whether 1-generic sets were low for speed depended on whether
$\mathsf {P} = \mathsf {NP}$
. In [5], Bienvenu and Downey began an analysis of precisely what kind of sets/languages could be low for speed. They showed, for instance, that randomness always accelerates some computation, in that no Schnorr random set is low for speed. They also constructed a perfect
$\Pi _1^0$
class all of whose members were low for speed. Among other results, they demonstrated that being low for speed does not seem to align very well with having low complexity, in that no set of low computably enumerable Turing degree could also be low for speed.
From one point of view the sets with barely non-computable information are those of minimal Turing degree. Here we recall that
$\mathbf {a}$
is a minimal Turing degree if it is nonzero and there is no degree
$\mathbf {b} $
with
$\mathbf {0}<\mathbf {b}<\mathbf {a}$
. It is quite easy to construct a set of minimal Turing degree which is not low for speed, and indeed any natural minimality construction seems to give this. That is because natural Spector-forcing style dynamics seem to entail certain delays in any construction, even a full approximation one, which cause problems with the polynomial time simulation of the oracle computations being emulated. In view of this, Bienvenu and Downey asked the following question:
Question 1.3. Can a set A of minimal Turing degree be low for speed?
In the present paper we answer this question affirmatively:
Theorem 1.4. There is a set A which is both of minimal Turing degree and low for speed.
The proof is a complicated construction in which the interaction between different requirements is quite involved. The main tension is between the minimality requirements, which might want to wait a long time looking for an e-split while building an e-splitting tree, and the lowness requirements, which need to make decisions very quickly in order to have only a polynomial-time delay. The basic strategies for meeting the requirements are not too difficult, but there are some very intricate complications that arise when trying to resolve this, some of which only show up when four requirements interact.
The proof is similar to a forcing construction, in the sense that we meet the minimality requirements one-by-one by placing the set A on more and more refined computable trees. Like a forcing construction, these trees are not uniformly computable; the choice of trees depends on the outcomes of the minimality requirements. (One could probably explicitly define a forcing poset, but the trees are so complicated that this would just add more complexity to the construction.) The reader might wonder whether the proof could be simplified by instead using a full approximation construction. We think that such a construction would not be simpler; a full approximation construction would need the same devices used in our proof (as they deal with complexity inherent in the combinatorics of the problem), but would also have extra complexity purely for keeping track of the approximations.
2 The requirements and basic strategies
We will construct a set A meeting the following requirements:
-
$\mathcal {P}_e$ : $A \neq W_e$ .
-
$\mathcal {M}_e$ : if $\Phi _e^A$ is total, then $\Phi _e^A$ is either computable or $\Phi _e^A \geq _T A$ .
-
$\mathcal {L}_{\langle e,i \rangle }$ : if $\Psi _e^A = R_{i}$ is total and computed in time f, then $R_{i}$ is computable in time $p \circ f$ for some polynomial p.
Here,
$R_{i}$
is a partial computable function. The requirements
$\mathcal {P}_e$
make A non-computable, while the requirements
$\mathcal {M}_e$
make A of minimal degree. The requirements
$\mathcal {L}_{\langle e,i \rangle }$
make A low for speed.
2.1 Meeting one
$\mathcal {P}$
requirement
To meet the requirement
$\mathcal {P}_e$
, we just need to choose an initial segment of A which ensures that
$A \neq W_e$
. There is no dynamic action to take.
2.2 Meeting one
$\mathcal {M}$
requirement
When working with Spector-style forcing, it is common to define a tree to be a map
$T \colon 2^{< \omega } \to 2^{< \omega }$
such that
$\sigma \preceq \tau $
implies
$T(\sigma ) \preceq T(\tau )$
. We will need to allow a node to have more than two children; so for the purposes of this proof a tree will be a computable subset of
$2^{< \omega }$
so that each node
$\sigma $
on the tree has finitely many children
$\tau \succeq \sigma $
. The children of
$\sigma $
may be of any length, where by length we mean the length as a binary string. Our trees will be perfect and have no dead ends, and in fact every node will have at least two children. Moreover, the trees will be what one might call strongly computable: for each node on the tree, one can compute a complete finite list of all of its children. As usual,
$[T]$
denotes the collection of paths through T. For a Turing functional
$\Phi _e$
, we will say that T is e-splitting if for any two distinct paths
$\pi _1$
and
$\pi _2$
through T, there is x with
$\Phi _e^{\pi _1}(x) \downarrow \neq \Phi _e^{\pi _2}(x) \downarrow .$
If
$\tau _1$
and
$\tau _2$
are initial segments of
$\pi _1$
and
$\pi _2$
respectively witnessing this, i.e., with
$\Phi _e^{\tau _1}(x) \downarrow \neq \Phi _e^{\tau _2}(x) \downarrow ,$
and with a common predecessor
$\sigma $
, we say that they e-split over
$\sigma $
, or that they are an e-split over
$\sigma $
. Note that this definition is slightly weaker than the usual definition of e-splitting (which requires all children of a node to e-split with each other), but because the trees are strongly computable it is still the case that for an e-splitting tree T and
$B \in [T]$
, if
$\Phi _e^B$
is total then
$\Phi _e^B \geq _T B$
.
The requirements
$\mathcal {M}_e$
will be satisfied by an interesting new mix of forcing and full approximation. Following the standard Spector argument, to satisfy
$\mathcal {M}_e$
we attempt to make A a path on a computable tree T with either:
-
(1) T is e-splitting, and so for any path
$B \in [T]$ , if
$\Phi _e^B$ is total then
$\Phi _e^B \geq _T B$ ; or
-
(2) for all paths
$B_1,B_2 \in [T]$ and all x, if
$\Phi _e^{B_1}(x) \downarrow $ and
$\Phi _e^{B_2}(x) \downarrow $ then
$\Phi _e^{B_1}(x) = \Phi _e^{B_2}(x)$ , and so
$\Phi _e^B$ is either partial or computable for any
$B \in [T]$ .
Given such a tree, any path on T satisfies
$\mathcal {M}_e$
.
The standard argument for building a minimal degree is a forcing argument. Suppose that we want to meet just the
$\mathcal {M}$
and
$\mathcal {P}$
requirements. We can begin with a perfect tree
$T_{-1}$
. Then there is a computable tree
$T_0 \subseteq T_{-1}$
which is either
$0$
-splitting or forces
$\Phi _0^A$
to be either computable or partial. We can then choose
$A_0 \in T_0$
such that
$A_0$
is not an initial segment of
$W_e$
. Then there is a computable tree
$T_1 \subseteq T_0$
with root
$A_0$
which is either
$1$
-splitting or forces
$\Phi _1^A$
to be either computable or partial. We pick
$A_1 \in T_1$
so that
$A_1$
is not an initial segment of
$W_1$
, then
$T_2 \subseteq T_1$
with root
$A_1$
, and so on. Then
$A = \bigcup A_i$
is non-computable and is a path through each
$T_i$
, and so is a minimal degree. Though each
$T_i$
is computable, they are not uniformly computable; given
$T_i$
, to compute
$T_{i+1}$
we must know whether
$T_{i+1}$
is to be
$(i+1)$
-splitting, to force partiality, or to force computability.
Our construction will be much more complicated, as while building the splitting trees we will need to take into account the lowness requirements. Nevertheless, the basic idea is the same: either we can find “enough” e-splits, and put A on an e-splitting tree, or above some node we can find a subtree with no e-splits which forces that $\Phi _e^A$ is either partial or computable. Sometimes, in addition to options (1) and (2) above, we will choose to put A on a tree that forces
$\Phi _e^A$
to be partial:
-
(3) there is an x such that for all paths
$B \in [T]$ ,
$\Phi _e^B(x) \uparrow $ .
2.3 Meeting one
$\mathcal {L}$
requirement
We use something similar to the Bayer–Slaman strategy from [4] to meet the lowness requirements. The entire construction will take place on a tree
$T_{-1}$
with the property that it is polynomial in
$|\sigma |$
to determine whether
$\sigma \in T_{-1}$
, and that moreover, for each n, there are polynomially many in n strings of length n on
$T_{-1}$
. For example, let
$\sigma \in T_{-1}$
if it is of the form
$a_1 \, a_2 a_2 \, a_3 a_3 a_3 a_3 \, \cdots \, \underbrace {a_n \cdots a_n}_{2^{n-1} \text { times}},$
where each
$a_i \in \{0,1\}$
. So, for example,
$100111100000000 \in T_{-1}$
. In this tree, there are
$2^n$
nodes of height
$2^n$
on the nth level of the tree.
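To illustrate, here is a sketch of a polynomial-time membership test for a tree of this shape, under the assumption that a node consists of choice bits $a_1, a_2, \ldots$ with each $a_i$ repeated $2^{i-1}$ times (consistent with the example $100111100000000 = 1\,00\,1111\,00000000$); the helper names `in_tree` and `level_node` are ours, not part of the construction.

```python
def in_tree(sigma: str) -> bool:
    """Membership test for a tree of this shape: sigma must consist of
    constant blocks of doubling lengths 1, 2, 4, 8, ... (the final block
    may be cut off).  Runs in time polynomial in |sigma|."""
    start, length = 0, 1
    while start < len(sigma):
        block = sigma[start:start + length]   # possibly a partial last block
        if any(b != block[0] for b in block):
            return False
        start += length
        length *= 2
    return True

def level_node(bits) -> str:
    """The level-n node determined by choice bits a_1, ..., a_n:
    bit a_i is repeated 2^(i-1) times."""
    return "".join(b * (2 ** i) for i, b in enumerate(bits))
```

Since a node of length roughly $2^n$ is determined by its n choice bits, there are only polynomially many (in its length) nodes of each length, which is the property the construction needs.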
First we will show how to meet
$\mathcal {L}_{\langle e,i \rangle }$
in the absence of any other requirements. For simplicity drop the subscripts e and i so that we write
$\Psi = \Psi _{e}$
and
$R = R_{i}$
. The idea is to construct a computable simulation
$\Xi $
of
$\Psi ^A$
, with
$\Xi (x)$
computable in time polynomial in the running time of
$\Psi ^A(x)$
, so that if
$\Psi ^A = R$
then
$\Xi = \Psi ^A$
. We compute
$\Xi (x)$
as follows. We computably search over
$\sigma \in T_{-1}$
(i.e., over potential initial segments
$\sigma $
of A) and simulate the computations
$\Psi ^{\sigma }(x)$
. When we find
$\sigma $
with
$\Psi ^{\sigma }(x) \downarrow $
, we set
$\Xi (x) = \Psi ^{\sigma }(x)$
for the first such
$\sigma $
. Of course,
$\sigma $
might not be an initial segment of A, and so
$\Xi $
might not be equal to
$\Psi ^A$
; this only matters if
$\Psi ^A = R$
is total, as otherwise
$\mathcal {L}$
is satisfied vacuously. Consider two cases:
-
(1) If x is such that
$\Xi (x) \downarrow \neq R(x) \downarrow $ , then there is some
$\sigma \in T_{-1}$ witnessing that
$\Psi ^{\sigma }(x) = \Xi (x)$ . In this case the requirement
$\mathcal {L}$ asks that A extend
$\sigma $ , so that
$\Psi ^A(x) \neq R(x)$ and
$\mathcal {L}$ is satisfied. This is the finitary outcome.
-
(2) If there is no x such that
$\Xi (x) \downarrow \neq R(x) \downarrow $ , then if
$\Xi $ and R are both total, then
$\Xi = R$ . So either
$\mathcal {L}$ is vacuously satisfied, or
$\Xi = \Psi ^A = R$ . This is the infinitary outcome.
In the second case we need to make sure that
$\Xi $
is only polynomially slower than
$\Psi ^A$
. We can do this by appropriately arranging the simulations so that if
$\Psi ^{\sigma }(x) \downarrow $
in time
$t(x)$
, the simulation
$\Xi $
will test this computation in a time which is only polynomially slower than
$t(x)$
, and will define
$\Xi (x) = \Psi ^{\sigma }(x)$
if it is not already defined, so that
$\Xi (x) \downarrow $
in time which is only polynomially slower than
$t(x)$
. It is important here that
$T_{-1}$
has only polynomially many nodes at height n and that we can test membership in
$T_{-1}$
in polynomial time. (If
$T_{-1}$
was, for example, the full binary tree, then there would be
$2^n$
different finite oracles
$\sigma $
of length n for which we must simulate computations
$\Psi ^{\sigma }(x)$
; it might be that one of these computations converges in time
$\leq n$
, but that it takes time exponential in n before we get to simulating this computation and defining
$\Xi (x)$
.)
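As a rough illustration of this greedy dovetailing (not the formal simulation procedure, which is defined in Section 8.3), one can picture the construction of $\Xi$ as follows; here `psi(sigma, x, steps)` is a hypothetical stand-in that returns the value of $\Psi^\sigma(x)$ if it halts within the given number of steps and `None` otherwise, and `tree_level(s)` lists the polynomially many nodes of length s on $T_{-1}$.

```python
def build_xi(tree_level, psi, stages):
    """Greedy construction of the simulation Xi.  At stage s we simulate,
    for each node sigma of length s and each x < |sigma| (Convention 2.1),
    the computation Psi^sigma(x) for |sigma| steps; the first convergence
    found defines Xi(x), and later convergences for that x are ignored."""
    xi = {}
    for s in range(1, stages + 1):
        for sigma in tree_level(s):
            for x in range(len(sigma)):
                if x in xi:
                    continue          # greedy: keep the first value found
                v = psi(sigma, x, len(sigma))
                if v is not None:
                    xi[x] = v
    return xi
```

Because each stage runs only polynomially many simulations, each for polynomially many steps, a computation $\Psi^\sigma(x)$ converging in time t leads to $\Xi(x)$ being defined within time polynomial in t.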
Think of the simulations
$\Xi $
as being greedy and taking any computation that they find; then, at the end, we can non-uniformly choose the initial segment of A either to get a diagonalization
$\Psi ^A \neq R$
or to force that the simulation
$\Xi $
is actually correct.
In the rest of the paper, we will sometimes say that
$\mathcal {L}$
simulates the computation
$\Psi ^{\sigma }(x)$
, that
$\mathcal {L}$
simulates
$\sigma $
, or that
$\mathcal {L}$
simulates computations above
$\sigma $
. These are informal notions which will be helpful to explain the construction; the simulation process is defined more formally in Section 8.3. When we say that
$\mathcal {L}$
simulates the computation
$\Psi ^{\sigma }(x)$
, what we mean is that
$\mathcal {L}$
computes
$\Psi ^{\sigma }(x)$
and, if it converges and
$\Xi (x)$
is so far undefined, sets
$\Xi (x) = \Psi ^{\sigma }(x)$
. When we say that
$\mathcal {L}$
simulates
$\sigma $
, what we mean is that
$\mathcal {L}$
simulates the computations
$\Psi ^{\sigma }(x)$
for
$x \leq |\sigma |$
, setting
$\Xi (x) = \Psi ^{\sigma }(x)$
if possible. Finally, when we say that
$\mathcal {L}$
simulates computations above
$\sigma $
, we mean that
$\mathcal {L}$
simulates computations
$\Psi ^\tau (x)$
for various
$\tau $
extending
$\sigma $
, and sets
$\Xi (x) = \Psi ^\tau (x)$
whenever it finds such a computation for which
$\Xi (x)$
has not yet been defined. We will also sometimes say that
$\mathcal {L}$
stops simulating computations above
$\sigma $
.
Convention 2.1. When computing
$\Phi ^{\sigma }(x)$
, the computation may run for only
$|\sigma |$
many steps and we require that
$x < |\sigma |$
.
2.4 Meeting one
$\mathcal {M}$
requirement and one
$\mathcal {L}$
requirement
The interactions between the requirements get more complicated. Consider now two requirements
$\mathcal {M} = \mathcal {M}_e$
and
$\mathcal {L}$
, with
$\mathcal {M}$
of higher priority than
$\mathcal {L}$
. Write
$\Phi = \Phi _e$
.
$\mathcal {L}$
will work with
$\Psi $
, R, and
$\Xi $
.
Assume that for each
$\sigma \in T_{-1}$
, there are x and
$\tau _1,\tau _2 \succeq \sigma $
such that
$\Phi ^{\tau _1}(x) \downarrow \neq \Phi ^{\tau _2}(x) \downarrow $
; and that for each
$\sigma \in T_{-1}$
and x there is
$\tau \succeq \sigma $
such that
$\Phi ^{\tau }(x) \downarrow $
. Otherwise, we could find a subtree of
$T_{-1}$
which forces that
$\Phi ^A$
is either partial or computable, and satisfy
$\mathcal {M}$
by restricting to that subtree. This assumption implies that we can also find any finite number of extensions of various nodes that pairwise e-split, e.g., given
$\sigma _1$
and
$\sigma _2$
, there are extensions of
$\sigma _1$
and
$\sigma _2$
that e-split. Indeed, find extensions
$\tau _1,\tau _2$
of
$\sigma _1$
that e-split, say
$\Phi ^{\tau _1}(x) \downarrow \neq \Phi ^{\tau _2}(x) \downarrow $
, and an extension
$\rho $
of
$\sigma _2$
with
$\Phi ^{\rho }(x) \downarrow $
. Then
$\rho \ e$
-splits with one of
$\tau _1$
or
$\tau _2$
.
The requirement
$\mathcal {L}$
non-uniformly guesses at whether or not
$\mathcal {M}$
will succeed at building an e-splitting tree. Suppose that it guesses that
$\mathcal {M}$
successfully builds such a tree.
$\mathcal {M}$
begins with the special tree
$T_{-1}$
described above, and it must build an e-splitting tree
$T \subseteq T_{-1}$
. (Of course in general
$\mathcal {M}$
must build an e-splitting subtree of the tree produced by some other minimality requirement, which will introduce additional complexity; we will deal with this later.)
While
$\mathcal {M}$
is building the tree,
$\mathcal {L}$
will be simulating various computations
$\Psi ^{\sigma }$
. The tree T might be built very slowly, while
$\mathcal {L}$
has to simulate computations relatively quickly in order to define
$\Xi $
with only a polynomial delay. So when a node is removed from T,
$\mathcal {L}$
will stop simulating computations above it; but
$\mathcal {L}$
will have to simulate computations
$\Psi ^{\sigma }$
for nodes
$\sigma $
which are extensions of nodes in T as it has been defined so far, but which have not yet been determined to be in or not in T. This leads to the following problem: Suppose that
$\gamma $
is a leaf of T at stage s,
$\rho $
extends
$\gamma $
, and
$\mathcal {L}$
simulates
$\Psi ^\rho (x)$
and sees that it converges, and so defines
$\Xi (x) = \Psi ^\rho (x)$
. But then the requirement
$\mathcal {M}$
finds an e-split
$\gamma _1,\gamma _2 \succeq \gamma $
and wants to set
$\gamma _1$
and
$\gamma _2$
to be the successors of
$\gamma $
on T, with both
$\gamma _1$
and
$\gamma _2$
incompatible with
$\rho $
. If we allow
$\mathcal {M}$
to do this, then since
$\mathcal {M}$
has higher priority than
$\mathcal {L}$
,
$\mathcal {M}$
has determined that A cannot extend
$\rho $
as
$\mathcal {M}$
restricts A to be a path through T. So
$\mathcal {L}$
has lost its ability to diagonalize and it might be that
$\Psi ^A = R$
(say, because this happens on all paths through T) but
$\Psi ^A(x) \neq \Psi ^\rho (x) = \Xi (x)$
. This means that
$\mathcal {M}$
needs to take some action to keep computations that
$\mathcal {L}$
has found on the tree. We begin by describing the most basic strategy for keeping a single node
$\rho $
on the tree.
Suppose that at stage s the requirement
$\mathcal {M}$
wants to add children to a leaf node
$\gamma $
on T, and that
$\mathcal {L}$
is simulating computations above
$\gamma $
(i.e.,
$\gamma $
is an observation node for
$\mathcal {L}$
, to be defined). The tree is supposed to be an e-splitting tree, so we need to look for e-splits above
$\gamma $
. While looking for e-splits, we also need to simulate computations by
$\Psi $
. It might be that, perhaps before finding any e-splits at all, we simulate a computation
$\Psi _t^\rho (y) \downarrow $
and set
$\Xi (y)$
to be equal to this simulated computation. For the sake of
$\mathcal {L}$
we must keep
$\rho $
on the tree. (Soon we will extend the strategy to worry about what happens if we have found multiple such computations, but for now assume that there is just one.) Eventually we will find some e-splits, but this might be much later; in particular, we look for
$\gamma _1,\gamma _2,\gamma _3$
extending
$\gamma $
such that they pairwise e-split: for any two
$\gamma _i,\gamma _j$
, there is
$x_{i,j}$
such that
$\Phi _e^{\gamma _i}(x_{i,j}) \downarrow \neq \Phi _e^{\gamma _j}(x_{i,j}) \downarrow $
.
To begin, we stop simulating any computations extending
$\rho $
. This means that we are now free to extend the tree however we like above
$\rho $
without worrying about how this affects the
$\mathcal {L}$
-simulations. We also stop simulating any other computations not compatible with
$\gamma _1$
,
$\gamma _2$
, or
$\gamma _3$
. We keep simulating above
$\gamma _1$
,
$\gamma _2$
, and
$\gamma _3$
with the expectation that, in the infinitary outcome where
$\Xi = R$
, A will extend one of these.

Now look for an extension
$\rho ^*$
of
$\rho $
that e-splits with at least two of
$\gamma _1$
,
$\gamma _2$
, and
$\gamma _3$
. We can find such a
$\rho ^*$
by looking for one with
$\Phi ^{\rho ^*}$
defined on the values
$x_{i,j}$
witnessing the e-splitting of
$\gamma _1$
,
$\gamma _2$
, and
$\gamma _3$
, e.g., if
$\Phi ^{\gamma _i}(x_{i,j}) \neq \Phi ^{\gamma _j}(x_{i,j})$
, and
$\Phi ^{\rho ^*}(x_{i,j}) \downarrow $
, then
$\rho ^*$
must e-split with either
$\gamma _i$
or
$\gamma _j$
. Say that
$\rho ^* \ e$
-splits with
$\gamma _1$
and
$\gamma _2$
. Then we define the children of
$\gamma $
to be
$\gamma _1$
,
$\gamma _2$
, and
$\rho ^*$
and stop simulating computations above
$\gamma _3$
.
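The pigeonhole behind the choice of $\rho^*$ can be checked concretely: if $\Phi^{\rho^*}$ converges on every witness $x_{i,j}$, then $\rho^*$ must e-split with at least two of $\gamma_1$, $\gamma_2$, $\gamma_3$. In the sketch below, the finite value tables `phi_gamma` and `phi_rho` are hypothetical stand-ins for the convergent computations.

```python
def splits_with(phi_rho, phi_gamma, witnesses):
    """Which of the nodes gamma_i does rho* e-split with?  witnesses maps a
    pair (i, j) to an input x with phi_gamma[i][x] != phi_gamma[j][x]; since
    phi_rho[x] converges, it must disagree with at least one of the two."""
    out = set()
    for (i, j), x in witnesses.items():
        if phi_rho[x] != phi_gamma[i][x]:
            out.add(i)
        if phi_rho[x] != phi_gamma[j][x]:
            out.add(j)
    return out

# Hypothetical tables: gamma_1, gamma_2, gamma_3 pairwise e-split,
# witnessed at the inputs 0, 1, 2 respectively.
witnesses = {(1, 2): 0, (1, 3): 1, (2, 3): 2}
phi_gamma = {1: {0: 0, 1: 0}, 2: {0: 1, 2: 0}, 3: {1: 1, 2: 1}}
```

Whatever values $\Phi^{\rho^*}$ takes on the three witnesses, at least two indices survive, so two of the three candidates can be kept as e-splitting children.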

So of the extensions of
$\gamma $
, some are still simulated by
$\mathcal {L}$
, and others are not. Recall that
$\Psi ^{\rho }(y)$
converged, and we defined
$\Xi (y) = \Psi ^{\rho }(y)$
. If
$R(y) \neq \Psi ^{\rho }(y)$
then
$\mathcal {L}$
will insist that A extend
$\rho ^*$
to diagonalize; this is the finitary outcome of
$\mathcal {L}$
. Since in this case
$\mathcal {L}$
is satisfied with the finitary outcome by having
$\Psi ^A \neq R$
, computations extending
$\rho ^*$
do not have to be simulated. We call
$\rho ^*$
a diagonalization node (for
$\mathcal {L}$
); it is kept on the tree so that we can, if needed, meet the requirement
$\mathcal {L}$
with the finitary outcome. On the other hand, if
$\mathcal {L}$
fails to diagonalize and has the infinitary outcome, then A will need to extend the nodes
$\gamma _1$
and
$\gamma _2$
that are simulated. (If A extended
$\rho ^*$
, then it might be that
$\Psi ^A(z) \downarrow $
using an oracle extending
$\rho ^*$
, but
$\Xi (z)$
was not defined because
$\mathcal {L}$
did not simulate this computation.) We say that
$\gamma _1$
and
$\gamma _2$
are observation nodes (for
$\mathcal {L}$
); by this we mean that computations above them are still simulated by
$\mathcal {L}$
. We need two observation children
$\gamma _1$
and
$\gamma _2$
rather than just one in order to meet the requirements
$\mathcal {P}$
. The terms diagonalization node and observation node will not be used formally in the construction, but they will make explaining the construction easier.
We also note that we assumed that
$\mathcal {L}$
was still simulating computations above
$\gamma $
at the start of the construction. What we mean is that
$\gamma $
itself was an observation node for
$\mathcal {L}$
, as was its parent, and so on, back to some node which
$\mathcal {L}$
knows, via its guesses at the higher priority requirements, is an initial segment of A.
There are still some gaps in the above strategy that we must fix. What if, while looking for
$\rho ^*$
, we simulate a computation
$\Psi ^{\gamma _3}(z) \downarrow $
, and set
$\Xi (z) = \Psi ^{\gamma _3}(z)$
, and then only after this find that
$\rho ^* \ e$
-splits with
$\gamma _1$
and
$\gamma _2$
? We can no longer remove
$\gamma _3$
from the tree. Moreover, there might be many different nodes
$\rho $
that we cannot remove from the tree—indeed, it might be that around stage s, we cannot remove any nodes at height s from the tree, because each of them has some computation that we have simulated. To deal with this, we have to build e-splitting trees in a weaker way. It will no longer be the case that every pair of children of a node
$\sigma \ e$
-split, but we will still make sure that every pair of paths e-split.
So suppose again that we are trying to extend
$\gamma $
. Look for a pair of nodes
$\gamma _1,\gamma _2$
that e-split. Suppose that
$\rho _1,\ldots ,\rho _n$
are nodes for which we have found computations
$\Psi ^{\rho _i}(y_i) \downarrow $
and set
$\Xi (y_i) = \Psi ^{\rho _i}(y_i)$
. Like
$\rho $
above, we must keep
$\rho _1,\ldots ,\rho _n$
on the tree. (Footnote 1)
We stop simulating computations above
$\rho _1,\ldots ,\rho _n$
.

The nodes
$\gamma _1$
and
$\gamma _2$
are observation nodes for
$\mathcal {L}$
and the
$\rho _i$
are diagonalization nodes for
$\mathcal {L}$
.
Now at the next step we need to add extensions to
$\gamma _1$
and
$\gamma _2$
just as we added extensions of
$\gamma $
. We look for extensions
$\gamma _1^*$
and
$\gamma _1^{**}$
of
$\gamma _1$
,
$\gamma _2^*$
and
$\gamma _2^{**}$
of
$\gamma _2$
, and
$\rho _i^*$
and
$\rho _i^{**}$
of
$\rho _i$
such that all of these pairwise e-split. While we are looking for these,
$\mathcal {L}$
must simulate more computations at nodes
$\tau $
above
$\gamma _1$
and
$\gamma _2$
and for some of them we might see that
$\Psi ^\tau (y) \downarrow $
for some y and set
$\Xi (y) = \Psi ^\tau (y)$
. Just as we had to ensure that
$\rho _1,\rho _2,\ldots $
remained on the tree, we also have to ensure that these
$\tau $
remain on the tree. On the other hand, we no longer simulate computations above the
$\rho _i$
, and so we can just look for e-splits above each
$\rho _i$
without having to worry about
$\mathcal {L}$
.

Here
$\gamma _1^*$
,
$\gamma _1^{**}$
,
$\gamma _2^*$
, and
$\gamma _2^{**}$
are observation nodes for
$\mathcal {L}$
, since we still simulate computations above them. The
$\tau $
are diagonalization nodes for
$\mathcal {L}$
. The
$\rho _i^*$
and
$\rho _i^{**}$
are neither observation nodes nor diagonalization nodes for
$\mathcal {L}$
(though of course they are extensions of diagonalization nodes).
Now at the next step of extending the tree we need to extend all of these nodes to extensions that e-split with each other; but in doing so we will introduce further extensions (above
$\gamma _1^{*}$
,
$\gamma _1^{**}$
,
$\gamma _2^*$
, and
$\gamma _2^{**}$
) that do not e-split. So at no finite step do we get that everything e-splits with each other, but in the end every pair of paths will e-split with each other.
3 Outcomes and the general structure of the argument
Now that we have seen the basic strategies we return to talking about the priority ordering and the outcomes of the requirements. Order the requirements
$\mathcal {M}_e$
,
$\mathcal {L}_e$
, and
$\mathcal {P}_e$
as follows, from highest priority to lowest:
$\mathcal {M}_0,\ \mathcal {L}_0,\ \mathcal {P}_0,\ \mathcal {M}_1,\ \mathcal {L}_1,\ \mathcal {P}_1,\ \ldots .$
Each requirement has various possible outcomes:
-
• A requirement
$\mathcal {M}_e$ can either build an e-splitting tree, or it can build a tree forcing that
$\Phi _e^A$ is either partial or computable. In the former case, when
$\mathcal {M}_e$ builds an e-splitting tree, we say that
$\mathcal {M}_e$ has the infinitary outcome
$\infty $ . In the latter case, there is a node
$\sigma $ above which we do not find any more e-splittings. We say that
$\mathcal {M}_e$ has the finitary outcome
$\sigma $ .
-
• A requirement
$\mathcal {L}_{\langle e,i\rangle }$ can either have the simulation
$\Xi $ of
$\Psi _{e}$ be equal to
$R_{i}$ whenever they are both defined, or
$\mathcal {L}_{\langle e,i\rangle }$ can force A to extend a node
$\sigma $ , with
$\Psi _e^{\sigma }(x) \neq R_i(x)$ for some x. In the first case, we say that
$\mathcal {L}_{\langle e,i\rangle }$ has the infinitary outcome
$\infty $ , and in the latter case we say that
$\mathcal {L}_{\langle e,i\rangle }$ has the finitary outcome
$\sigma $ .
-
• A requirement
$\mathcal {P}_e$ chooses an initial segment
$\sigma $ of A that ensures that A is not equal to the eth c.e. set
$W_e$ . This node
$\sigma $ is the outcome of
$\mathcal {P}_e$ .
The tree of outcomes is the tree of finite strings
$\eta $
where
$\eta (3e)$
is an outcome for
$\mathcal {M}_e$
,
$\eta (3e+1)$
is an outcome for
$\mathcal {L}_e$
, and
$\eta (3e+2)$
is an outcome for
$\mathcal {P}_e$
. For convenience, given a requirement
$\mathcal {R}$
we write
$\eta (\mathcal {R})$
for the outcome of
$\mathcal {R}$
according to
$\eta $
:
$\eta (\mathcal {M}_e) = \eta (3e)$
,
$\eta (\mathcal {L}_e) = \eta (3e+1)$
, and
$\eta (\mathcal {P}_e) = \eta (3e+2)$
. Using this notation allows us to avoid having to remember exactly how we have indexed the entries of
$\eta $
. Given a requirement
$\mathcal {R}$
, we say that
$\eta $
is a guess by
$\mathcal {R}$
if
$\eta $
has an outcome for each requirement of higher priority than
$\mathcal {R}$
, e.g., a guess by
$\mathcal {L}_e$
is a string
$\eta $
of length
$3e+1$
with
outcomes $\eta (\mathcal {M}_j)$ , $\eta (\mathcal {L}_j)$ , and $\eta (\mathcal {P}_j)$ for each $j < e$ , together with an outcome $\eta (\mathcal {M}_e)$ for $\mathcal {M}_e$ .
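The indexing convention $\eta(\mathcal{M}_e) = \eta(3e)$, $\eta(\mathcal{L}_e) = \eta(3e+1)$, $\eta(\mathcal{P}_e) = \eta(3e+2)$ is just a direct lookup into the outcome string; as a small sketch (treating a guess as a Python sequence, with our own helper names):

```python
def outcome(eta, kind, e):
    """Look up the outcome that the guess eta assigns to a requirement.
    Requirements are ordered M_0, L_0, P_0, M_1, L_1, P_1, ..."""
    offset = {"M": 0, "L": 1, "P": 2}[kind]
    return eta[3 * e + offset]

def is_guess_by_L(eta, e):
    """A guess by L_e has outcomes for everything of higher priority,
    i.e., up to and including M_e, so it has length 3e + 1."""
    return len(eta) == 3 * e + 1
```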
Just as in the standard forcing construction of a minimal degree, we will meet the
$\mathcal {M}$
requirements one-by-one by asking, non-effectively for each e, whether there are enough e-splits on the tree
$T_{e-1}$
built by
$\mathcal {M}_{e-1}$
. If there are, then we build an e-splitting tree (by a complicated procedure taking into account the lower priority
$\mathcal {L}$
requirements), and if not, then we can find a tree forcing that
$\Phi _e^A$
is partial or computable. Let
$T_e$
be the e-splitting tree in the former case, or the tree forcing that
$\Phi _e^A$
is partial or computable in the latter case. In the former case,
$\mathcal {M}_e$
has the infinitary outcome, and in the latter case the finitary outcome. Having constructed
$T_{-1},\ldots ,T_e$
, we non-effectively determine the outcomes of
$\mathcal {L}_e$
and
$\mathcal {P}_e$
. (It is important here that
$T_{-1},\ldots ,T_e$
were specially constructed taking into account
$\mathcal {L}_e$
.) Then, having determined these outcomes, we turn to
$\mathcal {M}_{e+1}$
. The outcome of each requirement determines some initial segment of A, giving A as the limit of these extensions.
When
$\mathcal {M}_e$
is constructing
$T_e$
and taking into account the lower priority
$\mathcal {L}$
requirements, say some particular one
$\mathcal {L}$
,
$\mathcal {M}_e$
does not know the outcomes of the intermediate requirements of lower priority than
$\mathcal {M}_e$
but of higher priority than
$\mathcal {L}$
.
$\mathcal {L}$
, on the other hand, does know these outcomes. So
$\mathcal {M}_e$
will have to consider, for each guess
$\eta $
by
$\mathcal {L}$
at the higher priority outcomes, an instance
$\mathcal {L}^\eta $
of
$\mathcal {L}$
which makes this guess. Of course, once we come to actually meeting
$\mathcal {L}$
, there will only be one instance taking action, namely the one corresponding to the true outcomes of the higher priority requirements, but
$\mathcal {M}_e$
does not know which this is and so has to consider all possible instances. (Footnote 2)
The formal construction will be somewhat “static” in the sense that while
$\mathcal {M}_e$
is constructing
$T_e$
, it is not actually running, in a dynamic way, the simulation procedures defining
$\Xi $
for the low priority
$\mathcal {L}$
requirements. (See footnote 1.) Nevertheless it will be helpful to think of a dynamic construction for now, to see what kinds of interactions there are between the requirements; then in Section 5 we will develop a labeling system capturing these interactions in a static way. The e-splitting part of the construction will of course always be dynamic.
4 Interactions between three or more requirements
We now need to consider in more detail the interactions between the requirements. We saw in the previous section that an
$\mathcal {M}$
requirement must take into account lower priority
$\mathcal {L}$
requirements. In the full construction, we will have not only many different lower priority
$\mathcal {L}$
requirements, but also many different instances of each one that the
$\mathcal {M}$
requirement must take into account.
4.1 One
$\mathcal {M}$
requirement, two
$\mathcal {L}$
requirements
Consider three requirements:
$\mathcal {M} = \mathcal {M}_e$
of highest priority,
$\mathcal {L}_0$
of middle priority, and
$\mathcal {L}_1$
of lowest priority. Write
$\Psi _0$
,
$R_0$
, and
$\Xi _0$
and
$\Psi _1$
,
$R_1$
, and
$\Xi _1$
for the functionals and sets corresponding to
$\mathcal {L}_0$
and
$\mathcal {L}_1$
respectively. Suppose that both
$\mathcal {L}_0$
and
$\mathcal {L}_1$
correctly guess that
$\mathcal {M}$
has the infinitary outcome, building a splitting tree. Suppose also that
$\mathcal {L}_1$
correctly guesses that
$\mathcal {L}_0$
has the infinitary outcome, which means that if
$\Psi _0^A = R_0$
is total, then we must have
$\Xi _0 = \Psi _0^A$
; this means that
$\mathcal {L}_0$
must simulate every initial segment of A, i.e., they must all be observation nodes for
$\mathcal {L}_0$
.
Suppose that the requirement
$\mathcal {M}$
is trying to extend a node
$\gamma $
. We use the same strategy as before to extend
$\gamma $
, keeping certain nodes
$\rho _1,\rho _2,\ldots $
on the tree. These
$\rho _i$
might be kept on the tree either for the requirement
$\mathcal {L}_0$
(because we defined
$\Xi _0(y_i) = \Psi _0^{\rho _i}(y_i)$
) or for the requirement
$\mathcal {L}_1$
(because we defined
$\Xi _1(z_i) = \Psi _1^{\rho _i}(z_i)$
). Indeed, for each
$\rho _i$
, it might be that both requirements want to keep
$\rho _i$
on the tree.

The requirement
$\mathcal {L}_1$
has guessed that the outcome of
$\mathcal {L}_0$
is the infinitary outcome, which means that
$\mathcal {L}_0$
does not need to force A to extend one of
$\rho _1,\rho _2,\ldots $
. This means that
$\mathcal {L}_1$
can use the same strategy as in the previous section with only one lowness requirement, and stop simulating computations above the
$\rho _i$
.

Now depending on the outcome of
$\mathcal {L}_1$
it might force A to extend any one of these nodes. (For example, if
$\mathcal {L}_1$
has the finitary outcome
$\rho _i$
, then A must extend
$\rho _i$
in order to obtain a diagonalization
$\Psi _1^A \neq R_1$
; and if
$\mathcal {L}_1$
has the infinitary outcome, then A will extend one of
$\gamma _1$
or
$\gamma _2$
.)
$\mathcal {L}_0$
must simulate initial segments of A, but it does not know the outcome of
$\mathcal {L}_1$
which is a lower priority requirement. So
$\mathcal {L}_0$
must simulate computations through all of these nodes.

Here
$\gamma _1$
and
$\gamma _2$
are the observation nodes for
$\mathcal {L}_1$
, while the
$\rho _i$
are diagonalization nodes for
$\mathcal {L}_1$
. All of
$\gamma _1$
,
$\gamma _2$
, and the
$\rho _i$
are observation nodes for
$\mathcal {L}_0$
All of the observation and diagonalization nodes for
$\mathcal {L}_1$
are observation nodes for
$\mathcal {L}_0$
.
Now in the next step we find extensions
$\gamma _1^{*}$
and
$\gamma _1^{**}$
of
$\gamma _1$
,
$\gamma _2^{*}$
and
$\gamma _2^{**}$
of
$\gamma _2$
, and
$\rho _1^*$
and
$\rho _1^{**}$
of
$\rho _1$
that e-split with each other. Before, we could simply extend
$\rho _1$
to
$\rho _1^*$
and
$\rho _1^{**}$
. Now, while we are looking for the extensions,
$\mathcal {L}_0$
might simulate other computations, say
$\tau _1$
extending
$\rho _1$
,
$\tau _2$
extending
$\rho _2$
, and so on. If
$\mathcal {L}_0$
sees that
$\Psi _0^{\tau _i}(w_i) \downarrow $
and sets
$\Xi _0(w_i) = \Psi _0^{\tau _i}(w_i)$
, then
$\tau _i$
must be kept on the tree. (Of course, in general, each
$\rho _i$
might have many such nodes extending it.) Above each
$\rho _i$
, we are essentially in the case from the previous section of having only one
$\mathcal {L}$
requirement; so as before,
$\mathcal {L}_0$
stops simulating above the
$\tau _i$
but continues simulating above the
$\rho _i^*$
and
$\rho _i^{**}$
. Above
$\gamma _1$
and
$\gamma _2$
, we do the same thing that we did above
$\gamma $
.

Here
$\gamma _1^*$
,
$\gamma _1^{**}$
,
$\gamma _2^{*}$
, and
$\gamma _2^{**}$
are observation nodes for
$\mathcal {L}_1$
, while the other (unnamed) children of
$\gamma _1$
and
$\gamma _2$
are diagonalization nodes for
$\mathcal {L}_1$
. The
$\rho _i^*$
,
$\rho _i^{**}$
, and
$\tau _i$
are neither observation nodes nor diagonalization nodes for
$\mathcal {L}_1$
but they extend the diagonalization nodes
$\rho _i$
. All of the children of
$\gamma _1$
and
$\gamma _2$
, together with the
$\rho _i^*$
and
$\rho _i^{**}$
, are observation nodes for
$\mathcal {L}_0$
. The
$\tau _i$
are diagonalization nodes for
$\mathcal {L}_0$
. Again, all of the observation and diagonalization nodes for
$\mathcal {L}_1$
are observation nodes for
$\mathcal {L}_0$
.
Note that, for example,
$\tau _1$
may not e-split with
$\gamma _1^{*}$
. This is because we were not able to choose
$\rho _1$
or
$\tau _1$
; they were both forced onto the tree by the lowness requirements. But we have still made some progress: computations above
$\tau _1$
are not being simulated, and so when we extend
$\tau _1$
, we can find extensions that e-split with the e-splitting extensions of the other nodes.
Note that the number of levels we wait to find e-splits is now two, since there are two lowness requirements, whereas in Section 2.4 we only had to wait one level. In general, the more lowness requirements we are considering, the longer we have to wait before we can find e-splits along all paths. So it is important that we only deal with finitely many lowness requirements at a time, so that we eventually arrive at a point where parts of the tree are no longer being simulated by any lowness requirement. In the full construction, there will be some important bookkeeping to manage which lowness requirements are being considered at any particular time, so that we sufficiently delay the introduction of new lowness requirements. (This will be accomplished by giving each element of the tree a scope; see Section 5.)
Note also that every node simulated by
$\mathcal {L}_1$
is also simulated by the lower priority requirement
$\mathcal {L}_0$
, but that there are nodes simulated by
$\mathcal {L}_0$
which are not simulated by
$\mathcal {L}_1$
. Moreover, every diagonalization node for
$\mathcal {L}_1$
is an observation node for
$\mathcal {L}_0$
and so the nodes above diagonalization nodes for
$\mathcal {L}_1$
are simulated by
$\mathcal {L}_0$
. This is the first example of a relationship between the nodes simulated by one requirement and the nodes simulated by another requirement. There will be further such relationships as well.
4.2 Two
$\mathcal {M}$
requirements and one
$\mathcal {L}$
requirement
Consider three requirements:
$\mathcal {M}_0$
of highest priority,
$\mathcal {M}_1$
of middle priority, and
$\mathcal {L}$
of lowest priority. Suppose that
$\mathcal {M}_0$
is trying to build a
$0$
-splitting tree
$T_0$
, and that
$\mathcal {M}_1$
and
$\mathcal {L}$
correctly guess that it is successful in doing so and has the infinitary outcome. Suppose further that
$\mathcal {M}_1$
is trying to build a
$1$
-splitting tree
$T_1$
, and
$\mathcal {L}$
is working with
$\Xi $
,
$\Psi $
, and R.
When
$\mathcal {M}_0$
is building
$T_0$
we follow the same strategy as before. When extending a node
$\gamma $
which is simulated by
$\mathcal {L}$
, we have two children which are observation nodes, and a number of other children which are diagonalization nodes.

So we can think of the tree
$T_0$
as a finitely branching tree which has, embedded in it, a subtree of nodes which are observation nodes for
$\mathcal {L}$
; each such node
has at least two children which are themselves observation nodes for
$\mathcal {L}$
. We can picture the tree
$T_0$
as looking something like this. We show the observation nodes with filled in circles, and the diagonalization nodes with open circles.

The children along bold lines are observation nodes which are children of observation nodes, and so on back to the root node. (See footnote 3.)
4.2.1
$\mathcal {M}_1$
has the infinitary outcome.
Suppose that
$\mathcal {M}_1$
succeeds in building a
$1$
-splitting subtree
$T_1$
of
$T_0$
, and that
$\mathcal {L}$
correctly guesses this.
Since
$\mathcal {L}$
is of lower priority than
$\mathcal {M}_1$
,
$\mathcal {M}_1$
does not know the outcome of
$\mathcal {L}$
. So when
$\mathcal {L}$
simulates a computation
$\Psi ^\rho (x)\downarrow $
and sets
$\Xi (x) = \Psi ^\rho (x)$
,
$\mathcal {M}_1$
must keep
$\rho $
on the tree
$T_1$
. That is,
$\mathcal {M}_1$
must keep the diagonalization nodes for
$\mathcal {L}$
on
$T_1$
.
$\mathcal {L}$
will have the finitary outcome and have A extend one of these
$\rho $
if, by doing so, it can ensure that
$\Psi ^A \neq R$
.
Now suppose that
$\mathcal {L}$
has the infinitary outcome, though of course
$\mathcal {M}_1$
does not know this. Then:
-
(1) Because
$\mathcal {L}$ has the infinitary outcome,
$\mathcal {L}$ must simulate all of the initial segments of A, i.e., the initial segments of A must be observation nodes on
$T_0$ .
-
(2) On the other hand, to meet
$\mathcal {M}_1$ , A must be on the
$1$ -splitting tree
$T_1$ .
Every path through
$T_1$
contains, as initial segments, many nodes which are part of a
$1$
-split. In order to reconcile (1) and (2) it must be that when
$\mathcal {M}_1$
is looking for 1-splits above an observation node for
$\mathcal {L}$
, it cannot look for just any 1-split; it should look through extensions that are also observation nodes for
$\mathcal {L}$
. In the picture of
$T_0$
above with the bold lines and dashed lines, this means that when
$\mathcal {M}_1$
is looking for a 1-split above a bold node, it should look through the bold subtree. (Of course in building
$T_1$
the requirement
$\mathcal {M}_1$
must also implement the rest of the strategy given above, keeping certain nodes
$\rho $
on the tree, etc.)
4.2.2
$\mathcal {M}_1$
has the finitary outcome.
Suppose that
$\mathcal {M}_1$
fails to build a 1-splitting subtree of
$T_0$
and that
$\mathcal {L}$
correctly guesses this. This means that there is some node
$\gamma $
on
$T_0$
such that
$\mathcal {M}_1$
cannot find a 1-split extending
$\gamma $
, or it cannot find a convergence of
$\Psi $
on some input above
$\gamma $
. In the standard construction of a minimal degree, we would then define a tree
$T_1^*$
as the full subtree of
$T_0$
above
$\gamma $
;
$T_1^*$
forces that
$\Phi ^A_1$
is partial or computable for every
$A \in [T_1^*]$
. However, we just saw that
$\mathcal {M}_1$
might not look for
$1$
-splits through all of the extensions of
$\gamma $
on
$T_0$
, but just through some subtree of
$T_0$
above
$\gamma $
. Then we must define
$T_1^*$
not to be the full subtree of
$T_0$
above
$\gamma $
, but rather to be the subtree of
$T_0$
above
$\gamma $
where
$\mathcal {M}_1$
looked for (and failed to find) a
$1$
-split.
We meet
$\mathcal {M}_1$
by having A be a path through
$T_1^*$
. Since
$\mathcal {L}$
correctly guessed that
$\mathcal {M}_1$
has the finitary outcome,
$\gamma $
, we must ensure that
$\mathcal {L}$
is satisfied. When we defined
$T_0$
, there were certain nodes
$\rho $
(the diagonalization nodes for
$\mathcal {L}$
) for which
$\mathcal {L}$
found computations
$\Psi ^{\rho }(x) \downarrow $
and set
$\Xi (x) = \Psi ^{\rho }(x)$
.
$\mathcal {M}_0$
was required to ensure that these nodes
$\rho $
stayed on the tree
$T_0$
. Similarly, these nodes
$\rho $
must stay on the tree
$T_1^*$
; if they were not on
$T_1^*$
, then we would lose the opportunity to achieve diagonalizations
$\Psi ^A \neq R$
. However, the tree
$T_1^*$
is not being defined dynamically, so we cannot take some dynamic action to keep these nodes on the tree. Instead, we must ensure that these nodes are among those we look for 1-splits through, so that they end up on
$T_1^*$
.
4.3 Two
$\mathcal {M}$
requirements and two
$\mathcal {L}$
requirements
Now consider four requirements:
$\mathcal {M}_0$
of highest priority,
$\mathcal {M}_1$
of middle priority, and
$\mathcal {L}_0$
and
$\mathcal {L}_1$
of lowest priority. As above, suppose that
$\mathcal {M}_0$
has the infinitary outcome, successfully building a
$0$
-splitting tree
$T_0$
, and that all of the other requirements correctly guess this. Suppose that
$\mathcal {L}_0$
guesses that
$\mathcal {M}_1$
has the infinitary outcome, successfully building a
$1$
-splitting subtree
$T_1$
of
$T_0$
, and suppose that
$\mathcal {L}_1$
guesses that
$\mathcal {M}_1$
has the finitary outcome, failing to build a
$1$
-splitting tree. (
$\mathcal {L}_0$
and
$\mathcal {L}_1$
might be of any priority ordering relative to each other, or might even be instances of the same lowness requirement which have different guesses about
$\mathcal {M}_1$
.) This is the combination of the situations described in Sections 4.2.1 and 4.2.2. The observations there create a dependence between
$\mathcal {L}_0$
and
$\mathcal {L}_1$
:
-
(1) When
$\mathcal {M}_1$ looks for a
$1$ -split extending an observation node for
$\mathcal {L}_0$ , it should look through extensions which are also observation nodes (Section 4.2.1).
-
(2) The diagonalization nodes for
$\mathcal {L}_1$ must be among those through which
$\mathcal {M}_1$ looks for
$1$ -splits (Section 4.2.2).
Taken together, this means that
$\mathcal {L}_0$
must simulate computations at the diagonalization nodes for
$\mathcal {L}_1$
, i.e., the diagonalization nodes for
$\mathcal {L}_1$
should be observation nodes for
$\mathcal {L}_0$
. This might seem somewhat odd, because
$\mathcal {L}_0$
and
$\mathcal {L}_1$
are incompatible in the sense that they have different guesses at the outcome of the higher priority requirement
$\mathcal {M}_1$
, and because
$\mathcal {L}_0$
and
$\mathcal {L}_1$
might be of any priority ordering relative to each other. Nevertheless, they are entangled; this is a significant source of combinatorial complexity in the full construction.
Now let us return to the construction of
$T_0$
by
$\mathcal {M}_0$
in the presence of these two requirements
$\mathcal {L}_0$
and
$\mathcal {L}_1$
. The construction goes in exactly the same way as in Section 4.1, though some of the reasoning is slightly different. In Section 4.1 the requirement
$\mathcal {L}_0$
was of higher priority than
$\mathcal {L}_1$
, and
$\mathcal {L}_1$
guessed that
$\mathcal {L}_0$
has the infinitary outcome. Now
$\mathcal {L}_0$
might be of lower priority than
$\mathcal {L}_1$
(or
$\mathcal {L}_0$
and
$\mathcal {L}_1$
might even be different instances of the same requirement); but there is some minimality requirement
$\mathcal {M}_1$
such that
$\mathcal {L}_0$
guesses that
$\mathcal {M}_1$
has the infinitary outcome and
$\mathcal {L}_1$
guesses that it has the finitary outcome. As
$\mathcal {M}_0$
has higher priority than
$\mathcal {M}_1$
,
$\mathcal {M}_0$
does not know the outcome of
$\mathcal {M}_1$
, and hence does not know which of
$\mathcal {L}_0$
or
$\mathcal {L}_1$
has guessed correctly. Thus it has to take both of them into account.
As in Section 4.1 suppose that
$\mathcal {M}_0$
is extending a node
$\gamma $
which is simulated by both
$\mathcal {L}_0$
and
$\mathcal {L}_1$
, and suppose that
$\mathcal {M}_0$
must keep nodes
$\rho _1,\rho _2,\ldots $
on
$T_0$
because we have set
$\Xi _1(x_i) = \Psi _1^{\rho _i}(x_i)$
. As in Section 4.1 we will make the extension as follows:

This is exactly the same as in Section 4.1, but now the argument that
$\mathcal {L}_0$
must continue to simulate computations above
$\rho _1,\rho _2,\ldots $
is different from before. It is now because the
$\rho _i$
are diagonalization nodes for
$\mathcal {L}_1$
and, as argued above, as a consequence of (1) and (2) these must be observation nodes for
$\mathcal {L}_0$
.
4.4 The relation of watching
We have already seen two situations in which a requirement
$\mathcal {L}_1$
must ensure that the diagonalization nodes for another requirement
$\mathcal {L}_0$
are all observation nodes for
$\mathcal {L}_1$
, thereby ensuring that
$\mathcal {L}_1$
simulates computations above these diagonalization nodes for
$\mathcal {L}_0$
.
We will introduce a relation
$\mathcal {L}_{d_1}^{\nu _1} \rightsquigarrow \mathcal {L}_{d_2}^{\nu _2}$
to represent this. We suggest reading
$\mathcal {L}_{d_1}^{\nu _1} \rightsquigarrow \mathcal {L}_{d_2}^{\nu _2}$
as “
$\mathcal {L}_{d_1}^{\nu _1}$
watches
$\mathcal {L}_{d_2}^{\nu _2}$
.” The two situations we have seen are:
-
(1) If
$d_1 < d_2$ and
$\nu _1 \prec \nu _2$ then we should have
$\mathcal {L}_{d_1}^{\nu _1} \rightsquigarrow \mathcal {L}_{d_2}^{\nu _2}$ (Section 4.1).
-
(2) If
$\mathcal {L}_{d_1}^{\nu _1}$ and
$\mathcal {L}_{d_2}^{\nu _2}$ first disagree on the outcome of a minimality requirement
$\mathcal {M}$ ,
$\mathcal {L}_{d_1}^{\nu _1}$ guesses that
$\mathcal {M}$ will have the infinitary outcome, and
$\mathcal {L}_{d_2}^{\nu _2}$ guesses that it will have the finitary outcome, then we should have
$\mathcal {L}_{d_1}^{\nu _1} \rightsquigarrow \mathcal {L}_{d_2}^{\nu _2}$ (Section 4.3).
The reader will immediately notice that this resembles a lexicographic ordering. We will give a formal definition of
$\rightsquigarrow $
which captures (1) and (2).
Definition 4.1. Given a string of outcomes
$\nu $
, define
$\Delta (\nu ) \in \{\mathsf {f},\infty \}^{< \omega }$
to be the string

except that we replace any entry which is not
$\infty $
with
$\mathsf {f}$
. Put an ordering
$\precsim $
on these using the lexicographic order with
$\infty < \mathsf {f}$
.
Definition 4.2. Suppose that
$\nu _1$
and
$\nu _2$
are guesses by
$\mathcal {L}_{d_1}$
and
$\mathcal {L}_{d_2}$
respectively at the outcomes of the higher priority requirements. Define
$\mathcal {L}_{d_1}^{\nu _1} \rightsquigarrow \mathcal {L}_{d_2}^{\nu _2}$
if and only if
$\Delta (\nu _1) \prec \Delta (\nu _2)$
in the ordering just defined.
$\mathcal {L}_{d_1}^{\nu _1} \rightsquigarrow \mathcal {L}_{d_2}^{\nu _2}$
means that, given a node
$\sigma $
which is an observation node for both requirements, any child of
$\sigma $
which is an observation or diagonalization node for
$\mathcal {L}_{d_2}^{\nu _2}$
should be an observation node for
$\mathcal {L}_{d_1}^{\nu _1}$
.
4.5 Two
$\mathcal {M}$
requirements and one
$\mathcal {L}$
requirement, sandwiched
In Section 4.2 we considered the case of three requirements:
$\mathcal {M}_0$
of highest priority,
$\mathcal {M}_1$
of middle priority, and
$\mathcal {L}$
of lowest priority. Consider now three requirements:
$\mathcal {M}_e$
of highest priority,
$\mathcal {L}_e$
of middle priority, and
$\mathcal {M}_{e+1}$
of lowest priority. Suppose that
$\mathcal {M}_e$
successfully builds an e-splitting tree
$T_e$
, and that
$\mathcal {L}_e$
correctly guesses this.
Consider the diagram from Section 4.2 of the tree
$T_e$
, with the observation nodes for
$\mathcal {L}_e$
given in bold. We show the observation nodes with filled in circles, and the diagonalization nodes with open circles.

Suppose moreover that
$\mathcal {L}_e$
has the infinitary outcome. This means that
$\mathcal {L}_e$
was not able to obtain a diagonalization by having A extend one of the diagonalization nodes. So as previously described,
$\mathcal {L}_e$
requires that A be a path through the observation nodes for
$\mathcal {L}_e$
. The way that we will enforce this is to ensure that when
$\mathcal {M}_{e+1}$
builds an
$e+1$
-splitting tree
$T_{e+1}$
, or a tree
$T_{e+1}^*$
with no
$e+1$
-splits, it builds it as a subtree of the observation nodes for
$\mathcal {L}_e$
. (Recall also from Section 4.4 that
$\mathcal {L}_e$
is minimal under the watching relation among all instances of lowness requirements of lower priority than
$\mathcal {M}_e$
; this means that all of the diagonalization nodes for these requirements are observation nodes for
$\mathcal {L}_e$
, and so these diagonalization nodes can all be preserved on the tree built by
$\mathcal {M}_{e+1}$
.)
5 The labeling system
In the full construction there will be many things to keep track of, such as which nodes are observation nodes or diagonalization nodes for which
$\mathcal {L}$
requirements, how many
$\mathcal {L}$
requirements we are considering, and so on. In this section we will describe a labeling system that will allow us to keep track of all of this. These labels have nothing to do with the labels in a full approximation argument.
One simplification we can make is to note that the relation
$\mathcal {L}_{d_1}^{\nu _1} \rightsquigarrow \mathcal {L}_{d_2}^{\nu _2}$
depends entirely on
$\Delta (\nu _1)$
and
$\Delta (\nu _2)$
, that is, only on the guesses at the
$\mathcal {M}$
outcomes, and only at whether the outcome is finitary or infinitary. Requirements
$\mathcal {L}_{d_1}^{\nu _1}$
and
$\mathcal {L}_{d_2}^{\nu _2}$
with
$\Delta (\nu _1) = \Delta (\nu _2)$
can then be treated the same in terms of the labeling. Of course, there are some differences; for example, if
$\mathcal {L}_{d_1}^{\nu _1}$
knows from its guesses at the higher priority requirements that A must extend a node
$\sigma $
, then it will not simulate any computation incompatible with
$\sigma $
. So the labels on their own do not completely determine which nodes are simulated by
$\mathcal {L}_{d_1}^{\nu _1}$
, or which nodes are observation nodes or diagonalization nodes, but they are sufficient together with the initial segment of A guessed by
$\nu _1$
. Think of this initial segment of A as being the first observation node of
$\mathcal {L}_{d_1}^{\nu _1}$
, so that there is a subtree of observation nodes with it as a root.
Consider the e-splitting tree T constructed by
$\mathcal {M}_e$
. Let
$\xi $
be the outcomes of the higher priority requirements. When building T,
$\mathcal {M}_e$
must take into account instances of lowness requirements
$\mathcal {L}^\eta _d$
with
$d \geq e$
and
$\eta $
extending
$\xi ^\frown \infty $
. (Since T is being built under the assumption that
$\mathcal {M}_e$
has the infinitary outcome, it only needs to respect lowness requirements that guess that this is the case.) When considering
$\mathcal {L}_d^\eta $
while building T, we will only need to know the guesses by
$\mathcal {L}_d^\eta $
at the outcomes of the
$\mathcal {M}$
requirements
$\mathcal {M}_{e+1},\ldots ,\mathcal {M}_d$
of lower priority than
$\mathcal {M}_e$
but higher priority than
$\mathcal {L}_d$
(because the only lowness requirements we need to consider have the same guesses at the requirements
$\mathcal {M}_1,\ldots ,\mathcal {M}_e$
), and moreover we will only care about whether the guess is the infinitary outcome or a finitary outcome.
Definition 5.1. Given an instance
$\mathcal {L}_{e+n}^\eta $
of a lowness requirement of lower priority than
$\mathcal {M}_e$
, we write
$\Delta _{> e}(\eta )$
for

Recall that
$\Delta (\eta )$
just replaces the entries of
$\eta $
by
$\mathsf {f}$
or
$\infty $
;
$\Delta _{> e}(\eta )$
is the string which consists of the guesses by
$\eta $
at the outcomes (finitary
$\mathsf {f}$
or infinitary
$\infty $
) of
$\mathcal {M}_{e+1},\ldots ,\mathcal {M}_d$
.
The tree T will be a labeled tree. These labels have nothing to do with the labels used in a full approximation construction, which represent a guess at whether we can find splits above a node. Instead, these signify whether a node is an observation node, a diagonalization node, or neither for the various
$\mathcal {L}$
requirements. Figure 1 below may help the reader to understand the structure of the tree. In Section 6 we will also revisit some of the examples from Sections 2 and 4 and show how the labels work there.

Figure 1 An example of what the labeled tree might look like. We draw the main children with a solid line and the secondary children with a dashed line. Expansionary nodes are shown by a circle. To fit the tree onto a single page, we have made some simplifications: (a) we have omitted some nodes from the diagram; (b) we have assumed that each node has only one secondary child; and (c) we have assumed that the second expansionary level
$e_2$
is much smaller than the value of 128 that we set in the construction. We show the second expansionary level with the long horizontal dashed line.
Recall from Section 4.1 that at any point in the construction we will consider only finitely many
$\mathcal {L}$
requirements. Each node
$\sigma $
is given a scope
$\operatorname {\mathrm {scope}}(\sigma )$
, which is a natural number that represents the number of lowness requirements that are being considered at this level of the tree, i.e., if the scope of a node is n, then we are considering lowness requirements
$\mathcal {L}_e,\ldots ,\mathcal {L}_{e+n}$
. Each node
$\sigma $
will also be given label
$\ell (\sigma )$
. The label is an element of
$\mathtt {Labels} = \{\mathsf {f},\infty \}^{<\omega } \cup \{\top \}.$
If a node
$\sigma $
has scope n, then the label of
$\sigma $
will be an element of
$\mathtt {Labels}_n = \{\mathsf {f},\infty \}^{\leq n} \cup \{\top \}.$
Note that the label might have length less than n, and might even be the empty string; an element of
$\mathtt {Labels}_n$
of length m corresponds to a guess by
$\mathcal {L}_{e+m}$
at the outcomes of
$\mathcal {M}_{e+1},\ldots ,\mathcal {M}_{e+m}$
. We order the labels lexicographically with
$\infty < \mathsf {f}$
, and with
$\top $
as the greatest element. For example, in
$\mathtt {Labels}_2$
, we have
$\varnothing \prec \infty \prec \infty \infty \prec \infty \mathsf {f} \prec \mathsf {f} \prec \mathsf {f} \infty \prec \mathsf {f} \mathsf {f} \prec \top .$
We use
$\preceq $
for this ordering, which is of course the same ordering that defined the watching relation
$\rightsquigarrow $
. We often think of this ordering as being an ordering
$\preceq _n$
on
$\mathtt {Labels}_n$
, and write
$\operatorname {\mathrm {pred}}_n(\eta )$
for the predecessor of
$\eta $
in
$\mathtt {Labels}_n$
. Though
$\mathtt {Labels}$
is well-founded, it does not have order type
$\omega $
, and so we need to restrict to
$\mathtt {Labels}_n$
to make sense of the predecessor operator. The label
$\mathsf {f} \infty $
, for example, represents the instances of
$\mathcal {L}_{e+2}$
which guess that
$\mathcal {M}_{e+1}$
has the finitary outcome and
$\mathcal {M}_{e+2}$
has the infinitary outcome; the label
$\varnothing $
represents
$\mathcal {L}_e$
, which has no
$\mathcal {M}$
requirements of lower priority than
$\mathcal {M}_e$
to guess at. In
$\mathtt {Labels}_2$
, we have
$\operatorname {\mathrm {pred}}_2(\mathsf {f}) = \infty \mathsf {f}$
, whereas in
$\mathtt {Labels}_1$
we have
$\operatorname {\mathrm {pred}}_1(\mathsf {f}) = \infty $
.
We think of elements of
$\{\mathsf {f},\infty \}^n$
as guesses by
$\mathcal {L}_{e+n}$
at the outcomes of
$\mathcal {M}_{e+1},\ldots ,\mathcal {M}_{e+n}$
. Think of a label
$\ell $
of length n as being associated with all of the lowness requirements
$\mathcal {L}^\eta _{e+n}$
with
$\eta $
extending
$\xi ^\frown \infty $
and with
$\Delta _{>e}(\eta ) = \ell $
. Given two requirements
$\mathcal {L}_{d_1}^{\nu _1}$
and
$\mathcal {L}_{d_2}^{\nu _2}$
with
$\nu _1,\nu _2$
extending
$\xi ^\frown \infty $
,
$\mathcal {L}_{d_1}^{\nu _1} \rightsquigarrow \mathcal {L}_{d_2}^{\nu _2}$
if and only if
$\Delta _{> e}(\nu _1) \prec \Delta _{> e}(\nu _2)$
lexicographically.
The labels
$\ell (\sigma ^*)$
are the formal replacement for the more informal notions of diagonalization nodes and observation nodes. Suppose that
$\sigma ^*$
is a child of
$\sigma $
on T. Giving
$\sigma ^*$
the label
$\ell (\sigma ^*)$
means that
$\sigma ^*$
is a diagonalization node on T for requirements
$\mathcal {L}_{e+|\ell (\sigma ^*)|}^\eta $
with
$\Delta _{>e}(\eta ) = \ell (\sigma ^*)$
and for which
$\sigma $
was an observation node. Combining this with the relation
$\rightsquigarrow $
, we get that if
$\mathcal {L}_d^\eta $
is an instance of a lowness requirement, and
$\sigma ^*$
is a child of
$\sigma $
on T, with
$d \leq e+\operatorname {\mathrm {scope}}(\sigma ^*)$
, then:
-
• if
$ \ell (\sigma ^*) \succ \Delta _{> e}(\eta )$ , then if
$\sigma $ is an observation node for
$\mathcal {L}_d^\eta $ then
$\sigma ^*$ is an observation node for
$\mathcal {L}_d^\eta $ and
-
• if
$ \ell (\sigma ^*) = \Delta _{> e}(\eta )$ , then if
$\sigma $ is an observation node for
$\mathcal {L}_d^\eta $ then
$\sigma ^*$ is a diagonalization node for
$\mathcal {L}_d^\eta $ .
If
$\ell (\sigma ^*) = \top $
then
$\sigma ^*$
is an observation node for every requirement
$\mathcal {L}_d^\eta $
for which the parent node
$\sigma $
was an observation node. These are nodes such as
$\gamma _1$
and
$\gamma _2$
from the examples of Sections 2.4 and 4.1. If
$\ell (\sigma ^*) = \varnothing $
then
$\sigma ^*$
is not an observation node for any requirement (but, if
$\sigma $
was an observation node for
$\mathcal {L}_e$
, then
$\sigma ^*$
is a diagonalization node for
$\mathcal {L}_e$
). These are nodes such as the
$\rho $
from Section 2.4 and the
$\tau $
from Section 4.1; the
$\rho $
in Section 4.1 are observation nodes for
$\mathcal {L}_0$
and hence would not be labeled
$\varnothing $
.
The following notation will be helpful for talking about observation nodes among other things.
Definition 5.2. For any labeled tree S, node
$\tau \in S$
, and relation
$R(\sigma ^*)$
(or even a relation
$R(\sigma ^*,\sigma )$
between a node
$\sigma ^*$
and its parent
$\sigma $
), we can define the subtree
as the tree with root node
$\tau $
, and such that whenever
, the children
$\sigma ^*$
of
$\sigma $
on
are exactly the children
$\sigma ^*$
of
$\sigma $
on S such that
$R(\sigma ^*)$
holds (or
$R(\sigma ^*,\sigma )$
).
We will use this notation for the trees
and
for
$\eta \in \mathtt {Labels}$
, and for
(to be defined). So:
-
• if
$\sigma $ is an observation node for
$\mathcal {L}_d^\eta $ then any element of
is also an observation node for
$\mathcal {L}_d^\eta $ and
-
• if
$\sigma $ is an observation node for
$\mathcal {L}_d^\eta $ ,
, and
$\sigma ^*$ is a child of
$\sigma '$ with
$\ell (\sigma ^*) = \Delta _{>e}(\eta )$ , then
$\sigma ^*$ is a diagonalization node for
$\mathcal {L}_d^\eta $ .
We need a system by which we consider more and more requirements
$\mathcal {L}$
as we move up the tree. We must be careful about how we introduce new requirements, because introducing a new requirement could be disruptive to our strategy. Certain levels of the tree
$T_e$
will be called expansionary levels. From the nth expansionary level of the tree on, we will consider requirements
$\mathcal {L}_{e+1},\ldots ,\mathcal {L}_{e+n}$
, using guesses at the outcomes of
$\mathcal {M}_{e+1},\ldots ,\mathcal {M}_{e+n}$
. The nodes at the nth expansionary level or higher, but below the (
$n+1$
)st expansionary level, will be said to be in the nth strip. We write
$e_1,e_2,e_3,\ldots $
for the expansionary levels. Conveniently we can precompute the expansionary levels; they are defined statically by
$e_1 = 0$
and

One might expect that if
$\sigma $
is in the nth strip, then
$\operatorname {\mathrm {scope}}(\sigma )$
will be n. This will not quite be the case; an expansionary level is where we start considering more requirements, but this might not happen immediately for particular nodes. Instead, if
$\sigma $
is in the nth strip, we will have
$\operatorname {\mathrm {scope}}(\sigma ) = n-1$
or
$\operatorname {\mathrm {scope}}(\sigma ) = n$
. The scope of a child will always be at least the scope of its parent. We say that
$\sigma ^*$
, a child of
$\sigma $
, is an expansionary node if
$\operatorname {\mathrm {scope}}(\sigma ^*)> \operatorname {\mathrm {scope}}(\sigma )$
. We say that an expansionary node
$\sigma ^*$
is an nth expansionary node if
$\operatorname {\mathrm {scope}}(\sigma ^*) = n$
The scopes will be non-decreasing along paths on the tree. Along any path in the tree, there is exactly one nth expansionary node for each n: since the scopes are non-decreasing there is at most one, and we will show in Lemma 7.3 that there is at least one. Moreover, the nth expansionary node along a path will occur in the nth strip.
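The precise recurrence defining the expansionary levels appears in the elided display above. For illustration only, the sketch below assumes $e_1 = 0$ and $e_{n+1} = e_n + 2^{n+5}$, a choice consistent with the gap bounds invoked in Lemmas 7.3 and 7.5; it computes which strip a given tree level falls in. The recurrence here is our assumption, not the paper's definition.

```python
def expansionary_levels(n_max):
    """First n_max expansionary levels e_1, e_2, ...

    Illustrative only: we assume e_1 = 0 and e_{n+1} = e_n + 2**(n+5),
    which satisfies the gap bounds used in Lemmas 7.3 and 7.5."""
    levels = [0]  # e_1 = 0
    for n in range(1, n_max):
        levels.append(levels[-1] + 2 ** (n + 5))
    return levels

def strip(level, levels):
    """Index n of the strip containing a tree level: a level is in the
    nth strip if it is at or above the nth expansionary level and
    strictly below the (n+1)st."""
    n = 0
    for i, e in enumerate(levels, start=1):
        if level >= e:
            n = i
    return n
```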
Suppose that
$\sigma ^*$
is a child of
$\sigma $
, with
$\operatorname {\mathrm {scope}}(\sigma ) = n$
. If
$\sigma ^*$
is an
$(n+1)$
st expansionary node, then we will have
$\operatorname {\mathrm {scope}}(\sigma ^*) = n+1$
and
$\ell (\sigma ^*) = \top $
. Otherwise,
$\operatorname {\mathrm {scope}}(\sigma ^*) = \operatorname {\mathrm {scope}}(\sigma ) = n$
and we will have either
$\ell (\sigma ^*) = \ell (\sigma )$
or
$\ell (\sigma ^*) \prec \ell (\sigma )$
.
Definition 5.3. If
$\ell (\sigma ^*) = \ell (\sigma )$
(or if
$\sigma ^*$
is an expansionary node, and
$\ell (\sigma ^*) = \top $
) then we say that
$\sigma ^*$
is a main child of
$\sigma $
. Otherwise, if
$\ell (\sigma ^*) \prec \ell (\sigma )$
, then we say that
$\sigma ^*$
is a secondary child of
$\sigma $
.
Each node will have two main children and any finite number (including zero) of secondary children. In Section 6 we will return to the examples of Sections 2 and 4 and explain which children are main children and which are secondary children. One should think of the main children as those where the construction “went smoothly”: they are the nodes where we found e-splittings. They are not diagonalization nodes for any lowness requirement, and are observation nodes for any lowness requirement for which the parent was an observation node. In contrast, the secondary children are those we were forced to keep on the tree by lowness requirements; they are all diagonalization nodes for some requirement.
Using the notation of Definition 5.2, we will often consider
where
$main(\sigma ^*,\sigma )$
is the relation of being a main child. So for example
is the tree consisting of main children of main children of… main children of
$\sigma $
.
Consider the construction by
$\mathcal {M}_e$
of an e-splitting subtree of
$T_{e-1}$
. Recall from Section 4.3 that when we look for e-splits, we do so through only a subtree of
$T_{e-1}$
. Namely, if we are looking for an e-split above
$\tau \in T_{e-1}$
, and if
$\tau $
is an observation node for a requirement
$\mathcal {L}_{d}^\eta $
which guesses that
$\mathcal {M}_e$
has the infinitary outcome, then we should look for that e-split among extensions of
$\tau $
which are also observation nodes for
$\mathcal {L}_{e+d}^\eta $
. Let us translate this into the language of labels.
$T_{e-1}$
itself will be a labeled tree, with scopes
$\operatorname {\mathrm {scope}}_{e-1} \colon T_{e-1} \to \omega $
and labels
$\ell _{e-1} \colon T_{e-1} \to \mathtt {Labels}$
. The nodes on
$T_{e-1}$
which are observation nodes for all requirements
$\mathcal {L}_{d}^\eta $
which guess that
$\mathcal {M}_e$
has the infinitary outcome are the nodes with label
$\ell _{e-1}(\sigma ) = \mathsf {f} \cdots $
or
$\ell _{e-1}(\sigma ) = \top $
; equivalently, we may write
$\ell _{e-1}(\sigma ) \succeq \mathsf {f}$
. Let
be the subtree of
$T_{e-1}$
above
$\tau $
(so that
$\tau $
is the root node of
) such that given
$\sigma $
on
, the children of
$\sigma $
in
are the children
$\sigma ^*$
of
$\sigma $
on
$T_{e-1}$
with
$\ell _{e-1}(\sigma ^*) \succ \mathsf {f}$
. When we look for a splitting extension of
$\tau $
, we look through
.
The input tree
$T_{e-1}$
will have similar properties to those described above.
Definition 5.4. We say that a tree T with labels
$\ell $
and
$\operatorname {\mathrm {scope}}$
is admissible if:
-
(1) Each
$\sigma \in T$ has two main children
$\sigma ^*$ and
$\sigma ^{**}$ with
$\ell (\sigma ^*) \succeq \ell (\sigma )$ and
$\ell (\sigma ^{**}) \succeq \ell (\sigma )$ .
-
(2) If
$\sigma ^*$ is a child of
$\sigma $ , then
$\operatorname {\mathrm {scope}}(\sigma ^*) \geq \operatorname {\mathrm {scope}}(\sigma )$ .
-
(3) For each n, each path through T contains a node
$\sigma $ with
$\ell (\sigma ) = \top $ and
$\operatorname {\mathrm {scope}}(\sigma ) \geq n$ .
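Conditions (1) and (2) of Definition 5.4 are finitary and can be checked node by node on any finite fragment of a labeled tree; condition (3) quantifies over infinite paths and cannot. A hedged sketch of the finitary part, with labels encoded as integer ranks in the label order (the encoding is ours, not the paper's):

```python
def is_admissible_finite(tree, label, scope):
    """Check conditions (1) and (2) of Definition 5.4 on a finite
    labeled tree fragment.

    `tree` maps each node to a pair (main_children, secondary_children);
    `label` maps nodes to an integer rank in the label order (larger =
    higher in the order, so the top label has the largest rank);
    `scope` maps nodes to their scope.  Leaves of the finite fragment
    are skipped, and condition (3) is not checked."""
    for sigma, (mains, secondaries) in tree.items():
        if not mains and not secondaries:
            continue  # a leaf of the finite fragment
        # (1) two main children, each labeled at least the parent's label
        if len(mains) != 2:
            return False
        if any(label[c] < label[sigma] for c in mains):
            return False
        # (2) scopes are non-decreasing from parent to any child
        if any(scope[c] < scope[sigma] for c in mains + secondaries):
            return False
    return True
```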
We must also begin with an admissible tree. As in Section 2.3 let
$T_{-1}$
be the tree consisting of all nodes in
$2^{< \omega }$
of the form

where each
$a_i \in \{0,1\}$
. We put
$\ell _{-1}(\sigma ) = \top $
and
$\operatorname {\mathrm {scope}}_{-1}(\sigma ) = |\sigma |$
for each
$\sigma \in T_{-1}$
. This
$T_{-1}$
is admissible.
6 Three or four requirements revisited
Now that we have described the labeling system formally, let us return to the situations described in Sections 2 and 4 to see what happens with the labels.
6.1 One
$\mathcal {M}$
requirement and one
$\mathcal {L}$
requirement, redux
We begin by revisiting Section 2.4. We have two requirements
$\mathcal {M} = \mathcal {M}_e$
and
$\mathcal {L} = \mathcal {L}_e$
, so that
$\mathcal {M}$
is of higher priority than
$\mathcal {L}$
. Let
$T_{e-1}$
be the tree built by
$\mathcal {M}_{e-1}$
, and assume that we have enough e-splits on
$T_{e-1}$
.
$\mathcal {M}$
is building an e-splitting tree T and
$\mathcal {L}$
is guessing that it is successful.
Suppose that we have determined that
$\gamma $
is on T and we are trying to extend it. Suppose also that
$\operatorname {\mathrm {scope}}_e(\gamma ) = 0$
so that we are only concerned with the single requirement
$\mathcal {L}$
. We are then assigning labels from
$\mathtt {Labels}_1 = \{ \top \succ \varnothing \}$
. Suppose that
$\gamma $
has the label
$\top $
, and is an observation node for
$\mathcal {L}$
. We look for a pair of nodes
$\gamma _1,\gamma _2$
on

that e-split. Since we assumed that these exist, we will find them at some stage s. Now let
$\rho _1,\ldots ,\rho _n$
be the other nodes on
$T_{e-1}[s]$
.
$\mathcal {L}$
stops simulating computations above
$\rho _1,\ldots ,\rho _n$
.

The nodes
$\gamma _1$
and
$\gamma _2$
are the main children of
$\gamma $
and are also labeled
$\top $
, while the
$\rho _i$
are secondary children and are labeled
$\varnothing $
.
Now at the next step we look for
$\gamma _1^*$
and
$\gamma _1^{**}$
on

,
$\gamma _2^*$
and
$\gamma _2^{**}$
on

, and
$\rho _i^*$
and
$\rho _i^{**}$
on

such that all of these pairwise e-split. The nodes
$\gamma _1$
and
$\gamma _2$
also get children
$\tau $
.

Here
$\gamma _1^*$
,
$\gamma _1^{**}$
,
$\gamma _2^*$
, and
$\gamma _2^{**}$
are labeled
$\top $
, and the
$\tau $
,
$\rho _i^*$
, and
$\rho _i^{**}$
are labeled
$\varnothing $
. The main children of
$\gamma _1$
are
$\gamma _1^*$
and
$\gamma _1^{**}$
; the main children of
$\gamma _2$
are
$\gamma _2^*$
and
$\gamma _2^{**}$
; and the main children of
$\rho _i$
are
$\rho _i^*$
and
$\rho _i^{**}$
. The
$\tau $
are secondary children.
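The search for e-splits driving this example can be phrased concretely: two strings e-split if the functional $\Phi_e$ gives defined, unequal answers on some common input. A small sketch of the stage-s search; here `phi` is a stand-in for a use-bounded computation of $\Phi_e^\tau(n)$ returning `None` on divergence, and all names are ours:

```python
from itertools import combinations

def e_split(phi, tau1, tau2, bound):
    """Decide whether tau1 and tau2 e-split within the given bound:
    look for n < bound with phi(tau1, n) and phi(tau2, n) both defined
    (not None) and unequal."""
    for n in range(bound):
        a, b = phi(tau1, n), phi(tau2, n)
        if a is not None and b is not None and a != b:
            return True
    return False

def find_pairwise_split(phi, candidates, bound):
    """Return a pair of candidates that e-split, or None, mirroring the
    stage-s search of the construction (which, on failure, simply moves
    on to stage s+1 with a larger search space)."""
    for t1, t2 in combinations(candidates, 2):
        if e_split(phi, t1, t2, bound):
            return t1, t2
    return None
```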
6.2 One
$\mathcal {M}$
requirement and two
$\mathcal {L}$
requirements, redux
We now revisit Section 4.1. Consider three requirements:
$\mathcal {M}_e$
of highest priority,
$\mathcal {L}_{e}$
of middle priority, and
$\mathcal {L}_{e+1}$
of lowest priority. Because
$\mathcal {L}_{e+1}$
is guessing at the outcome of
$\mathcal {M}_{e+1}$
, and
$\mathcal {M}_e$
does not know this outcome,
$\mathcal {M}_e$
has to consider both instances
$\mathcal {L}_{e+1}^{\mathsf {f}}$
and
$\mathcal {L}_{e+1}^{\infty }$
which guess that
$\mathcal {M}_{e+1}$
has the finitary or infinitary outcome respectively. (We omit from the superscript of the instance the guesses at the outcomes of requirements
$\mathcal {M}_0,\ldots ,\mathcal {M}_e$
.)
Suppose that the requirement
$\mathcal {M}_e$
is trying to extend a node
$\gamma $
and that
$\operatorname {\mathrm {scope}}(\gamma ) = 1$
so that we are only concerned with
$\mathcal {L}_e$
,
$\mathcal {L}_{e+1}^{\mathsf {f}}$
, and
$\mathcal {L}_{e+1}^{\infty }$
. We have

The labels
$\varnothing $
,
$\mathsf {f}$
, and
$\infty $
correspond respectively to
$\mathcal {L}_e$
,
$\mathcal {L}_{e+1}^{\mathsf {f}}$
and
$\mathcal {L}_{e+1}^{\infty }$
. So we have

At the first level, we have:

At the next level, we have:

It is only at the following stage that we get some nodes which are labeled
$\varnothing $
and which are not observation nodes for
$\mathcal {L}_{e+1}^\infty $
.
6.3 Two
$\mathcal {M}$
requirements and one
$\mathcal {L}$
requirement, sandwiched, redux
Finally, we revisit Section 4.5. We consider three requirements:
$\mathcal {M}_e$
of highest priority,
$\mathcal {L}_{e}$
of middle priority, and
$\mathcal {M}_{e+1}$
of lowest priority. Suppose that
$\mathcal {M}_e$
successfully builds an e-splitting tree
$T_e$
, and that
$\mathcal {L}_e$
correctly guesses this. If
$\mathcal {L}_e$
has the infinitary outcome, then we need A to live within the observation nodes for
$\mathcal {L}_e$
. Recall that the way that we ensure that this happens is that
$\mathcal {M}_{e+1}$
will work only within the observation nodes for
$\mathcal {L}_e$
when it builds its
$1$
-splitting tree.
Let
$\eta $
be the guesses by
$\mathcal {L}_e$
at the higher priority requirements. Since
$\mathcal {L}_e$
is the highest priority requirement after
$\mathcal {M}_e$
, we have
$\Delta _{> e}(\eta ) = \varnothing $
. Thus the nodes labeled
$\varnothing $
are those that might be diagonalization nodes for
$\mathcal {L}_e$
, and those with labels
$\succ \varnothing $
are observation nodes for
$\mathcal {L}_e$
. Thus if
$\rho $
is the node at which we want to start building
$T_{e+1}$
, then we want to build
$T_{e+1}$
as a subtree of
.
6.4 A new issue
There is one more issue which we have glossed over previously (and so some of the previous examples are somewhat misleading without keeping this issue in mind). Consider the requirements
$\mathcal {M}_{e-1}$
, with corresponding tree
$T_{e-1}$
, and
$\mathcal {M}_e$
, which is building an e-splitting subtree of
$T_{e-1}$
. Suppose that
$\mathcal {M}_e$
has the infinitary outcome. Let
$\mathcal {L}^\nu $
be an instance of a requirement of lower priority than
$\mathcal {M}_e$
. The issue is that the labels on
$T_{e-1}$
might affect the labels on
$T_e$
: essentially, if we determine that a node on
$T_{e-1}$
is not an observation node for
$\mathcal {L}^\nu $
, then this means that
$\mathcal {L}^\nu $
stops simulating computations above that node, and so no node extending that node should be an observation node on
$T_{e}$
.
Let
$\eta = \Delta _{> e}(\nu )$
. Assume that
$\mathcal {L}$
guesses correctly that
$\mathcal {M}_e$
has the infinitary outcome, so that
$\Delta _{> e-1}(\nu ) = \infty \eta $
.
Suppose that we have determined on
$T_e$
that
$\sigma ^*$
should be a child of
$\sigma $
, with
$\sigma $
an observation node for
$\mathcal {L}$
on both
$T_{e-1}$
and
$T_e$
. On
$T_{e-1}$
, we might have a sequence of children
$\sigma = \sigma _0,\sigma _1,\ldots ,\sigma _n = \sigma ^*$
. If any of these is not an observation node for
$\mathcal {L}^\nu $
on
$T_{e-1}$
—that is, if
$\ell _{e-1}(\sigma _i) \nsucc \infty \eta $
—then
$\sigma ^*$
should not be an observation node for
$\mathcal {L}^\nu $
on
$T_e$
—that is, we should have
$\ell _e(\sigma ^*) \nsucc \eta $
.
7 Construction
7.1 Procedure for constructing splitting trees
We are now ready to describe the procedure for constructing splitting trees. Given the tree
$T_{e-1}$
constructed by
$\mathcal {M}_{e-1}$
, we will describe the (attempted) construction by
$\mathcal {M}_{e}$
of an e-splitting subtree T. This construction will be successful if there are enough e-splittings in
$T_{e-1}$
. The construction will happen above some root node
$\rho $
, which is determined by
$T_{e-1}$
together with the outcomes of the requirements
$\mathcal {L}_{e-1}$
and
$\mathcal {P}_{e-1}$
. For example, if
$\mathcal {L}_{e-1}$
has the finitary outcome, then
$\rho $
will extend this.
The construction will be given by a procedure Procedure(e,
$\rho $
,
$T_{e-1}$
) for building an e-splitting tree T with root
$\rho $
in
$T_{e-1}$
. We write
$T[n]$
for the tree up to and including the nth level.
Procedure(e,
$\rho $
,
$T_{e-1}$
):
Input: A value
$e \geq 0$
, an admissible labeled tree
$T_{e-1}$
with labels
$\ell _{e-1}(\cdot )$
and scopes
$\operatorname {\mathrm {scope}}_{e-1}(\cdot )$
, and a node
$\rho $
on
$T_{e-1}$
with
$\ell _{e-1}(\rho ) = \top $
.
Output: A possibly partial labeled e-splitting tree T, built stage-by-stage.
Construction. To begin, the root node of T is
$\rho $
with
$\operatorname {\mathrm {scope}}(\rho ) = 1$
and
$\ell (\rho ) = \top $
. This is the zeroth level of the tree,
$T[0]$
. At each stage of the construction, if we have so far built T up to the nth level
$T[n]$
, we try to add an additional (
$n+1$
)st level.
At stage s, suppose that we have defined the tree up to and including level n, and the last expansionary level
$\leq n$
was
$e_t$
. Look for a length l such that for each leaf
$\sigma $
of
$T[n]$
, there is an extension
$\sigma '$
of
$\sigma $
with
with
$\ell _{e-1}(\sigma ') = \top $
, and there are extensions
$\sigma ^*$
and
$\sigma ^{**}$
of
$\sigma $
on
, such that
$\sigma ^*$
and
$\sigma ^{**}$
are of length l, and such that all of these extensions pairwise e-split, i.e., for each pair of leaves
$\sigma ,\tau $
of
$T[n]$
, these extensions
$\sigma ^*$
,
$\sigma ^{**}$
,
$\tau ^*$
, and
$\tau ^{**}$
all e-split with each other. (At stage s, we look among the first s-many extensions of these leaves, and we run computations looking for e-splits up to stage s. If we do not find such extensions, move on to stage
$s+1$
.)
If we do find such extensions, we will define
$T[n+1]$
as follows. To begin, we must wait for
$T_{e-1}[s]$
to be defined. In the meantime, we designate each
$\sigma $
as waiting with main children
$\sigma ^*$
and
$\sigma ^{**}$
.
While waiting, we still count through stages of the construction, so that after we resume, the next stage of the construction will not be stage
$s+1$
but some other stage
$t> s$
depending on how long we wait. Once
$T_{e-1}[s]$
has been defined, for each leaf
$\sigma $
of
$T[n]$
, the children of
$\sigma $
in
$T[n+1]$
will be:
-
•
$\sigma ^*$ , with:
-
– If no predecessor of
$\sigma $ on T, including
$\sigma $ itself, is a tth expansionary node, set
$\operatorname {\mathrm {scope}}({\sigma }^*) = \operatorname {\mathrm {scope}}(\sigma )+1$ and
$\ell ({\sigma }^*) = \top $ (so that
$\sigma ^*$ is a tth expansionary node).
-
– Otherwise, set
$\operatorname {\mathrm {scope}}({\sigma }^*) = \operatorname {\mathrm {scope}}(\sigma )$ and
$\ell ({\sigma }^*) = \ell (\sigma )$ .
-
-
•
$\sigma ^{**}$ , with:
-
– If no predecessor of
$\sigma $ on T, including
$\sigma $ itself, is a tth expansionary node, set
$\operatorname {\mathrm {scope}}({\sigma }^{**}) = \operatorname {\mathrm {scope}}(\sigma )+1$ and
$\ell ({\sigma }^{**}) = \top $ (so that
$\sigma ^{**}$ is a tth expansionary node).
-
– Otherwise, set
$\operatorname {\mathrm {scope}}({\sigma }^{**}) = \operatorname {\mathrm {scope}}(\sigma )$ and
$\ell ({\sigma }^{**}) = \ell (\sigma )$ .
-
-
• If
$\ell (\sigma ) \succ \varnothing $ , each other maximal extension
$\sigma ^\dagger $ of
$\sigma $ on
which is incompatible with
$\sigma ^*$ and
$\sigma ^{**}$ will be a child of
$\sigma $ on T. Put
$\operatorname {\mathrm {scope}}(\sigma ^\dagger ) = \operatorname {\mathrm {scope}}(\sigma )$ . Define
$\ell (\sigma ^\dagger )$ as follows. Let
$n = \operatorname {\mathrm {scope}}(\sigma )$ . Let
$\eta \in \mathtt {Labels}_{n}$ be greatest such that
. Then:
-
– If
$\eta $ is
$\top $ or begins with
$\mathsf {f}$ , then let
$\ell (\sigma ^\dagger ) = \operatorname {\mathrm {pred}}_{n}(\ell (\sigma ))$ .
-
– If
$\eta $ begins with
$\infty $ , say
$\eta = \infty \eta ^*$ , then
$\ell (\sigma ^\dagger )$ will be the minimum, in
$\mathtt {Labels}_{n}$ , of
$\operatorname {\mathrm {pred}}_{n}(\ell (\sigma ))$ and
$\eta ^*$ .
Note that
$\operatorname {\mathrm {pred}}_n(\ell (\sigma ))$ exists because
$\ell (\sigma ) \succ \varnothing $ . (Recall that
$\operatorname {\mathrm {pred}}_n$ is the predecessor relation on labels of length at most n.)
-
The children
${\sigma }^*$
and
${\sigma }^{**}$
are the main children of
$\sigma $
, and the
$\sigma ^\dagger $
, if they exist, are secondary children. This ends the construction at stage s.
End construction.
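The bookkeeping above repeatedly takes $\operatorname{pred}_n(\ell(\sigma))$ and minima in $\mathtt{Labels}_n$. The inventory and order of $\mathtt{Labels}_n$ are defined in an earlier section not shown here; the sketch below assumes labels are strings over $\{\mathsf f, \infty\}$ of length at most n together with $\top$, linearly ordered with $\top$ greatest, $\varnothing$ least, $\mathsf f$ above $\infty$ at the first difference, and an extension above its prefix. That assumption is ours, reconstructed from the comparisons used in this section (e.g., $\infty\eta \prec \mathsf f$, and $\ell \succeq \mathsf f$ iff $\ell$ is $\top$ or begins with $\mathsf f$); the encoding uses `'f'` for $\mathsf f$, `'i'` for $\infty$, `'T'` for $\top$, and `''` for $\varnothing$.

```python
from functools import cmp_to_key

TOP = "T"          # the label ⊤ (our encoding)
F, INF = "f", "i"  # the symbols f and ∞

def labels(n):
    """All strings over {f, ∞} of length at most n, plus ⊤
    (the assumed inventory of Labels_n)."""
    out, frontier = [""], [""]
    for _ in range(n):
        frontier = [s + c for s in frontier for c in (F, INF)]
        out += frontier
    return out + [TOP]

def cmp(a, b):
    """Assumed linear order: ⊤ greatest, ∅ least, f above ∞ at the
    first difference, and a proper extension above its prefix."""
    if a == b:
        return 0
    if a == TOP:
        return 1
    if b == TOP:
        return -1
    for x, y in zip(a, b):
        if x != y:
            return 1 if x == F else -1
    return 1 if len(a) > len(b) else -1

def pred(n, label):
    """Predecessor pred_n of `label` among labels of length <= n,
    or None if `label` is the least element ∅."""
    ordered = sorted(labels(n), key=cmp_to_key(cmp))
    i = ordered.index(label)
    return ordered[i - 1] if i > 0 else None
```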
We say that the procedure is successful if it never gets stuck, and constructs the nth level of the tree T for every n. The next lemma is the formal statement that if
$T_{e-1}$
has enough e-splits, then the procedure is successful.
Lemma 7.1. Fix e, an admissible labeled tree
$T_{e-1}$
, and
$\rho \in T_{e-1}$
with
$\ell _{e-1}(\rho ) = \top $
. Suppose that for all
with
$\ell _{e-1}(\sigma ) = \top $
,
-
• for all n, there is
such that
$\Phi _e^\tau (n) \downarrow $ and
-
• there are n and
such that
$$ \begin{align*} \Phi_e^{\tau_1} (n) \neq \Phi_e^{\tau_2}(n).\end{align*} $$
Then Procedure(e,
$\rho $
,
$T_{e-1}$
) is successful.
As part of proving this lemma, we will use the following remark, which follows easily from the construction:
Remark 7.2. Every node on T is a node on
.
Proof of Lemma 7.1. If we have built T up to level n, and
$T[n]$
has leaves
$\sigma _1,\ldots ,\sigma _k$
, then as
$T_{e-1}$
is admissible, for each i there are
$\sigma _i'$
on
with
$\ell (\sigma _i') = \top $
. By the remark, each
. Then, using the assumption of the lemma and standard arguments, there are
$\sigma _i^{*},\sigma _i^{**}$
on
such that all of the
$\sigma _i^*$
and
$\sigma _i^{**}$
pairwise e-split. For sufficiently large stages s, we will find these extensions.⊣
The remaining lemmas of this section give properties of the tree constructed by the procedure. The next few lemmas show that the tree T has expansionary levels and is e-splitting. As a result, we will see that T is admissible.
Lemma 7.3. Suppose that Procedure(e,
$\rho $
,
$T_{e-1}$
) successfully constructs T. For each
$\sigma ^* \in T[e_{n+1}]$
, there is a predecessor
$\sigma $
of
$\sigma ^*$
which is n-expansionary.
Proof. Let
$\sigma _0 \in T[e_{n}]$
be the predecessor of
$\sigma ^*$
at the nth expansionary level, and let
$\sigma _0,\sigma _1,\sigma _2,\ldots ,\sigma _k = \sigma ^*$
be the sequence of predecessors of
$\sigma ^*$
between
$\sigma _0$
and
$\sigma ^*$
. If any
$\sigma _{i+1}$
were a main child of
$\sigma _i$
, then either
$\sigma _{i+1}$
would be n-expansionary, or
$\sigma _i$
or one of its predecessors would be n-expansionary as desired. If none of these are expansionary, then we must have

with all of these in
$\mathtt {Labels}_{n-1}$
. Since
$e_{n+1} - e_n> 2^{n+1} > |\mathtt {Labels}_{n-1}|$
, this cannot be the case, and so some predecessor of
$\sigma ^*$
must be expansionary.⊣
The following lemma is easy to see by inspecting the construction.
Lemma 7.4. Suppose that Procedure(e,
$\rho $
,
$T_{e-1}$
) successfully constructs T. Given distinct leaves
$\sigma $
and
$\tau $
of
$T[n]$
, and
$\sigma ^*,\tau ^* \in T[n+1]$
which are children of
$\sigma $
and
$\tau $
respectively, either
$:$
-
•
$\sigma ^*$ and
$\tau ^*$ are main children of
$\sigma $ and
$\tau $ respectively, and
$\sigma ^*$ and
$\tau ^* \ e$ -split,
-
•
$\sigma ^*$ is a secondary child of
$\sigma $ , and
$\operatorname {\mathrm {scope}}(\sigma ^*) = \operatorname {\mathrm {scope}}(\sigma )$ and
$\ell (\sigma ^*) \prec \ell (\sigma )$ , or
-
•
$\tau ^*$ is a secondary child of
$\tau $ , and
$\operatorname {\mathrm {scope}}(\tau ^*) = \operatorname {\mathrm {scope}}(\tau )$ and
$\ell (\tau ^*) \prec \ell (\tau )$ .
Lemma 7.5. Suppose that Procedure(e,
$\rho $
,
$T_{e-1}$
) successfully constructs T. Given distinct
$\sigma $
and
$\tau $
in T at the nth expansionary level of the tree, and
$\sigma ^*,\tau ^*$
which are extensions of
$\sigma $
and
$\tau $
respectively at the
$(n+1)$
st expansionary level of the tree,
$\sigma ^*$
and
$\tau ^*$
are e-splitting.
Proof. Let
$\sigma _0 = \sigma ,\sigma _1,\sigma _2,\ldots ,\sigma _k = \sigma ^*$
be the sequence of predecessors of
$\sigma ^*$
between
$\sigma $
and
$\sigma ^*$
, and similarly for
$\tau _0 = \tau ,\tau _1,\tau _2,\ldots ,\tau _k = \tau ^*$
. Since
$\sigma _0$
and
$\tau _0$
are at the level
$e_n$
, and
$\sigma ^*$
and
$\tau ^*$
are at the level
$e_{n+1}$
, we have
$k \geq 2^{n+5}$
. If, for any i, both
$\sigma _{i+1}$
and
$\tau _{i+1}$
are main children of
$\sigma _i$
and
$\tau _i$
, then by Lemma 7.4,
$\sigma _{i+1}$
and
$\tau _{i+1}$
are e-splitting. We argue that this must happen for some
$i < k$
.
For each i, either (a)
$\sigma _{i+1}$
is n-expansionary and
$\ell (\sigma _{i+1}) = \top $
, or
$\operatorname {\mathrm {scope}}(\sigma _{i+1}) = \operatorname {\mathrm {scope}}(\sigma _i)$
and either (b)
$\ell (\sigma _{i+1}) \prec \ell (\sigma _{i})$
, or (c)
$\ell (\sigma _{i+1}) = \ell (\sigma _{i})$
. There is at most one i for which (a) is the case, and there are at most
$|\mathtt {Labels}_{n}|+|\mathtt {Labels}_{n+1}| \leq 2^{n+3}$
values of i for which (b) is the case. The same is true for the
$\tau _i$
. So, as
$k \geq 2^{n+5}$
, there must be some i for which neither (a) nor (b) is the case for either the
$\sigma _i$
or the
$\tau _i$
. For this i, we have both
$\sigma _{i+1}$
and
$\tau _{i+1}$
are main children of
$\sigma _i$
and
$\tau _i$
respectively, and so
$\sigma _{i+1}$
and
$\tau _{i+1}$
are e-splitting. Thus
$\sigma ^*$
and
$\tau ^*$
are e-splitting.⊣
Lemma 7.6. Suppose that Procedure(e,
$\rho $
,
$T_{e-1}$
) successfully constructs T. T is an e-splitting tree
$:$
any two paths in T are e-splitting.
Proof. Choose
$\sigma _1$
and
$\sigma _2$
to be initial segments of the two paths at the nth expansionary level
$T[e_n]$
, with n large enough that these initial segments are distinct. Let
$\tau _1,\tau _2$
be the longer initial segments of the paths at the (
$n+1$
)st expansionary level
$T[e_{n+1}]$
. Then by the previous lemma,
$\tau _1$
and
$\tau _2$
are e-splitting, and so the two paths are e-splitting.⊣
The next lemmas relate the labels of T to the labels on
$T_{e-1}$
. If a node is labeled on
$T_{e-1}$
so that it is not an observation node for some lowness requirement
$\mathcal {L}$
, then it should also be labeled on T to not be an observation node for
$\mathcal {L}$
. (The converse is not necessary; T might determine that some node is not an observation node for
$\mathcal {L}$
even if that was not determined by
$T_{e-1}$
.) See Section 6.4. Translating this into labels, we get the following lemmas:
Lemma 7.7. Suppose that Procedure(e,
$\rho $
,
$T_{e-1}$
) successfully constructs T. Given
$\sigma \in T$
, with
$\ell _{e-1}(\sigma ) = \top $
, and
we have
. In particular, if
$\ell _e(\sigma ^*) \succ \eta $
, then
$\ell _{e-1}(\sigma ^*) \succ \infty \eta $
.
Proof. It suffices to prove the lemma when
$\sigma ^*$
is a child of
$\sigma $
on T, and
$\ell (\sigma ^*) \succ \eta $
. We have two cases, depending on whether
$\sigma ^*$
is a main child or secondary child of
$\sigma $
on T.
-
• If
$\sigma ^*$ is a main child of
$\sigma $ on T, then
and
, and
$\ell _{e-1}(\sigma ') = \top $ . Since
$\ell _{e-1}(\sigma ) = \top $ , for every
$\tau $ on
$T_{e-1}$ between
$\sigma $ and
$\sigma '$ we have
$\ell _{e-1}(\tau ) = \top $ . Thus
$\ell _{e-1}(\sigma ') = \top $ .
Now for every
$\tau $ on
$T_{e-1}$ between
$\sigma '$ and
$\sigma ^*$ , we have
$\ell _{e-1}(\tau ) \succeq \mathsf {f} \succ \infty \eta $ . So
$\ell _{e-1}(\sigma ^*) \succ \infty \eta $ .
-
• If
$\sigma ^*$ is a secondary child of
$\sigma $ , let
$n = \operatorname {\mathrm {scope}}(\sigma )$ and let
$\nu \in \mathtt {Labels}_{n}$ be least such that
. If
$\nu $ is
$\top $ or begins with
$\mathsf {f}$ , then
$\nu \succ \infty \eta $ and so
. Otherwise, if
$\nu $ begins with
$\infty $ , say
$\nu = \infty \nu ^*$ , then
$\ell (\sigma ^*) \succ \eta $ is the minimum, in
$\mathtt {Labels}_{n}$ , of
$\operatorname {\mathrm {pred}}_{n}(\ell (\sigma ))$ and
$\nu ^*$ . Thus
$\nu ^* \succ \eta $ , which means that
$\infty \nu ^* \succ \infty \eta $ , and so
.
This proves the lemma.⊣
Similarly, we can prove the same lemma but replacing
$\succ $
with
$\succeq $
. We have:
Lemma 7.8. Suppose that Procedure(e,
$\rho $
,
$T_{e-1}$
) successfully constructs T. Given
$\sigma \in T$
, with
$\ell _{e-1}(\sigma ) = \top $
, if
$:$
-
•
, we have
. In particular, if
$\ell _e(\sigma ^*) \succeq \eta $ , then
$\ell _{e-1}(\sigma ^*) \succeq \infty \eta $ .
-
•
, we have
. In particular, if
$\ell _e(\sigma ^*) = \top $ , then
$\ell _{e-1}(\sigma ^*) = \top $ .
Finally, putting together results from all of these lemmas, we have:
Lemma 7.9. Suppose that Procedure(e,
$\rho $
,
$T_{e-1}$
) successfully constructs T. Then T is an admissible tree.
7.2 Construction of
$A$
and the true path
We simultaneously define A, the trees
$T_e$
satisfying
$\mathcal {M}_e$
, and a true path of outcomes
$\pi $
by finite extension. This construction is non-uniform, and is analogous to choosing a generic in a forcing construction.
Begin with
$A_{-1} = \varnothing $
and
$\pi _{-1} = \varnothing $
. Recall that, as in Section 2.3, we let
$T_{-1}$
be the tree consisting of all nodes in
$2^{< \omega }$
of the form

where each
$a_i \in \{0,1\}$
. We put
$\ell _{-1}(\sigma ) = \top $
and
$\operatorname {\mathrm {scope}}_{-1}(\sigma ) = |\sigma |$
for each
$\sigma \in T_{-1}$
. This
$T_{-1}$
is admissible.
Suppose that we have so far defined
$A_s \prec A$
,
$T_s$
, and
$\pi _s \prec \pi $
, with
$|\pi _s| = s+1$
. To define
$A_{s+1}$
and
$\pi _{s+1}$
we first ask the next requirement what its outcome is, and then define the extensions appropriately.
-
s+1 = 3e: Consider
$\mathcal {M}_e^{\pi _s}$ . If Procedure(e,
$A_s$ ,
$T_{e-1}$ ) successfully constructs a tree T, let
$T_e$ be this tree T, and set
$\pi _{s+1} = \pi _s \textrm {\^{}} \infty $ and
$A_{s+1} = A_s$ . Otherwise, by Lemma 7.1, there is
with
$\ell _{e-1}(\sigma ) = \top $ and
$\operatorname {\mathrm {scope}}_{e-1}(\sigma ) \geq 1$ such that either:
-
(1) there is n such that for all
,
$\Phi _e^\tau (n) \uparrow $ or
-
(2) for all
and n,
$$ \begin{align*} \Phi_e^{\tau_1}(n) \downarrow \ \wedge \ \Phi_e^{\tau_2}(n) \downarrow \quad \longrightarrow \quad \Phi_e^{\tau_1}(n) = \Phi_e^{\tau_2}(n).\end{align*} $$
In either case, let
$\pi _{s+1} = \pi _s \textrm {\^{}} \sigma $ and
$A_{s+1} = \sigma \succeq A_s$ . Let
$T_e$ be the tree
. The labels
$\ell _e$ and scopes of the tree
$T_e$ are defined by setting
$\operatorname {\mathrm {scope}}_e(\tau ) = \operatorname {\mathrm {scope}}_{e-1}(\tau )-1$ and:
-
•
$\ell _e(\tau ) = \top $ if
$\ell _{e-1}(\tau ) = \top $ or
-
•
$\ell _e(\tau ) = \eta $ if
$\ell _{e-1}(\tau ) = \mathsf {f}\eta $ .
-
-
s+1 = 3e+1: Consider
$\mathcal {L}_e^{\pi _s}$ with
$e = \langle e_1,e_2 \rangle $ . If there is
$\sigma \in T_e$ and n such that
$\Psi _{e_1}^{\sigma }(n) \neq R_{e_2}(n)$ , then let
$A_{s+1} = \sigma $ and
$\pi _{s+1} = \pi _s \textrm {\^{}} \sigma $ . We may choose
$\sigma $ to have
$\ell _e(\sigma ) = \top $ , as (by Lemma 7.10 below)
$T_e$ is admissible. Otherwise, if there is no such
$\sigma $ , let
$A_{s+1} = A_s$ and
$\pi _{s+1} = \pi _s \textrm {\^{}} \infty $ .
-
s+1 = 3e+2: Consider
$\mathcal {P}_e^{\pi _s}$ . If
$A_s = \sigma \in T_e$ , let
$\tau _1$ and
$\tau _2$ be the two main children of
$\sigma $ on
$T_e$ . Choose
$A_{s+1} \succ \sigma $ to be whichever of
$\tau _1,\tau _2$ is not an initial segment of the eth c.e. set
$W_e$ .
We define
$A = \bigcup _s A_s$
and the true sequence of outcomes
$\pi = \bigcup _s \pi _s$
. We denote by
$\pi _{\mathcal {R}}$
the true outcome up to and including the requirement
$\mathcal {R}$
; for example,
$\pi _{\mathcal {M}_e} = \pi _{3e}$
; and similarly for
$A_{\mathcal {R}}$
.
In the following lemma, we prove that along the true path, the trees that we construct are total and admissible.
Lemma 7.10. For each e,
$T_e$
is an admissible labeled tree.
Proof. We argue by induction on e.
$T_{-1}$
is an admissible tree. Given
$T_e$
total and admissible, if
$\mathcal {M}_{e+1}$
has the infinitary outcome then
$T_{e+1}$
is defined from
$T_e$
using the Procedure, which is successful, and hence by Lemma 7.9 is admissible.
So suppose that
$\mathcal {M}_{e+1}$
has the finitary outcome, and
$T_{e+1}$
is the tree
for some
with
$\ell _{e}(\rho ) = \top $
. We must argue that
$T_{e+1}$
is admissible:
-
(1) Each
$\sigma \in T_{e+1}$ has two main children
$\sigma ^*$ and
$\sigma ^{**}$ , namely the same two main children of
$\sigma $ in
$T_e$ . We have
$\ell _{e+1}(\sigma ^*) \succeq \ell _{e+1}(\sigma )$ . There are two cases to check: if
$\ell _{e}(\sigma ^*) = \mathsf {f}\ell _{e+1}(\sigma ^*)$ , then
$\ell _{e}(\sigma ) = \mathsf {f}\ell _{e+1}(\sigma )$ and from
$ \ell _{e}(\sigma ^*) \succeq \ell _{e}(\sigma )$ we conclude that
$\ell _{e+1}(\sigma ^*) \succeq \ell _{e+1}(\sigma )$ ; and otherwise,
$\ell _{e}(\sigma ^*) = \top $ and so
$\ell _{e+1}(\sigma ^*) \succeq \ell _{e+1}(\sigma )$ . Similarly for
$\sigma ^{**}$ we have
$\ell _{e+1}(\sigma ^{**}) \succeq \ell _{e+1}(\sigma )$ .
-
(2) If
$\sigma ^*$ is a child of
$\sigma $ on
$T_{e+1}$ , then
$\sigma ^*$ is a child of
$\sigma $ on
$T_e$ and
$\operatorname {\mathrm {scope}}_{e+1}(\sigma ^*) = \operatorname {\mathrm {scope}}_{e}(\sigma ^*) - 1 \geq \operatorname {\mathrm {scope}}_{e}(\sigma ) - 1 = \operatorname {\mathrm {scope}}_{e+1}(\sigma )$ .
-
(3) For each n, each path through
$T_{e+1}$ is also a path through
$T_e$ , and hence contains a node
$\sigma $ with
$\ell _{e+1}(\sigma ) = \ell _e(\sigma ) = \top $ and
$\operatorname {\mathrm {scope}}_{e+1}(\sigma ) = \operatorname {\mathrm {scope}}_{e}(\sigma ) - 1 \geq n$ .⊣
8 Verification
In this section, we check that the A constructed above is non-computable, of minimal degree, and low for speed.
8.1 Non-computable
We chose the initial segment
$A_{\mathcal {P}_e} = A_{3e+2}$
of A such that it differs from the eth c.e. set. Thus A is not computable. (These requirements also ensure that A is extended to an infinite string.)
8.2 Minimal degree
We show that A is of minimal degree by showing that it lies on the trees
$T_e$
which are either e-splitting or force
$\Phi _e^A$
to be partial or computable.
Lemma 8.1. For all e,
$A \in [T_e]$
.
Proof. Fix e. Then
$A_{3e} = A_{\mathcal {M}_e}$
is the root of
$T_e$
. We choose
$A_{3e} \preceq A_{3e+1} \preceq A_{3e+2}$
in
$T_e$
. Then for each
$e' \geq e$
,
$A_{e'} = A_{\mathcal {M}_{e'}} \in T_{e'}$
which is a subtree of
$T_e$
. So
$A \in [T_e]$
.⊣
Lemma 8.2. A is of minimal degree.
Proof.
A is non-computable. We must show that A is minimal. Suppose that
$\Phi _e^A$
is total. If the outcome of
$\mathcal {M}_e$
is
$\infty $
, then A lies on the e-splitting tree
$T_e = T$
produced by Procedure(e,
$A_{\mathcal {P}_{e-1}}$
,
$T_{e-1}$
) and hence
$\Phi _e^A \geq _T A$
. If the outcome of
$\mathcal {M}_e$
is
$\sigma $
, then A lies on
and (since
$\Phi _e^A$
is total) for all
and for all n,

Thus
$\Phi _e^A$
is computable.⊣
8.3 Low-for-speed
Our final task is to show that A is low-for-speed. Fix a lowness requirement
$\mathcal {L}_e$
. Define
$\eta = \Delta (\pi _{\mathcal {L}}) \in \{\mathsf {f},\infty \}^{e+1}$
;
$\eta $
is the sequence of guesses,
$\mathsf {f}$
or
$\infty $
, at the outcomes of
$\mathcal {M}_0,\ldots ,\mathcal {M}_e$
. Write
$\eta _{> i}$
for the final segment
$\langle \eta (i+1),\ldots ,\eta (e) \rangle $
of
$\eta $
, the guesses at
$\mathcal {M}_{i+1},\ldots ,\mathcal {M}_e$
.
In checking that
$\mathcal {L}_e$
is satisfied, we will refer only to the trees
$T_{-1},\ldots ,T_e$
. Write
$\operatorname {\mathrm {scope}}_i$
and
$\ell _{i}$
for the scope and labeling function on
$T_i$
.
By stage s of the construction of
$T_i$
we mean:
-
(1) for
$i = -1$ , the strings of length
$\leq s$ in
$T_{-1}$ ;
-
(2) if the outcome of
$\mathcal {M}_i$ was infinitary, stage s of the Procedure constructing
$T_i$ ; and
-
(3) if the outcome of
$\mathcal {M}_i$ was finitary, that part of
$T_i$ which is determined by stage s of the construction of
$T_{i-1}$ .
One can check that the sth stage of Procedure takes time polynomial in s. The particular polynomial will depend on the parameters for Procedure. In checking this, it is important to note that if the leaves of T have been designated waiting, then we charge the time required to wait for
$T_{e-1}[s]$
to be defined to stages
$s+1,s+2,\ldots $
. Also, because all of these trees are subtrees of
$T_{-1}$
, each tree contains only polynomially many (in s) elements of length (as a binary string) at most s.
Now we will define the simulation procedure. Let
$\rho _1,\ldots ,\rho _k$
be incomparable nodes on
$T_e$
such that (a) every path on
$T_e$
passes through some
$\rho _i$
, (b) each
$\rho _i$
has
$\ell _e(\rho _i) = \top $
, and (c) for each i and each
$e' \leq e$
,
$\operatorname {\mathrm {scope}}_{e'}(\rho _i) \geq e-e'$
. We can find such
$\rho _i$
because
$T_e$
is admissible. (Think of the
$\rho _i$
as an open cover of
$T_e$
by nodes whose scope, in every
$T_{e'}$
(
$e' \leq e$
), includes
$\mathcal {L}_e$
.) For each i, we define a simulation
$\Xi _{e,i}$
which works for extensions of
$\rho _i$
. As A extends some
$\rho _i$
, one of these simulations will work for A. Fix i, for which we define the simulation
$\Xi = \Xi _{e,i}$
:
Simulation
$\Xi = \Xi _{e,i}$
: Begin at stage
$0$
with
$\Xi (x) \uparrow $
for all x. At stage s of the simulation, for each
$\sigma \in T_{-1}$
with
$|\sigma | < s$
and
$\sigma \succeq \rho _i$
, check whether, for each
$e' \leq e$
, if
$T_{e'}[n]$
is the greatest level of the tree
$T_{e'}$
defined by stage s of the construction of the trees T, then either:
-
(S1)
$\sigma $ is on
$T_{e'}[n]$ and
$\ell _{e'}(\sigma ) \succeq \eta _{>e'}$
, or
-
(S2)
$\sigma $ extends a leaf
$\sigma '$ of
$T_{e'}[n]$ , with
$\ell _{e'}(\sigma ') \succeq \eta _{>e'}$
, and:
-
(*) if
$T_{e'}$ is defined using Procedure (i.e.,
$\mathcal {M}_{e'}$ has the infinitary outcome),
$$ \begin{align*} \operatorname{\mathrm{pred}}_{\operatorname{\mathrm{scope}}_{e'}(\sigma')}(\ell_{e'}(\sigma')) = \eta_{> e'},\end{align*} $$
and
$\sigma '$ has at stage s been designated waiting with main children
$\sigma ^*$ and
$\sigma ^{**}$ , then
$\sigma $ extends or is extended by either
$\sigma ^*$ or
$\sigma ^{**}$ .Footnote 8
-
If for some
$\sigma $
this is true for all
$e' \leq e$
, then for any
$k < s$
with
$\Psi ^{\sigma }_s(k) \downarrow $
, set
$\Xi (k) = \Psi ^{\sigma }_s(k)$
if it is not already defined.
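The stage-by-stage shape of this procedure can be sketched in code. This is only an abstract skeleton under our own naming: the trees, the (S1)/(S2) checks, and the oracle computations
$\Psi ^{\sigma }_s(k)$
are abstracted into the hypothetical callables `candidates`, `conditions_hold`, and `psi`:

```python
def run_simulation(stages, candidates, conditions_hold, psi):
    """Sketch of a simulation Xi. At each stage s we scan the candidate
    nodes sigma (abstracting the sigma in T_{-1} with |sigma| < s that
    extend rho_i), keep those passing the (S1)/(S2)-style checks, and
    record any newly converging values, never overwriting earlier ones."""
    xi = {}  # the partial function Xi, built stage by stage
    for s in range(stages):
        for sigma in candidates(s):
            if not conditions_hold(sigma, s):
                continue
            for k in range(s):
                v = psi(sigma, k, s)  # Psi_s^sigma(k); None if divergent
                if v is not None and k not in xi:
                    xi[k] = v  # set Xi(k) only if not already defined
    return xi
```

Because an earlier value is never overwritten, Xi may disagree with
$\Psi ^A$
on arguments where some other node converged first, exactly as discussed before Lemma 8.5.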
Remark 8.3. Stage s of the simulation can be computed in time polynomial in s. (The polynomial may depend on e.) This is because there are polynomially many in s nodes
$\sigma \in T_{-1}$
with
$|\sigma | < s$
.
We can now make a more formal definition of an observation node and a diagonalization node for
$\mathcal {L}^\eta _e$
on
$T_{e'}$
(
$e' \leq e$
) above
$\rho _i$
, though this terminology will only be used for explaining the proofs and not in the proofs themselves. A node
$\sigma $
extending
$\rho _i$
is an observation node for
$\mathcal {L}^\eta _e$
above
$\rho _i$
on
$T_{e'}$
if
$\ell _{e'}(\sigma ) \succ \eta _{>e'}$
. A node
$\sigma $
extending
$\rho _i$
is a diagonalization node for
$\mathcal {L}^\eta _e$
above
$\rho _i$
on
$T_{e'}$
if the parent of
$\sigma $
is an observation node for
$\mathcal {L}^\eta _e$
on
$T_{e'}$
and if
$\ell (\sigma ) = \eta _{>e'}$
.
The next series of lemmas are proved in the context above of a fixed e, with
$\rho _1,\ldots ,\rho _k$
. If
$e = \langle e_1,e_2 \rangle $
, we write
$\Psi $
for
$\Psi _{e_1}$
. Fix j such that A extends
$\rho _j$
, and write
$\Xi = \Xi _{e,j}$
.
If
$\mathcal {L}_e$
has outcome
$\infty $
, then we need the initial segments of A to be observation nodes for
$\mathcal {L}_e$
, so that if
$\Psi ^A(x) \downarrow $
then we set
$\Xi (x) = \Psi ^A(x)$
if it was not already defined. First we prove that the initial segments of A have the right labels, i.e., that they are observation nodes, and second that these observation nodes actually get simulated by the simulation procedure.
Lemma 8.4. If
$\pi (\mathcal {L}_e) = \infty $
, then for each
$e' \leq e$
, every sufficiently long initial segment of A on
$T_{e'}$
is an observation node for
$\mathcal {L}_e$
above
$\rho _i$
on
$T_{e'}$
, for some i.
Proof. Let
$\sigma = A_{\mathcal {M}_e}$
be the root of
$T_e$
. Since
$\pi (\mathcal {L}_e) = \infty $
,
$A_{\mathcal {L}_e} = A_{\mathcal {M}_e}$
. Let
$\sigma ^* = \xi (\mathcal {P}_{e}) = A_{\mathcal {P}_e}$
. Then, by construction,
$\sigma ^*$
is a main child of
$\sigma $
. A is a path through
$T_{e+1}$
extending
$\sigma ^*$
, and
$T_{e+1}$
is one of the following two trees, depending on the outcome of
$\mathcal {M}_{e+1}$
:Footnote
9
-
(1) the labeled tree T produced by Procedure(
$e+1$ ,
$\sigma ^*$ ,
$T_e$ ), which is a subtree of
$(T_e)_{\succeq \sigma ^*}$
; or
-
(2) the tree
$(T_e)_{\succeq \tau }$
for some
$\tau \succeq \sigma ^*$
with
$\ell _e(\tau ) = \top $ .
In either case, every initial segment of A on
$T_e$
extending
$\rho _j$
has
$\ell _e$
-label
$\succ \varnothing $
, and
$\varnothing = \eta _{> e}$
. (Note that
$\rho _j$
extends
$\sigma $
and is extended by A.)
Now we argue backward by induction. Suppose that every initial segment of A on
$T_{e'}$
extending
$\rho _j$
is an observation node on
$T_{e'}$
. We want to argue that the same holds on
$T_{e'-1}$
. We have two cases, depending on the outcome of
$\mathcal {M}_{e'}$
:
-
• The outcome of
$\mathcal {M}_{e'}$ is
$\infty $ . Then
$\eta _{>e'-1} = \infty \eta _{>e'}$ and
$T_{e'}$ is the labeled tree T produced by Procedure(
$e'$ ,
$A_{\mathcal {P}_{e'-1}}$ ,
$T_{e'-1}$ ). By Lemma 7.7, given
$\sigma \in T_{e'}$ and
$\ell _{e'}(\sigma ) \succ \eta _{>e'}$
, we have that
$\ell _{e'-1}(\sigma ) \succ \infty \eta _{>e'} = \eta _{>e'-1}$
. As every initial segment of A on
$T_{e'}$
extending
$\rho _j$
is an observation node on
$T_{e'}$
, we have that the same holds on
$T_{e'-1}$
.
-
• The outcome of
$\mathcal {M}_{e'}$ is
$\mathsf {f}$ . Then
$\eta _{>e'-1} = \mathsf {f} \eta _{>e'}$ and
$T_{e'}$ is the tree
$(T_{e'-1})_{\succeq \tau }$
for some
$\tau $
with
$\ell _{e'-1}(\tau ) = \top $ . The labels on
$T_{e'}$ are defined so that if
$\sigma \in T_{e'}$ , then
$\ell _{e'-1}(\sigma ) = \mathsf {f} \ell _{e'}(\sigma )$ or
$\ell _{e'-1}(\sigma ) = \top $ . Thus, as
$\ell _{e'}(\sigma ) \succ \eta _{>e'}$
for every initial segment
$\sigma $
of A on
$T_{e'}$
extending
$\rho _j$
, we get that
$\ell _{e'-1}(\sigma ) \succ \mathsf {f} \eta _{>e'} = \eta _{>e'-1}$
.⊣
Now we prove that if
$\Psi ^A(r)$
converges, then the simulation
$\Xi (r)$
converges as well, though it is possible that it will have a different value if there was some other computation
$\Psi ^{\sigma }(r)$
which converged before
$\Psi ^A(r)$
did. Moreover, the simulation will not be too much delayed. The proof is essentially showing that we do in fact simulate computations at observation nodes.
Lemma 8.5. If
$\pi (\mathcal {L}_e) = \infty $
, and
$\Psi ^A(r) \downarrow $
, then
$\Xi (r) \downarrow $
. Moreover, there is a polynomial p depending only on e such that if
$\Psi _s^A(r) \downarrow $
, then
$\Xi _{p(s)}(r) \downarrow $
.
Proof. By the previous lemma, for each
$e' \leq e$
, every sufficiently long initial segment of A on
$T_{e'}$
is an observation node above
$\rho _j$
on
$T_{e'}$
. Let
$\sigma \in T_{e}$
be an initial segment of A and s a stage such that
$\Psi ^{\sigma }_s(r) \downarrow $
. We may assume that
$\sigma $
is sufficiently long that
$\sigma $
extends
$\rho _j$
.
Fix
$e'$
and let
$T_{e'}[n]$
be the greatest level of the tree
$T_{e'}$
defined by stage s. Then, as
$\sigma $
is an initial segment of A on
$T_{e'}$
,
$\ell _{e'}(\sigma ) \succ \eta _{>e'}$
. We check the conditions (for
$e'$
) from the definition of the simulation
$\Xi $
. Either
$\sigma $
is on
$T_{e'}[n]$
, or some initial segment
$\sigma '$
of
$\sigma $
is a leaf of
$T_{e'}[n]$
. In the second case, let us check that we satisfy (
$*$
). We only need to check (
$*$
) in the case that
$T_{e'}$
was defined using Procedure,
$$ \begin{align*} \operatorname{\mathrm{pred}}_{\operatorname{\mathrm{scope}}_{e'}(\sigma')}(\ell_{e'}(\sigma')) = \eta_{> e'},\end{align*} $$
and
$\sigma '$
has at stage s been designated waiting with main children
$\sigma ^*$
and
$\sigma ^{**}$
. Now, if
$\sigma \in T_e$
does not extend
$\sigma ^*$
or
$\sigma ^{**}$
then it would have to extend one of the other children of
$\sigma $
, namely a secondary child
$\sigma ^\dagger $
of
$\sigma '$
with
$$ \begin{align*} \ell_{e'}(\sigma^\dagger) = \eta_{> e'}.\end{align*} $$
This contradicts the fact that
$\ell _{e'}(\sigma ) \succ \eta _{>e'}$
.
Since this is true for every
$e' \leq e$
, and
$\Psi _s^{\sigma }(r) \downarrow $
, the simulation defines
$\Xi (r) = \Psi _s^{\sigma }(r)$
if
$\Xi (r)$
is not already defined.
The simulation defines
$\Xi (r)$
at the sth stage of the simulation. By Remark 8.3, the sth stage of the simulation can be computed in time polynomial in s.⊣
Lemma 8.5 covers the infinitary outcome of
$\mathcal {L}_e$
. For the finitary outcome, we need to see that any computation simulated by
$\Xi $
is witnessed by a computation on the tree, because the use of such a computation is a diagonalization node that was not removed from the tree.
Lemma 8.6. If
$\Xi (r) \downarrow $
then there is
$\sigma \in T_e$
,
$\sigma \succeq \rho _j$
, such that
$\Psi ^{\sigma }(r) = \Xi (r)$
.
Proof. Since
$\Xi (r) \downarrow $
, by definition of the simulation, there is a stage s and a
$\sigma \in T_{-1}$
with
$|\sigma | < s$
such that
$\Psi _s^{\sigma }(r) \downarrow $
for which we set
$\Xi (r) = \Psi _s^{\sigma }(r)$
. At stage s, for each
$e' \leq e$
this
$\sigma $
satisfied either (S1) or (S2). Fix this s for the rest of the proof.
We argue by induction on
$-1 \leq i \leq e$
that there is
$\sigma ^*$
a child on
$T_i$
of a node
$\sigma $
with:
-
(1)
$\ell _i(\sigma ^*) \succeq \eta _{>i}$ ;
-
(2)
$\ell _i(\sigma ) \succ \eta _{>i}$
;
-
(3)
$\Xi (r) = \Psi ^{\sigma ^*}_s(r)$ ; and
-
(4)
$\sigma ^*$ satisfies either (S1) or (S2) for each
$e'$ with
$i < e' \leq e$ .
By the previous paragraph, this is true for
$i = -1$
. If we can show it for
$i = e$
, then the lemma is proved. All that is left is the inductive step.
Suppose that it is true for i; we will show that it is true for
$i+1$
. We have two cases, depending on the outcome of
$\mathcal {M}_{i+1}$
.
Case 1.
$\pi _{\mathcal {M}_{i+1}} = \infty $
, the infinitary outcome.
Since
$\pi _{\mathcal {M}_{i+1}} = \infty $
,
$T_{i+1}$
is the
$(i+1)$
-splitting tree defined by Procedure(
$i+1$
,
$A_{\mathcal {P}_{i}}$
,
$T_i$
). Fix
$\sigma ^*$
from the induction hypothesis. Since
$\Psi ^{\sigma ^*}_s(r)$
converges, we have that
$|\sigma ^*| \leq s$
and so
$\sigma ^* \in T_{i}[s]$
. Let n be the greatest level of
$T_{i+1}$
defined by stage s.
At stage s,
$\sigma ^*$
satisfied either (S1) or (S2) for
$i+1$
. If it satisfies (S1) then
$\sigma ^* \in T_{i+1}[n]$
with
$\ell _{i+1}(\sigma ^*) \succeq \eta _{>i+1}$
and we are done. So assume that
$\sigma ^*$
satisfies (S2). Then
$\sigma ^*$
extends a leaf
$\sigma _0$
of
$T_{i+1}[n]$
, with
$\ell _{i+1}(\sigma _0) \succeq \eta _{>i+1}$
, and also satisfies (
$*$
). Let
$\sigma _0,\sigma _1,\ldots ,\sigma _n$
be the maximal sequence of nodes on
$T_{i+1}$
, each a child on
$T_{i+1}$
of the previous one, beginning with
$\sigma _0$
and consisting of strict predecessors of
$\sigma ^*$
.
Claim 1. For each k,
$\sigma _{k+1}$
is a main child of
$\sigma _k$
on
$T_{i+1}$
. Thus for each k,
$\ell _{i+1}(\sigma _k) \succ \eta _{> i+1}$
.
Proof. We argue inductively. Suppose that
$\sigma _{k}$
is a main child of
$\sigma _{k-1}$
,
$\sigma _{k-1}$
is a main child of
$\sigma _{k-2}$
, and so on. Thus
$\ell _{i+1}(\sigma _k) \succ \eta _{>i+1}$
. (If
$\sigma _0 = \rho _j$
we use the fact that
$\ell _{i+1}(\rho _j) = \top $
.)
Suppose that
$\sigma _{k+1}$
is put on
$T_{i+1}$
at a stage
$t \geq s$
. We claim that if
$\sigma _{k+1}$
was a secondary child of
$\sigma _k$
, then
$\sigma _{k+1}$
would extend
$\sigma ^*$
It suffices to show that
$\sigma ^*$
is among the nodes of
$T_i$
extending
$\sigma _k$
with
$\ell _i$
-label
$\succeq \eta _{>i}$
, from which the secondary child is chosen. Indeed, as remarked above,
$\sigma ^* \in T_i[s]$
,
$\ell _i(\sigma ^*) \succeq \eta _{>i}$
, and
$\eta _{>i} = \infty \eta _{>i+1} \succ \varnothing $
.⊣
Claim 2. One of the children
$\sigma _{n+1}$
of
$\sigma _{n}$
extends
$\sigma ^*$
and has
$\ell _{i+1}(\sigma _{n+1}) \succeq \eta _{>i+1}$
.
Proof. Suppose that the children of
$\sigma _n$
are put on
$T_{i+1}$
at a stage
$t \geq s$
. To see that one of the children of
$\sigma _n$
extends
$\sigma ^*$
, it suffices to show that
$\sigma ^*$
is among the nodes of
$T_i$
extending
$\sigma _n$
with
$\ell _i$
-label
$\succeq \eta _{>i}$
, as all such nodes are either compatible with a main child of
$\sigma _n$
—in which case
$\sigma ^*$
would be extended by this main child by choice of
$\sigma _n$
—or are extended by a secondary child of
$\sigma _n$
. Indeed, as in the previous claim,
$\sigma ^* \in T_i[s]$
,
$\ell _i(\sigma ^*) \succeq \eta _{>i}$
, and
$\eta _{>i} = \infty \eta _{>i+1} \succ \varnothing $
.
We have two cases. (a) Suppose that
$\sigma ^*$
is extended by a main child
$\sigma _{n+1}$
of
$\sigma _n$
. Then
$\ell _{i+1}(\sigma _{n+1}) \succeq \ell _{i+1}(\sigma _n) \succ \eta _{>i+1}$
and we are done.
(b) Suppose that
$\sigma ^*$
is not extended by a main child of
$\sigma _n$
. We have
$\ell _i(\sigma ^*) \succeq \eta _{>i}$
, and so there is an extension
$\sigma _{n+1}$
of
$\sigma ^*$
on
$T_i$
with
$\sigma _{n+1}$
on the tth level of
$T_i$
, namely,
$\sigma _{n+1}$
is a main child (on
$T_{i}$
) of main children of …of
$\sigma ^*$
. Now
$\sigma _{n+1}$
is taken as a secondary child of
$\sigma _n$
. As
$\ell _{i+1}(\sigma _n) \succ \eta _{> i+1}$
,
$\ell _i(\sigma _{n+1}) \succeq \eta _{>i}$
, and
$\eta _{>i} = \infty \eta _{>i+1}$
, we have
$\ell _{i+1}(\sigma _{n+1}) \succeq \eta _{>i+1}$
.⊣
Claim 3.
$\sigma _{n+1}$
and its parent
$\sigma _n$
satisfy (1)–(4).
Proof. We get (1) from Claim 2. We get (2) from Claim 1 together with the fact that
$\sigma _n$
is the parent of
$\sigma _{n+1}$
on
$T_{i+1}$
. (3) follows immediately from the fact that
$\sigma _{n+1}$
extends
$\sigma ^*$
.
Fix
$e'$
,
$i+1 < e' \leq e$
. Note that as
$\sigma ^*$
did not satisfy (S1) for
$i+1$
we had
$\sigma ^* \notin T_{i+1}[n]$
. Thus
$\sigma ^* \notin T_{e'}[s]$
and so
$\sigma ^*$
did not satisfy (S1) for
$e'$
. So
$\sigma ^*$
satisfied (S2) for
$e'$
. As an extension of
$\sigma ^*$
,
$\sigma _{n+1}$
also satisfies (S2) for
$e'$
.⊣
This ends Case 1.
Case 2.
$\pi _{\mathcal {M}_{i+1}} = \tau $
, a finitary outcome.
Let
$\sigma ^*$
be as in the induction hypothesis for i. Let
$\sigma $
be the parent of
$\sigma ^*$
on
$T_{i+1}$
; note that
$\sigma $
is a predecessor of the parent of
$\sigma ^*$
on
$T_i$
. We have that
$\ell _i(\sigma ^*) \succeq \eta _{>i}$
and
$\sigma $
is on
$T_i$
with
$\ell _i(\sigma ) \succ \eta _{>i}$
.
Note that since
$\pi _{\mathcal {M}_{i+1}} = \tau $
is the finitary outcome,
$\eta _{> i}$
begins with
$\mathsf {f}$
, and
$\eta _{>i} = \mathsf {f} \eta _{>i+1}$
.
$T_{i+1}$
is the tree
$(T_i)_{\succeq \tau }$
for some
$\tau $
with
$\ell _i(\tau ) = \top $
. The labels
$\ell _{i+1}$
of the tree
$T_{i+1}$
are defined from the labels
$\ell _{i}$
of the tree
$T_i$
by setting
$\ell _{i+1}(\tau ') = \top $
if
$\ell _{i}(\tau ') = \top $
, or
$\ell _{i+1}(\tau ') = \eta ^*$
if
$\ell _{i}(\tau ') = \mathsf {f}\eta ^*$
. Thus
$\sigma ^*$
is still on
$T_{i+1}$
, and
$\ell _{i+1}(\sigma ^*) \succeq \eta _{>i+1}$
. Moreover, for each
$\tau ' \in T_{i+1}$
,
$\ell _{i}(\tau ') \succ \eta _{>i}$
if and only if
$\ell _{i+1}(\tau ') \succ \eta _{>i+1}$
. So
$\sigma $
is on
$T_{i+1}$
with
$\ell _{i+1}(\sigma ) \succ \eta _{>i+1}$
, as desired.⊣
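The relabeling used in Case 2 is a simple left strip, and the equivalence between
$\ell _{i}(\tau ') \succ \eta _{>i}$
and
$\ell _{i+1}(\tau ') \succ \eta _{>i+1}$
can be checked mechanically. The following toy uses our own encoding (labels as tuples over {'f', 'inf'}, 'top' for
$\top $
, and the order read as proper extension), which is an assumption, not the paper's formalism:

```python
TOP = 'top'  # our stand-in for the label "top", which extends everything

def extends(label, eta):
    """label strictly above eta: TOP extends everything; otherwise eta
    must be a proper initial segment of label (our toy encoding)."""
    if label == TOP:
        return True
    return len(label) > len(eta) and label[:len(eta)] == eta

def step_label(label):
    """The Case 2 relabeling from T_i to T_{i+1}: TOP stays TOP, and a
    label of the form f*eta drops its leading 'f'."""
    if label == TOP:
        return TOP
    assert label[0] == 'f', "only labels beginning with 'f' survive"
    return label[1:]
```

With
$\eta _{>i} = \mathsf {f} \eta _{>i+1}$
, stripping the leading 'f' from label and guess sequence alike preserves the comparison, which is the equivalence invoked at the end of Case 2.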
We now show how to use these lemmas to prove that A is low-for-speed.
Lemma 8.7. A is low-for-speed.
Proof. Given
$\langle e,i \rangle $
, suppose that
$\Psi _{e}^A = R_{i}$
and that
$\Psi _e^A(n)$
is computable in time
$t(n)$
. We must show that
$R_{i}$
is computable in time
$p(t(n))$
for some polynomial p. Note that the outcome of
$\mathcal {L}_{\langle e,i \rangle }$
must be
$\infty $
, as otherwise we would have ensured that
$\Psi _{e}^A \neq R_{i}$
. Let j be such that A extends the
$\rho _j$
from the simulation for e. So by Lemma 8.5
$\Xi _{\langle e,i \rangle ,j}$
is total and there is a polynomial p depending only on
$\langle e,i \rangle $
such that if
$\Psi _{e}^A(n)$
is computed in time s, then
$\Xi _{\langle e,i \rangle ,j}(n)$
is computed in time
$p(s)$
.
Now we argue that
$\Xi _{\langle e,i \rangle ,j}$
computes
$R_{i} = \Psi _e^A$
. Suppose not; then there is n such that
$\Xi _{\langle e,i \rangle ,j}(n) \neq R_{i}(n) = \Psi _{e}^A(n)$
. Since
$\Xi _{\langle e,i \rangle ,j}(n)$
does in fact converge, by Lemma 8.6 there is
$\sigma \in T_{\langle e,i \rangle }$
extending
$\rho _j$
such that
$\Psi _e^{\sigma }(n) = \Xi _{{\langle e,i \rangle },j}(n) \neq R_i(n)$
. This contradicts the fact that the outcome of
$\mathcal {L}_{\langle e,i \rangle }$
is
$\infty $
, as we would have chosen
$\sigma $
as the outcome.⊣