Dear John,

On Mon, 11 Aug 2014, John Steel wrote:

If certain technical conjectures are proved, then “V = ultimate L” is an attractive axiom. The paper “Gödel’s program”, available on my webpage, gives an argument for that.

Many thanks for mentioning this paper of yours, which I read with great interest. It would be very helpful if you could clarify a few points made in that paper, especially with regard to interpretative power, practical completeness, your multiverse discussion and Woodin’s new axiom. I also have a few more minor questions which I’ll stick at the end.

**Page 6**

Footnote 10: I think it’s worth mentioning that for reflexive theories, S is interpretable in T iff T proves the consistency of each finite subtheory of S. So in fact for “natural” theories there really is no difference between consistency strength and interpretative power. This comes up again later in the paper.
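In symbols (this is Orey’s compactness theorem; I am quoting it from memory, so modulo the exact formulation):

```latex
% Orey: for reflexive, recursively enumerable theories S and T,
% S is interpretable in T if and only if T proves the consistency
% of every finite subtheory of S.
S \trianglelefteq T
\quad\Longleftrightarrow\quad
T \vdash \mathrm{Con}(S_0)\ \text{ for every finite } S_0 \subseteq S .
```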

You state:

If T, U are natural theories of consistency strength at least that of ‘there are infinitely many Woodin cardinals’ then the consequences of T in 2nd order arithmetic are contained in those of U, or vice-versa.

I don’t agree. First recall a milestone of inner model theory:

*If the singular cardinal hypothesis fails then there is an inner model with a measurable cardinal.*

From this it is clear that ZFC + “there is an inner model with a measurable cardinal” is a natural theory. Similarly, for any large cardinal axiom LCA the theory ZFC + “there is an inner model satisfying LCA” is a natural theory. But this theory has the same consistency strength as ZFC + LCA yet fails to prove sentences which are provable in ZFC + LCA. (By Shoenfield it does prove any $latex \Sigma^1_2$ sentence provable in ZFC + LCA.) Nor does it prove even $latex \Pi^1_1$ determinacy.
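To spell out the Shoenfield step (writing $latex M$ for an inner model satisfying ZFC + LCA, and $latex \varphi$ for a $latex \Sigma^1_2$ sentence):

```latex
% Sigma^1_2 sentences are upward absolute from inner models to V
% (Shoenfield absoluteness), so:
\mathrm{ZFC} + \mathrm{LCA} \vdash \varphi
\;\Longrightarrow\;
M \models \varphi
\;\Longrightarrow\;
V \models \varphi .
```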

So what definition of natural do you have in mind here? The fact is, and this comes up later in your paper, there are many theories equiconsistent with theories of the form ZFC + LCA which do not prove the existence of large cardinals.

Footnote 11: Another issue with “natural”. Consider the theories PA + “the least $latex n$ at which the Paris–Harrington function is not defined is even”, and the same theory with “even” replaced by “odd”. These theories are equiconsistent with PA yet contradict each other.

**Page 8**

Theorem 3.2: *Given arbitrarily large Woodin cardinals, set-generic extensions satisfy the same sentences about $latex L(\mathbb{R})$.*

It is easy to see that if one drops “set-generic” then the result is false, and even if one replaces “set-generic extensions” with “extensions with arbitrarily large Woodin cardinals” it is still false (in general). I mention this as you claim that “large cardinal axioms are complete for sentences about $latex L(\mathbb{R})$”; what notion of “complete” do you have in mind? Surely the requirement of set-genericity is a very strong one!

**Page 10**

There may be consistency strengths beyond those we have reached to date that cannot be reached in any natural way without deciding CH.

This is a fascinating comment! I never imagined this possibility. Do you have any suggestions as to what these could be?

**Page 11**

Here you talk about the set-generic multiverse. This is where you lose me almost completely; I don’t seem to grasp the point. Here are some comments and questions.

You have explained, and I agree, that natural theories under the equivalence relation of equiconsistency fall into a well-ordered hierarchy with theories of the form ZFC + LCA as representatives of the equivalence classes. I would go further and say that equiconsistency is the same as mutual interpretability for natural theories. But as I said there are many natural theories not of the form ZFC + LCA; yet you claim the following:

The way we interpret set theories today is to think of them as theories of inner models of generic extensions of models satisfying some large cardinal hypothesis … We don’t seem to lose any meaning this way.

Of course by “generic” you mean “set-generic”, as you are trying to motivate the set-generic multiverse. Now what happened to the theories (which may deny large cardinal existence) associated to Easton’s model where GCH fails at all regular cardinals, or more dramatically a model where $latex 2^\kappa$ is greater than $latex \kappa^+$ for every cardinal $latex \kappa$? These don’t seem to fit into your rather restricted framework. In other words, for reasons not provided, you choose to focus only on theories which describe models obtained by starting with a model of some large cardinal hypothesis and then massaging it slightly using set-generic extensions and ground models. Put differently, you have taken our nice equivalence relation under mutual interpretability and cut it down to a very restricted subdomain of theories which respect large cardinal existence. What happened to the other natural theories?

On **Page 13** you give some explanation as to why you choose just the set-generic multiverse and claim:

“Our multiverse is an equivalence class of worlds under ‘has the same information’.”

But this can’t be right: suppose that $latex G$, $latex H$ are mutually generic for an Easton class-product; then $latex V[G]$ and $latex V[H]$ lie in different set-generic multiverses but have the same information.

It is a bit cumbersome, but surely one could build a suitable and larger multiverse using some iterated class-forcing instead of the Levy collapses you use. But as you know I’m not much of a fan of any use of forcing in foundational investigations, and moreover I’d first like to better understand what it is you want a multiverse foundation to achieve.

**Page 14**

Here you introduce 3 theses about the set-generic multiverse. I don’t understand the first one (“Every proposition”? What about CH?) or the second one (what does “makes sense” mean?). The third one, asserting the existence of a “core”, is clear enough, but for the uninitiated it should be made clear that this is miles away from what a “core” in the sense of core model theory is like: Easton products kill “cores” and reverse Easton iterations produce them! So being the “core” of one’s set-generic multiverse is a rather flimsy notion. Can you explain why on Page 15 you say that the “role [of ‘core’ existence] in our search for a universal framework theory seems crucial”?

You then advertise Hugh’s new axiom as having 3 properties:

- It implies core existence.
- It suggests a way of developing fine-structure theory for the core.
- It may be consistent with all large cardinals.

Surely there is something missing here! Look at my paper with Peter Holy, “A quasi-lower bound on the consistency strength of PFA”, to appear in the Transactions of the American Mathematical Society. (I spoke about it at the 1st European Set Theory meeting in 2007.)

We use a “formidable” argument to show that condensation with acceptability is consistent with essentially all large cardinals. As we use a reverse Easton iteration, the models we build are the “cores”, in your sense, of their own set-generic multiverses. And condensation plus acceptability is a big step towards a fine-structure theory. It isn’t hard to put all of this into an axiom, so our work fulfills the description you give above of Hugh’s axiom.

Is it possible that what is missing is some kind of absoluteness property that you left out, which Hugh’s axiom guarantees but my work with Holy does not?

OK, those are my main comments and questions. I’ll end with some lesser points.

**Page 3**

Defn 2.1: Probably you want to restrict to 1st order theories.

Adding Con(T) to T can destroy consistency unless you are talking about “true” theories.

“Reflection principles”: It is wrong to take these to go past axioms consistent with $latex V=L$. Reflection has a meaning which should be respected.

**Page 5**

Conjecture at the top about well-ordered consistency strengths: It is vague because of “natural extension of ZFC” and “large cardinal hypothesis”. But at least the first vagueness can be removed by just saying that the axioms Con(ZFC + LCA) are cofinal in all Con(T)’s, natural or otherwise. A dream result would be to show that the minimal ordinal heights of well-founded models of “LCA’s” are cofinal in those of arbitrary sentences with well-founded models. It is not totally out of the question that one could prove something like that (with some suitable definition of “LCA”), but I don’t know how to do it.
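In symbols, writing $latex \mathrm{ht}(\varphi)$ for the minimal ordinal height of a well-founded model of $latex \varphi$ (my notation), the dream result would be:

```latex
% Under some suitable definition of "large cardinal axiom":
\{\, \mathrm{ht}(\mathrm{ZFC} + \mathrm{LCA}) : \mathrm{LCA}\ \text{a large cardinal axiom} \,\}
\ \text{is cofinal in}\
\{\, \mathrm{ht}(\varphi) : \varphi\ \text{a sentence with a well-founded model} \,\} .
```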

In your discussion of consistency lower bounds it is very much worthwhile to consider “quasi” lower bounds. Indeed, core model theory doesn’t get more than Woodin cardinals from PFA, but the Viale–Weiß work and my work with Holy cited above show that there are quasi-lower bounds in the supercompact range. For me this is important evidence for the consistency of supercompacts.

**Page 12**

The Laver–Woodin result: Are you sure you’ve got this right? I see that the ground model is definable in the $latex P$-generic extension with parameter $latex P(\kappa)$ of the ground model, where $latex \kappa = |P|^+$ as computed in the extension, but not from $latex P$ and the $latex P$-generic.

Best,

Sy

PS: How do you use the concept of “truth” in set theory? Is it governed by “good math” and “mathematical depth”, or do you see it as more than that? Your answer will help me understand your paper better.