Re: Paper and slides on indefiniteness of CH

Dear Hugh and Pen,

Hugh:

1. You proposed:

Coding Assumption: if M is weakly #-generated then M can be coded by a real in an outer model which is weakly #-generated

I can’t see why this would be true. One needs \alpha-iterable presharps for each \alpha to witness the weak #-generation of M, and although each of these presharps can be preserved by some real coding M, there is no single real that does this for all \alpha simultaneously.

Instead, I realise that the theory-version of \textsf{IMH}^\# results in a statement for countable models which is a bit weaker than what I said. So I have to change the formulation of \textsf{IMH}^\# again! (Peter, before you go crazy, let me again emphasize that this is how the HP works: We make an investigation of maximality criteria and only through a lot of math and contemplation do we start to understand what is really going on. It requires time and patience.)

OK, the theory version would say: #-generation for V is consistent in V-logic (formulated in any lengthening of V) and for every \phi, the theory in V-logic which says that V is #-generated and \phi holds in an outer model M of V which is #-generated proves that \phi holds in an inner model of V.

What this translates to for a countable model V is then this:

(*) V is weakly #-generated and for all \phi: Suppose that whenever g is a generator for V (iterable at least to the height of V), \phi holds in an outer model M of V with a generator which is at least as iterable as g. Then \phi holds in an inner model of V.

For each \phi the above hypothesis implies that for each countable \alpha, \phi holds in an outer model of V with an \alpha-iterable generator. But if V is in fact fully #-generated then the hypothesis implies that \phi holds in an outer model of V which is also fully #-generated. So now we get consistency just like we did for the original oversimplified form of the \textsf{IMH}^\# for countable models.
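For the record, here is (*) in one schematic line (this is just a restatement of the above, with M ranging over outer models and N over inner models of the countable model V):

\[
\forall \phi\ \Big[\ \forall g\ \big(g \text{ generates } V \implies \exists M\, \exists g'\ (g' \text{ generates } M,\ g' \text{ at least as iterable as } g,\ M \models \phi)\big)\ \implies\ \exists N\ (N \models \phi)\ \Big]
\]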

2. You said:

I think if our evolving understanding of the large cardinal hierarchy rests primarily on the context of V = Ultimate L then very likely the rich generic extensions are not playing much of a role in understanding the large cardinal hierarchy … This for me would build the case for V = Ultimate L and against these rich extensions. It would then take something quite significant in the theory of the rich extensions to undermine that.

Sorry, I still don’t get it. Forcing extensions of L don’t play much of a role in understanding small large cardinals, do they? Yet if 0^\# provably does not exist I don’t see the argument for V = L; in fact I don’t even see the argument for CH. Now why wouldn’t you favour something like “V is a forcing extension of Ultimate L which satisfies MM” or something like that?

3. The problem

(*) Suppose \gamma^+ is not correctly computed by HOD for any infinite cardinal \gamma. Must weak square hold at some singular strong limit cardinal?

actually grew out of my recent AIM visit with Cummings, Magidor, Rinot and Sinapova. We showed that the successor of a singular strong limit \kappa of cofinality \omega can be large in HOD, and I started asking about Weak Square. It holds at \kappa in our model.

Pen:

You have caved in to Peter’s P’s and V’s (Predictions and Verifications)!

Peter wrote:

The notion of “good set theory” is too vague to do much work here. Different people have different views of what “good set theory” amounts to. There’s little intersubjective agreement. In my view, such a vague notion has no place in a foundational enterprise. The key notion is evidence, evidence of a form that people can agree on.

Then you said:

I probably should have stepped in at the time to remark that I’ve been using the term ‘good set theory’ for the set theory that enjoys the sort of evidence Peter is referring to here …

Now I want to defend “Defending”! There you describe “the objective reality that set-theoretic methods track” in terms of “depth, fruitfulness, effectiveness, importance and so on” (each with the prefix “mathematical”); there is no mention of P’s and V’s. And I think you got it dead right there, so why back off now? I want my former Pen back!

As I said, I do agree that P’s and V’s are of value: they make a “good set theory” better. But they are not the be-all and end-all of “good set theory”! For a set theory to be good there is no need for it to make “verified predictions”; look at Forcing Axioms. And do you really think that they solve the “consensus” problem? Isn’t there also a lack of consensus about what predictions a theory should make and how devastating it is to the theory if they are not verified? Do you really think that we working set-theorists are busily making “predictions”, hopefully before Inspector Koellner of the Set Theory Council makes his annual visit to see how we are doing with their “verification”?

Pen, I really think you’ve made a wrong turn here. You were basing your Thin Realism very sensibly on what set-theorists actually do in their practice, what they think is important, what will lead to exciting new developments. P’s and V’s are a side issue, sometimes of value but surely not central to the practice of “good set theory”.

There is another point. Wouldn’t you want a discussion of truth in set theory to be receptive to what is going on in the rest of mathematics? Everyone keeps ignoring this point in this thread, despite my repeated attempts to bring it forward. Does a functional analyst or algebraist care about Ultimate L or the HP? Of course not! They might laugh if they were to hear about the arguments that we have been having, which for them are just esoteric and quite irrelevant to mathematics as a whole. Forcing Axioms can at least lay claim to being really useful both for set theory and other areas of mathematics; surely they have to be part of a theory of truth. Anyone who makes claims about set-theoretic truth, be it via Ultimate L or the HP or anything else, while ignoring them is missing something important. And won’t it be embarrassing if, 100 years from now, set-theorists announce that they have finally figured out what the “correct axioms for set theory” are, and mathematicians from other fields don’t care, as the “new and true axioms” are either quite useless for what they are doing or even conflict with the axioms that they would like to have for their own “good mathematics”?

Hence my conclusion is that the only sensible move for us to make is to gather evidence from all three sources: set theory as an exciting and rapidly-developing branch of math, set theory as a useful foundation for math, and the evidence we can draw from the concept of set itself via the maximality of V in height and width. None of these three types (which I have labelled as Types 1, 2 and 3, respectively) should be ignored. And we must also recognise that the procedure for uncovering evidence of these three types depends heavily on the type in question. “Defending” (even without P’s and V’s) teaches us how the process works in Type 1. For Type 2 we have to get into the trenches and see what the weapons being used in core mathematics are, and how we can help when independence infiltrates. For Type 3 it has to be what I am doing: an open-minded, sometimes sloppy and constantly changing (at least at the start) “shotgun approach” to investigating maximality criteria, with the optimistic and determined aim of seeing a clear picture after a lot of very hard work is accomplished. The math is very challenging and, as you have seen, it is even hard to get things formulated properly. But I have lost patience with, and will now ignore, all complaints that “it cannot be done”, complaints based on nothing more than unjustified pessimism.

Yes, there is a lack of consensus regarding “good set theory”. But Peter is plain wrong to say that it has “no place in a foundational enterprise”. It has a very important place, but to reach a consensus about what the “correct” axioms of set theory should be, the evidence from “good set theory” must be augmented, not just by P’s and V’s but also by other forms of evidence coming from math outside of set theory and from the study of the maximality of V in height and width.

Pen, I would value your open-minded views on this. I hope that you are not also going to reduce “good set theory” to P’s and V’s and complain that the HP “cannot be done”.

Thanks, Sy

Re: Paper and slides on indefiniteness of CH

Dear Sy,

A. The principles in the hierarchy IMH(Inaccessible), IMH(Mahlo), IMH(the Erdos cardinal \kappa_\omega exists), etc. up to \textsf{IMH}^\# must be regarded as ad hoc unless one can explain the restriction to models that satisfy Inaccessibles, Mahlos, \kappa_\omega, etc. and #-generation, respectively.

One of my points was that #-generation is ad hoc (for many reasons, one being that you use the # to get everything below it and then you ignore the #). There has not been a discussion of the case for #-generation in this thread. It would be good if you could give an account of it and make a case for it on the basis of “length maximality”. In particular, it would be good if you could explain how it is a form of “reflection” that reaches the Erdos cardinal \kappa_\omega.

B. It is true that we now know (after Hugh’s consistency proof of \textsf{IMH}^\#) that \textsf{IMH}^\#(\omega_1) is stronger than \textsf{IMH}^\# in the sense that the large cardinals required to obtain its consistency are stronger. But in contrast to \textsf{IMH}^\# it has the drawback that it is not consistent with all large cardinals. Indeed it implies that there is a real x such that \omega_1=\omega_1^{L[x]}, and (in your letter about Max) you have already rejected any principle with that implication. So I am not sure why you are bringing it up.

(The case of \textsf{IMH}^\#\text{(card-arith)} is more interesting. It has a standing chance, by your lights. But it is reasonable to conjecture (as Hugh did) that it implies GCH and if that conjecture is true then there is a real x such that \omega_1=\omega_1^{L[x]}, and, should that turn out to be true, you would reject it.)

2. What I called “Magidor’s embedding reflection” in fact appears in a paper by Victoria Marshall (JSL 54, No.2). As it violates V = L it is not a form of height maximality (the problem is with the internal embeddings involved; if the embeddings are external then one gets a weak form of #-generation). Indeed Marshall Reflection does not appear to be a form of maximality in height or width at all.

No, Magidor Embedding Reflection appears in Magidor 1971, well before Marshall 1989. [Marshall discusses Kanamori and Magidor 1977, which contains an account of Magidor 1971.]

You say: “I don’t think that any arguments based on the vague notion of “maximality” provide us with much in the way of justification”. Wow! So you don’t think that inaccessibles are justified on the basis of reflection! Sounds like you’ve been talking to the radical Pen Maddy, who doesn’t believe in any form of intrinsic justification.

My comment was about the loose notion of “maximality” as you use it, not about “reflection”. You already know what I think about “reflection”.

3. Here’s the most remarkable part of your message. You say:

Different people have different views of what “good set theory” amounts to. There’s little intersubjective agreement. In my view, such a vague notion has no place in a foundational enterprise.

In this thread I have repeatedly and without objection taken Pen’s Thin Realism to be grounded on “good set theory” (or, if looking beyond set theory, on “good mathematics”). So you have now rejected not only the HP, but also Thin Realism. My view is that Pen got it exactly right when it comes to evidence from the practice of set theory; one must only acknowledge that such evidence is limited by the lack of consensus on what “good set theory” means.

You are right to say that there is value to “predictions” and “verifications”. But these only serve to make a “good set theory” better. They don’t really change much, as even if a brand of “good set theory” fails to fulfill one of its “predictions”, it can still maintain its high status. Take the example of Forcing Axioms: They have been and always will be regarded as “good set theory”, even if the “prediction” that you attribute to them fails to materialise.

Peter, your unhesitating rejection of approaches to set-theoretic truth is not helpful. You faulted the HP for not being “well-grounded” as its grounding leans on a consensus regarding the “maximality of V in height and width”. Now you fault Thin Realism (TR) for not being “well-grounded” as its grounding leans on “good set theory”. There is an analogy between TR and the HP: Like Pen’s second philosopher, Max (the Maximality Man) is fascinated by the idea of maximality of V in height and width and he “sets out to discover what the world of maximality is like, the range of what there is to the notion and its various properties and behaviours”. In light of this analogy, it is reasonable that someone who likes Thin Realism would also like the HP and vice-versa. It seems that you reject both, yet fail to provide a good substitute. How can we possibly make progress in our understanding of set-theoretic truth with such skepticism? What I hear from Pen and Hugh is a “wait and see” attitude: they want to know what criteria and consensus come out of the HP. Yet you want to reject the approach out of hand. I don’t get it. Are you a pessimist at heart?

No, I am an unrepentant optimist. (More below.)

It seems to me that in light of your rejection of both TR and HP, the natural way for you to go is “radical skepticism”, which denies this whole discussion of set-theoretic truth in the first place. (Pen claimed to be a radical skeptic, but I don’t entirely believe it, as she does offer us Thin Realism.) Maybe Pen’s Arealism is your real cup of tea?

A. I don’t see how you got from my statement

Different people have different views of what “good set theory” amounts to. There’s little intersubjective agreement. In my view, such a vague notion has no place in a foundational enterprise.

to conclusions about my views on realism and truth (e.g. my “[rejection] … of Thin Realism” and my “unhesitating rejection of approaches to set-theoretic truth”)!

Let’s look at the rest of the passage:

“The key notion is evidence, evidence of a form that people can agree on. That is the virtue of actually making a prediction for which there is agreement (not necessarily universal — there are few things beyond the law of identity that everyone agrees on — but which is widespread) that if it is proved it will strengthen the case and if it is refuted it will weaken the case.”

I said nothing about realism or about truth. I said something only about the epistemic notion that is at play in a case (of the kind you call Type-1) for new axioms, namely, that it is not the notion of “good set theory” (a highly subjective, personal notion, where there is little agreement) but rather the notion of evidence (of a sort where there is agreement).

B. I was criticizing the employment of the notion of “good set theory” as you use it, not as Pen uses it.

As you use it, Jensen’s work on V = L is “good set theory” and the work on ZF+AD is “good set theory” (in fact, both are cases of “great set theory”). On that we can all agree. (One can multiply examples: Barwise on KP, etc.) But that has nothing to do with whether we should accept V = L or ZF+AD.

As Pen uses it, it involves evidence in the straightforward sense that I have been talking about. (Actually, as far as I can tell she doesn’t use the phrase in her work. E.g. it does not occur in her most detailed book on the subject, “Second Philosophy”. She has simply used it in this thread as a catch phrase for something she describes in more detail, something involving evidence.) Moreover, as paradigm examples of evidence she cites precisely the examples that John, Tony, Hugh, and I have given.

In summary, I was saying nothing about realism or truth; I was saying something about epistemology. I was saying: The notion of “good set theory” (as you use it) has no epistemic role to play in a case for new axioms. But the notion of evidence does.

So when you talk about Type 1 evidence, you shouldn’t be talking about “practice” and “good set theory”. The key epistemic notion is rather evidence of the kind that has been put forward, e.g. the kind that involves sustained prediction and confirmation.

[I don't pretend that the notion of evidence in mathematics (and especially in this region, where independence reigns) is a straightforward matter. The explication of this notion is one of the main things I have worked on. I talked about it in my tutorial when we were at Chiemsee. You already have the slides but I am attaching them here in case anyone else is interested. It contains both an outline of the epistemic framework and the case for \text{AD}^{L(\mathbb R)} in the context of this framework. A more detailed account is in the book I have been writing (for several years now...)]

[C. Aside: Although I said nothing about realism, since you attributed views on the matter to me, let me say something briefly: It is probably the most delicate notion in philosophy. I do not have a settled view. But I am certainly not a Robust Realist (as characterized by Pen) or a Super-Realist (as characterized by Tait), since each leads to what Tait calls "an alienation of truth from proof." The view I have defended (in "Truth in Mathematics: The Question of Pluralism") has much more in common with Thin Realism.]

So I was too honest, I should not have admitted to a radical form of multiversism (radical potentialism), as it is then easy to misunderstand the HP as you have. As far as the choice of maximality criteria goes, I can only repeat myself: Please be open-minded and do not prejudge the programme before seeing the criteria that it generates. You will see that our intuitions about maximality criteria are more robust than our intuitions about “good set theory”.

I have been focusing on CTM-Space because (a) you said quite clearly that you were a radical potentialist and (b) the principles you have put forward are formulated in that context. But now you have changed your program yet again. There have been a lot of changes.
(1) The original claim

I conjecture that CH is false as a consequence of my Strong Inner Model Hypothesis (i.e. Levy absoluteness with “cardinal-absolute parameters” for cardinal-preserving extensions) or one of its variants which is compatible with large cardinals existence. (Aug. 12)

has been updated to

With apologies to all, I want to say that I find this focus on CH to be exaggerated. I think it is hopeless to come to any kind of resolution of this problem, whereas I think there may be a much better chance with other axioms of set theory such as PD and large cardinals. (Oct. 25)

(2) The (strong) notion of “intrinsic justification” has been replaced by the (weak) notion of “intrinsic heuristic”.

(3) Now, the background picture of “radical potentialism” has been replaced by “width-actualism + height potentialism”.

(4) Now, as a consequence of (3), the old principles \textsf{IMH}^\#, \textsf{IMH}^\#(\omega_1), \textsf{IMH}^\#\text{(card-arith)}, \textsf{SIMH}, \textsf{SIMH}^\#, etc. have been replaced by New-\textsf{IMH}^\#, New-\textsf{IMH}^\#(\omega_1), etc.

Am I optimistic about this program, program X? Well, it depends on what X is. I have just been trying to understand X.

Now, in light of the changes (3) and (4), X has changed and we have to start over. We have a new philosophical picture and a whole new collection of mathematical principles. The first question is obviously: Are these new principles even consistent?

I am certainly optimistic about this: If under the scrutiny of people like Hugh and Pen you keep updating X, then X will get clearer and more tenable.

That, I think, is one of the great advantages of this forum. I doubt that a program has ever received such rapid philosophical and mathematical scrutiny. It would be good to do this for other programs, like the Ultimate-L program. (We have given that program some scrutiny. So far, there has been no need for mathematical-updating — there has been no need to modify the Ultimate-L Conjecture or the HOD-Conjecture.)

Best,
Peter


Re: Paper and slides on indefiniteness of CH

Dear Sy,

You wrote:

When it comes to Type 1 evidence (from the practice of set theory as mathematics) we don’t require that opinions about what is “good set theory” be shared (and “the picture” is indeed determined by “good set theory”). As Peter put it:

“Different people have different views of what “good set theory” amounts to. There’s little intersubjective agreement. In my view, such a vague notion has no place in a foundational enterprise.”

What Peter wrote is this:

The notion of “good set theory” is too vague to do much work here. Different people have different views of what “good set theory” amounts to. There’s little intersubjective agreement. In my view, such a vague notion has no place in a foundational enterprise. The key notion is evidence, evidence of a form that people can agree on.

I probably should have stepped in at the time to remark that I’ve been using the term ‘good set theory’ for the set theory that enjoys the sort of evidence Peter is referring to here — for example, there was evidence for the existence of sets in the successes of Cantor and Dedekind, and more recently for PD, in the results cited by Peter, Hugh, John Steel, and others.  (Using the term ‘good set theory’ allows me to leave open the question of Thin Realism vs. Arealism.  For the Arealist, these same considerations are just reasons to add sets or PD to our mathematics/set theory, but the Thin Realist sees them as evidence for existence and truth.)

This doesn’t preclude people disagreeing about what parts of set theory they believe to be more interesting, important, promising, etc.  (Scientists also disagree on such matters.)

At the present juncture, it’s more difficult to find and assess new evidence, but that’s to be expected.  Peter and Hugh have made it clear, I think, that they regard many of the current options as open (including HP, when it begins to generate definite claims), that more information is needed. If one theory eventually ‘swamps’ the rest (I should have noted that ‘swamping’ often involves ‘subsuming’), then the apparently contrary evidence will have to be faced and explained.  (Einstein had to explain why there was so much evidence in support of Newton.)

All best,

Pen

Re: Paper and slides on indefiniteness of CH

Dear Sy,

On Nov 5, 2014, at 7:40 AM, Sy David Friedman wrote:

Hugh:

1. Your formulation of \textsf{IMH}^\# is almost correct:

M witnesses \textsf{IMH}^\# if

1) M is weakly #-generated.

2) If \phi holds in an outer model of M which is weakly #-generated then \phi holds in an inner model of M.

But as we have to work with theories, 2) has to be: If for each countable \alpha, \phi holds in an outer model of M which is generated by an \alpha-iterable presharp then \phi holds in an inner model of M.

Let’s call this New-\textsf{IMH}^\#.

Are you sure this is consistent?

Assume coding works in the weakly #-generated context:

Coding Assumption: if M is weakly #-generated then M can be coded by a real in an outer model which is weakly #-generated.

Then:

Theorem. Assume PD. Then there is a real x such that for all ctm M, if x is in M then M does not satisfy New-\textsf{IMH}^\#.

(So in any case, one cannot get consistency by the determinacy proof).

2. Could you explain a bit more why V = Ultimate-L is attractive?

Shelah has the informal notion of a semi-complete axiom.

V = L is a semi-complete axiom as is AD^{L(\mathbb R)} in the context of L(\mathbb R) etc.

A natural question is whether there is a semi-complete axiom which is consistent with all large cardinals. No example is known.

If the Ultimate L Conjecture is true (provable) then V = Ultimate L is arguably such an axiom and further it is such an axiom which implies V = HOD (being “semi-complete” seems much stronger in the context of V = HOD).

Of course this is not a basis in any way for arguing V = Ultimate L. But it certainly makes it an interesting axiom whose rejection must be based on something equally interesting.

You said: “For me, the “validation” of V = Ultimate L will have to come from the insights V = Ultimate L gives for the hierarchy of large cardinals beyond supercompact.”
But why would those insights disappear if V is, for example, some rich generic extension of Ultimate L? If Jack had proved that 0^\# does not exist I would not favour V = L but rather V = some rich outer model of L.

I think if our evolving understanding of the large cardinal hierarchy rests primarily on the context of V = Ultimate L then very likely the rich generic extensions are not playing much of a role in understanding the large cardinal hierarchy.

This for me would build the case for V = Ultimate L and against these rich extensions. It would then take something quite significant in the theory of the rich extensions to undermine that.

But such speculations seem very premature. We do not even know if the HOD Conjecture is true. If the HOD Conjecture is not true then the entire Ultimate L scenario fails.

3. I told Pen that finding a GCH inner model over which V is generic is a leading open question in set theory. But you gave an argument suggesting that this has to be strengthened. Recently I gave a talk about HOD where I discussed the following four properties of an inner model M:

Genericity: V is a generic extension of M.

Weak Covering: For a proper class of cardinals \alpha, \alpha^+ = \alpha^{+M}.

Rigidity: There is no nontrivial elementary embedding from M to M.

Large Cardinal Witnessing: Any large cardinal property witnessed in V is witnessed in M.

(When 0^\# does not exist, all of these hold for M = L except for Genericity: V need not be class-generic over L. As you know, there has been a lot of work on the case M = \text{HOD}.)

Now I’d like to offer Pen a new “leading open question”. (Of course I could offer the PCF Conjecture, but I would prefer to offer something closer to the discussion we have been having.) It would be great if you and I could agree on one. How about this: Is there an inner model M satisfying GCH together with the above four properties?

Why not just go with the HOD Conjecture? Or the Ultimate L Conjecture?

There is another intriguing problem which has been suggested by this thread.

Suppose \gamma^+ is not correctly computed by HOD for any infinite cardinal \gamma. Must weak square hold at some singular strong limit cardinal?

This looks like a great problem to me and it seems clearly to be a new problem.

Regards,
Hugh

Re: Paper and slides on indefiniteness of CH

Dear Pen and Hugh,

Pen:

Well I said that we covered everything, but I guess I was wrong! A new question for you popped into my head. You said:

The HP works quite differently. There the picture leads the way — the only legitimate evidence is Type 3. As we’ve determined over the months, in this case the picture involved has to be shared, so that it won’t degenerate into ‘Sy’s truth’.

I just realised that I may have misunderstood this.

When it comes to Type 1 evidence (from the practice of set theory as mathematics) we don’t require that opinions about what is “good set theory” be shared (and “the picture” is indeed determined by “good set theory”). As Peter put it:

Different people have different views of what “good set theory” amounts to. There’s little intersubjective agreement. In my view, such a vague notion has no place in a foundational enterprise.

I disagree with the last sentence of this quote (I expect that you do too), but the fact remains that if we don’t require a consensus about “good set theory” then truth does break into (“degenerate into” is inappropriate) “Hugh’s truth”, “Saharon’s truth”, “Stevo’s truth”, “Ronald’s truth” and so on. (Note: I don’t mean to imply that Saharon or Stevo really have opinions about truth, here I only refer to what one reads off from their forms of “good set theory”.) I don’t think that’s bad and see no need for one form of “truth” that “swamps all the others”.

Now when it comes to the HP you insist that there is just one “shared picture”. What do you mean now by “picture”? Is it just the vague idea of a single V which is maximal in terms of its lengthenings and “thickenings”? If so, then I agree that this is the starting point of the HP and should be shared, independently of how the HP develops.

In my mail to you of 31 October I may have misinterpreted you by assuming that by “picture” you meant something sensitive to new developments in the programme. For example, when I moved from a short fat “picture” based on the IMH to a taller one based on the \textsf{IMH}^\#, I thought you were regarding that as a change in “picture”. Let me now assume that I made a mistake, i.e., that the “shared picture” to which you refer is just the vague idea of a single V which is maximal in terms of its lengthenings and “thickenings”.

Now I ask you this: Are you going further and insisting that there must be a consensus about what mathematical consequences this “shared picture” has? That will of course be necessary if the HP is to claim “derivable consequences” of the maximality of V in height and width, and that is indeed my aim with the HP. But what if my aim were more modest, simply to generate “evidence” for axioms based on maximality just as TR generates “evidence” for axioms based on “good set theory”; would you then agree that there is no need for a consensus, just as there is in fact no consensus regarding evidence based on “good set theory”?

In this way one could develop a good analogy between Thin Realism and a gentler form of the HP. In TR one investigates different forms of “good set theory” and as a consequence generates evidence for what is true in the resulting “pictures of V”. In the gentler form of the HP one investigates different forms of “maximality in height and width” to generate evidence for what is true in a “shared picture of V”. In neither case is there the presumption of a consensus concerning the evidence generated (in the original HP there is). This gentler HP would still be valuable, just as generating different forms of evidence in TR is valuable. What it generates will not be “intrinsic to the concept of set” as in the original ambitious form of the HP, but only “intrinsically-based evidence”, a form of evidence generated through an examination of the maximality of V in height and width, rather than by “good set theory”.

Hugh:

1. Your formulation of \textsf{IMH}^\# is almost correct:

M witnesses \textsf{IMH}^\# if

1) M is weakly #-generated.

2) If \phi holds in an outer model of M which is weakly #-generated then \phi holds in an inner model of M.

But as we have to work with theories, 2) has to be: If for each countable \alpha, \phi holds in an outer model of M which is generated by an \alpha-iterable presharp then \phi holds in an inner model of M.

2. Could you explain a bit more why V = Ultimate L is attractive? You said: “For me, the “validation” of V = Ultimate L will have to come from the insights V = Ultimate L gives for the hierarchy of large cardinals beyond supercompact.” But why would those insights disappear if V is, for example, some rich generic extension of Ultimate L? If Jack had proved that 0^\# does not exist I would not favour V = L but rather V = some rich outer model of L.

3. I told Pen that finding a GCH inner model over which V is generic is a leading open question in set theory. But you gave an argument suggesting that this has to be strengthened. Recently I gave a talk about HOD where I discussed the following four properties of an inner model M:

Genericity: V is a generic extension of M.

Weak Covering: For a proper class of cardinals \alpha, \alpha^+ = \alpha^{+M}.

Rigidity: There is no nontrivial elementary embedding from M to M.

Large Cardinal Witnessing: Any large cardinal property witnessed in V is witnessed in M.

(When 0^\# does not exist, all of these hold for M = L except for Genericity: V need not be class-generic over L. As you know, there has been a lot of work on the case M = \text{HOD}.)

Now I’d like to offer Pen a new “leading open question”. (Of course I could offer the PCF Conjecture, but I would prefer to offer something closer to the discussion we have been having.) It would be great if you and I could agree on one. How about this: Is there an inner model M satisfying GCH together with the above four properties?
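Packaged as a single question (nothing here beyond the four properties just listed):

\[
\text{Is there an inner model } M \models \text{GCH such that } V \text{ is generic over } M,\ \alpha^+ = \alpha^{+M} \text{ for a proper class of } \alpha,\ M \text{ is rigid, and every large cardinal property witnessed in } V \text{ is witnessed in } M?
\]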

Thanks,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Sy,

On Nov 3, 2014, at 3:38 AM, Sy David Friedman wrote:

Hugh:

1. The only method I know to obtain the consistency of the maximality criterion I stated involves Prikry-like forcings, which add Weak Squares. Weak Squares contradict supercompactness.

So you think that if the Maximality Criterion holds then weak square holds at some singular strong limit?

3. I was postponing the discussion of the reduction of #-generation to ctm’s (countable transitive models) as long as possible as it is quite technical, but as you raised it again I’ll deal with it now. Recall that in the HP “thickenings” are dealt with via theories. So #-generation really means that for each Gödel lengthening L_\alpha(V) of V, the theory in L_\alpha(V) which expresses that V is generated by a presharp which is \alpha-iterable is consistent. Another way to say this is that for each \alpha, there is an \alpha-iterable presharp which generates V in a forcing extension of L(V) in which \alpha is made countable. For ctm’s this translates to: A ctm M is (weakly) #-generated if for each countable \alpha, M is generated by an \alpha-iterable presharp. This is weaker than the cleaner, original form of #-generation. With this change, one can run the LS argument and regard \textsf{IMH}^\# as a statement about ctm’s. In conclusion: You are right, we can’t apply LS to the raw version of \textsf{IMH}^\#, essentially because #-generation for a (real coding a) countable V is a \Sigma^1_3 property; but weak #-generation is \Pi^1_2 and this is the only change required.

Just to be clear, you are now proposing that \textsf{IMH}^\# is:

M witnesses \textsf{IMH}^\# if

1) M is weakly #-generated.

2) If \phi holds in an outer model of M which is weakly #-generated then \phi holds in an inner model of M.

Here: a ctm K is weakly #-generated if for each countable ordinal \alpha, there is an \alpha-iterable (N,U) whose \text{Ord}^K-iterate gives K.

Is this correct?

Regards, Hugh

Re: Paper and slides on indefiniteness of CH

Looks like I have three roles here.

1. Very lately, some real new content that actually investigates some generally understandable aspects of “intrinsic maximality”. This has led rather nicely to legitimate foundational programs of a generally understandable nature, involving new kinds of investigations into decision procedures in set theory.

2. Attempts to direct the discussion into more productive topics. Recall the persistent subject line of this thread! The last time I tried this, I got a detailed response from Peter which I intended to answer, but put 1 above at a higher priority.

3. And finally, some generally understandable commentary on what is both not generally understandable and having no tangible outcome.

This is a brief dose of 3.

QUOTE FROM BSL PAPER BY MR. ENERGY (jointly authored):

The approach that we present here shares many features, though not all, of Goedel’s program for new axioms. Let us briefly illustrate it. The Hyperuniverse Program is an attempt to clarify which first-order set-theoretic statements (beyond ZFC and its implications) are to be regarded as true in V, by creating a context in which different pictures of the set-theoretic universe can be compared. This context is the hyperuniverse, defined as the collection of all countable transitive models of ZFC.

DIGRESSION: The above seems to accept ZFC as “true in V”, but later discussions raise issues with this, especially with AxC.

So here we have the idiosyncratic propagandistic slogan “HP” for

*Hyperuniverse Program*

And we have the DEFINITION of the hyperuniverse as

**the collection of all countable transitive models of ZFC**

QUOTE FROM THIS MORNING BY MR. ENERGY:

That is why it is quite inappropriate, as you have done on numerous occasions, to refer to the HP as the study of ctm’s, as there is no need to consider ctm’s at all, and even if one does (by applying LS), the properties of ctm’s that result are very special indeed, far more special than what a full-blown theory of ctm’s would entail.

If it is supposed to be “inappropriate to refer to the HP as the study of ctm’s”, and “no need to consider ctm’s at all”, then why coin the term Hyperuniverse Program and then DEFINE the Hyperuniverse as the collection of all countable transitive models of ZFC???

THE SOLUTION (as I suggested many times)

Stop using HP and instead use CTMP = countable transitive model program. Only AFTER something foundationally convincing arises, AFTER working through all kinds of pitfalls carefully and objectively, consider trying to put forth and defend a foundational program.

In the meantime, go for a “full-blown theory of ctm’s” (language from Mr. Energy) so that you at least have something tangible to show for the effort if and when people reject your foundational program(s).

GENERALLY UNDERSTANDABLE AND VERY DIRECT PITFALLS IN USING INTRINSIC MAXIMALITY

It is “obvious” from intrinsic maximality that the GCH fails at all infinite cardinals because of “width considerations”.

This “refutes” the continuum hypothesis. This also “refutes” the existence of (\omega+2)-extendible cardinals, since they imply that the GCH holds at some infinite cardinals (Solovay).

QED

LESSONS TO BE LEARNED

You have to creatively analyze what is wrong with the above use of “intrinsic maximality”, and how it is fundamentally to be distinguished from other uses of “intrinsic maximality” that one is putting forward as legitimate. If this can be done in a suitably creative and convincing way, THEN you have at least the beginnings of a legitimate foundational program. WARNING: if the distinction is drawn too artificially, then you are not creating a legitimate foundational program.

Harvey

Re: Paper and slides on indefiniteness of CH

Dear Peter and Hugh,

Thanks to you both for the valuable comments and your continued interest in the HP. Answers to your questions follow.

Peter:

1. As I said, #-generation was not invented as a “fix” for anything. It was invented as the optimal form of maximality in height. It is the limit of the small large cardinal hierarchy (inaccessibles, Mahlos, weak compacts, \omega-Erdos, (\omega+\omega)-Erdos, … #-generation). A nice feature is that it unifies well with the IMH, as follows: The IMH violates inaccessibles. IMH(inaccessibles) violates Mahlos. IMH(Mahlos) violates weak compacts … IMH(\omega-Erdos) violates (\omega+\omega)-Erdos, … The limit of this chain of principles is the canonical maximality criterion \textsf{IMH}^\#, which is compatible with all small large cardinals, and as an extra bonus, with all large cardinals. It is a rather weak criterion, but becomes significantly stronger even with the tiny change of adding \omega_1 as a parameter (and considering only \omega_1-preserving outer models).
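Displayed as a chain (just the pattern described above, nothing new):

\[
\begin{aligned}
\text{IMH} &\implies \text{no inaccessibles},\\
\text{IMH}(\text{inaccessibles}) &\implies \text{no Mahlos},\\
\text{IMH}(\text{Mahlos}) &\implies \text{no weak compacts},\\
&\ \ \vdots\\
\text{IMH}(\omega\text{-Erdos}) &\implies \text{no } (\omega+\omega)\text{-Erdos},
\end{aligned}
\]

with \textsf{IMH}^\# as the limit of the chain, compatible with all small large cardinals (and, as noted, with all large cardinals).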

2. What I called “Magidor’s embedding reflection” in fact appears in a paper by Victoria Marshall (JSL 54, No.2). As it violates V = L it is not a form of height maximality (the problem is with the internal embeddings involved; if the embeddings are external then one gets a weak form of #-generation). Indeed Marshall Reflection does not appear to be a form of maximality in height or width at all.

You say: “I don’t think that any arguments based on the vague notion of ‘maximality’ provide us with much in the way of justification”. Wow! So you don’t think that inaccessibles are justified on the basis of reflection! Sounds like you’ve been talking to the radical Pen Maddy, who doesn’t believe in any form of intrinsic justification.

3. Here’s the most remarkable part of your message. You say:

“Different people have different views of what ‘good set theory’ amounts to. There’s little intersubjective agreement. In my view, such a vague notion has no place in a foundational enterprise.”

In this thread I have repeatedly and without objection taken Pen’s Thin Realism to be grounded on “good set theory” (or, if looking beyond set theory, on “good mathematics”). So you have now rejected not only the HP, but also Thin Realism. My view is that Pen got it exactly right when it comes to evidence from the practice of set theory; one must only acknowledge that such evidence is limited by the lack of consensus on what “good set theory” means.

You are right to say that there is value to “predictions” and “verifications”. But these only serve to make a “good set theory” better. They don’t really change much, as even if a brand of “good set theory” fails to fulfill one of its “predictions”, it can still maintain its high status. Take the example of Forcing Axioms: They have been and always will be regarded as “good set theory”, even if the “prediction” that you attribute to them fails to materialise.

Peter, your unhesitating rejection of approaches to set-theoretic truth is not helpful. You faulted the HP for not being “well-grounded” as its grounding leans on a consensus regarding the “maximality of V in height and width”. Now you fault Thin Realism (TR) for not being “well-grounded” as its grounding leans on “good set theory”. There is an analogy between TR and the HP: Like Pen’s second philosopher, Max (the Maximality Man) is fascinated by the idea of maximality of V in height and width and he “sets out to discover what the world of maximality is like, the range of what there is to the notion and its various properties and behaviours”. In light of this analogy, it is reasonable that someone who likes Thin Realism would also like the HP and vice-versa. It seems that you reject both, yet fail to provide a good substitute. How can we possibly make progress in our understanding of set-theoretic truth with such skepticism? What I hear from Pen and Hugh is a “wait and see” attitude: they want to know what criteria and consensus come out of the HP. Yet you want to reject the approach out of hand. I don’t get it. Are you a pessimist at heart?

It seems to me that in light of your rejection of both TR and HP, the natural way for you to go is “radical skepticism”, which denies this whole discussion of set-theoretic truth in the first place. (Pen claimed to be a radical skeptic, but I don’t entirely believe it, as she does offer us Thin Realism.) Maybe Pen’s Arealism is your real cup of tea?

OK, let’s return to something we agree on: the lack of consensus regarding “good set theory”, where I have something positive to offer. What this lack of consensus suggests to me is that we should seek further clarification by looking to other forms of evidence, namely Type 2 evidence (what provides the best foundation for math) and Type 3 evidence (what follows from the maximality of V in height and width). The optimistic position (I am an optimist at heart) is that the lack of consensus based solely on Type 1 evidence (coming from set-theoretic practice) could be resolved by favouring those Type 1 axioms which in addition are supported by Type 2 evidence, Type 3 evidence, or both. Forcing Axioms seem to be the best current axioms with both Type 1 and Type 2 support, and perhaps if they are unified in some way with Type 3 evidence (consequences of Maximality) one will arrive at axioms which can be regarded as true. This may even give us a glimmer of hope for resolving CH. But of course that is way premature, as we have so much work to do (on all three types of evidence) that it is impossible to make a reasonable prediction at this point.

To summarise this part: Please don’t reject things out of hand. My suggestion (after having been set straight on a number of key points by Pen) is to try to unify the best of three different approaches (practice, foundations, maximality) and see if we can make real progress that way.

4. With regard to your very entertaining story about K and Max: As I have said, one does not need a radical potentialist view to implement the HP, and I now regret having confessed to it (as opposed to a single-universe view augmented by height potentialism), as it is easy to make a mistake using it, as you have done. I explain: Suppose that “we live in a Hyperuniverse” and our aim is to weed out the “optimal universes”. You suggest that maximality criteria for a given ctm M quantify over the entire Hyperuniverse (“Our quantifiers range over CTM-space.”). This is not true and this is a key point: They are expressible in a first-order way over Gödel lengthenings of M. (By Gödel lengthening I mean an initial segment of the universe L(M) built over M, the constructible universe relative to M.) This even applies to #-generation, as explained below to Hugh. From the height potentialist / width actualist view this is quite clear (V is not countable!) and the only reason that Maximality Criteria can be reflected into the Hyperuniverse (denote this by H to save writing) is that they are expressible in this special way (a tiny fragment of second order set theory). But the converse is false, i.e., properties of a member M of H which are expressible in H (essentially arbitrary second-order properties) need not be of this special form. For example, no height maximal universe M is countable in its Gödel lengthenings, even for a radical potentialist, even though it is surely countable in the Hyperuniverse. Briefly put: From the height potentialist / width actualist view, the reduction to the Hyperuniverse results in a study of only very special properties of ctm’s, only those which result from maximality criteria expressed using lengthenings and “thickenings” of V via Löwenheim-Skolem.

So I was too honest, I should not have admitted to a radical form of multiversism (radical potentialism), as it is then easy to misunderstand the HP as you have. As far as the choice of maximality criteria goes, I can only repeat myself: Please be open-minded and do not prejudge the programme before seeing the criteria that it generates. You will see that our intuitions about maximality criteria are more robust than our intuitions about “good set theory”.

Hugh:

1. The only method I know to obtain the consistency of the maximality criterion I stated involves Prikry-like forcings, which add Weak Squares. Weak Squares contradict supercompactness. In your last mail you verify that stronger maximality criteria do indeed violate supercompactness.

2. A synthesis of LCs with maximality criteria makes no sense until LCs themselves are derived from some form of maximality of V in height and width.

3. I was postponing the discussion of the reduction of #-generation to ctm’s (countable transitive models) as long as possible as it is quite technical, but as you raised it again I’ll deal with it now. Recall that in the HP “thickenings” are dealt with via theories. So #-generation really means that for each Gödel lengthening L_\alpha(V) of V, the theory in L_\alpha(V) which expresses that V is generated by a presharp which is \alpha-iterable is consistent. Another way to say this is that for each \alpha, there is an \alpha-iterable presharp which generates V in a forcing extension of L(V) in which \alpha is made countable. For ctm’s this translates to: A ctm M is (weakly) #-generated if for each countable \alpha, M is generated by an \alpha-iterable presharp. This is weaker than the cleaner, original form of #-generation. With this change, one can run the LS argument and regard \textsf{IMH}^\# as a statement about ctm’s. In conclusion: You are right, we can’t apply LS to the raw version of \textsf{IMH}^\#, essentially because #-generation for a (real coding a) countable V is a \Sigma^1_3 property; but weak #-generation is \Pi^1_2 and this is the only change required.
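In symbols, the change is from the \Sigma^1_3 statement “there is a (fully iterable) presharp generating M” to

\[
M \text{ is weakly \#-generated} \iff \text{for each countable } \alpha,\ \exists\, (N,U)\ \big[(N,U) \text{ is } \alpha\text{-iterable and generates } M\big],
\]

which, for a real coding the countable model M, is \Pi^1_2 as stated above.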

But again, there is no need in the HP to make the move to ctm’s at all; one can always work with theories definable in Gödel lengthenings of V, making no mention of countability. Indeed it seems that the move to ctm’s has led to unfortunate misunderstandings, as I say to Peter above. That is why it is quite inappropriate, as you have done on numerous occasions, to refer to the HP as the study of ctm’s, as there is no need to consider ctm’s at all, and even if one does (by applying LS), the properties of ctm’s that result are very special indeed, far more special than what a full-blown theory of ctm’s would entail.

Thanks again for your comments,
Sy

Re: Paper and slides on indefiniteness of CH

This is a continuation of my earlier message. Recall that I have two titles to this note. You get to pick the title that you want.

REFUTATION OF THE CONTINUUM HYPOTHESIS AND EXTENDIBLE CARDINALS

THE PITFALLS OF CITING “INTRINSIC MAXIMALITY”

1. GENERAL STRATEGY.
2. THE LANGUAGE L_0.
3. STRONGER LANGUAGES.

1. GENERAL STRATEGY

Here we present a way of using the informal idea of “intrinsic maximality of the set theoretic universe” to do two things:

1. Refute the continuum hypothesis (using PD and less).
2. Refute the existence of extendible cardinals (in ZFC).

Quite a tall order!

Since I am not that comfortable with “intrinsic maximality”, I am happy to view this for the time being as an additional reason to be even less comfortable.

At least I will resist announcing that I have refuted both the continuum hypothesis and the existence of certain extensively studied large cardinals!

INFORMAL HYPOTHESIS. Let \phi(x,y,z) be a simple property of sets x,y,z. Suppose ZFC + “for all infinite x, there exist infinitely many distinct sets which are pairwise incomparable under \phi(x,y,z)” is consistent. Then for all infinite x, there exist infinitely many distinct sets which are pairwise incomparable under \phi(x,y,z).

Since we are going to be considering only very simple properties, we allow for more flexibility.

INFORMAL HYPOTHESIS. Let 0 \leq n,m \leq \omega. Let \phi(x,y,z) be a simple property of sets x,y,z. Suppose ZFC + “for all x with at least n elements, there exist m distinct sets which are pairwise incomparable under \phi(x,y,z)” is consistent. Then for all x with at least n elements, there exist at least m distinct sets which are pairwise incomparable under \phi(x,y,z).

We can view the above as reflecting the “intrinsic maximality of the set theoretic universe”.

We will see that this Informal Hypothesis leads to “refutations” of both the continuum hypothesis and the existence of certain large cardinals, even using very primitive \phi in very primitive set theoretic languages.

2. THE LANGUAGE L_0

L_0 has variables over sets, =, <, \leq^*, \cup. Here =, <, \leq^* are binary relation symbols, and \cup is a unary function symbol. x \leq^* y is interpreted as “there exists a function from x onto y“. \cup is the usual union operator, \cup x being the set of all elements of elements of x.

\text{MAX}(L_0,n,m). Let 0 \leq n,m \leq \omega. Let \phi(x,y,z) be the conjunction of finitely many formulas of L_0 in variables x,y,z. Suppose ZFC + “for all x with at least n elements, there exist m distinct sets which are pairwise incomparable under \phi(x,y,z)” is consistent. Then for all x with at least n elements, there exist at least m distinct sets which are pairwise incomparable under \phi(x,y,z).
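For orientation, here is one illustrative toy instance (for concreteness only; not one of the formulas behind the theorems below). Take \phi(x,y,z) to be the conjunction

\[
x \leq^* y \ \wedge\ x \leq^* z \ \wedge\ \neg(y \leq^* z) \ \wedge\ \neg(z \leq^* y).
\]

Read this way, \text{MAX}(L_0,\omega,\omega) for this \phi says: if it is consistent with ZFC that every infinite x has infinitely many pairwise \leq^*-incomparable surjective images, then every infinite x actually has infinitely many such images.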

THEOREM 2.1. ZFC + \text{MAX}(L_0,\omega,\omega) proves that there is no (\omega+2)-extendible cardinal.

More generally, we have

THEOREM 2.2. Let 2 < \log(m)+1 < n \leq \omega.

i. ZFC + \text{MAX}(L_0,n,m) proves that there is no (\omega+2)-extendible cardinal. Here \log(\omega) = \omega.
ii. ZFC + PD + \text{MAX}(L_0,n,m) proves that the GCH fails at all infinite cardinals. In particular, it refutes the continuum hypothesis.
iii. ii with PD replaced by higher order measurable cardinals in the sense of Mitchell.

We are morally certain that we can easily get a complete understanding of the meaning of the sentences in quotes that arise in the \text{MAX}(L_0,n,m).

Write \text{MAX}(L_0) for

“For all 0 \leq n,m \leq \omega, \text{MAX}(L_0,n,m)”. Using such a complete understanding we should be able to establish that ZFC + \text{MAX}(L_0) is a “good theory”. E.g., such things as

  1. ZFC + PD + \text{MAX}(L_0) is equiconsistent with ZFC + PD.
  2. ZFC + PD + \text{MAX}(L_0) is conservative over ZFC + PD for sentences of second order arithmetic.
  3. ZFC + PD + \text{MAX}(L_0) + “there is a proper class of measurable cardinals” is also conservative over ZFC + PD for sentences of second order arithmetic.

We will revisit this development after we have gained that complete understanding. Then we will go beyond finite conjunctions of atomic formulas in L_0.

The key technical ingredient in this development is the fact that

1. “GCH fails at all infinite cardinals” is incompatible with (\omega+2)-extendible cardinals (Solovay).
2. “GCH fails at all infinite cardinals” is demonstrably consistent using much weaker large cardinals, or using just PD (Foreman/Woodin).

Harvey

Re: Paper and slides on indefiniteness of CH

Dear Sy,

I think I now have a much better grip on the picture you are working with. This letter is an attempt to sum things up — both mathematical and philosophical — and express my misgivings, in what I hope you will take to be a light-hearted and friendly manner.

Let me start with your radical potentialism. To set the stage let me recap.

In my letter on Oct. 26 I carefully laid out the varieties of potentialism and actualism and asked which version you held. You answered on Oct. 26 and I was pretty sure that I understood. But I wanted to be sure so I asked for confirmation. You confirmed that in your P.S. (of a letter on a different topic) on Oct. 26:

PS: In answer to an earlier question, I am indeed naturally inclined to think in terms of the stronger form of radical potentialism. Indeed I do think that, as with height actualism, there are arguments to suggest that the weaker form of radical potentialism without the stronger form is untenable.

Here “the stronger form of radical potentialism” was the one I explicitly asked for confirmation on. To be clear: You endorse the strong form of radical potentialism according to which for every transitive model of ZFC there is an actual extension (meaning an actual lengthening and/or thickening) in which that model is actually seen to be countable.

So, on this view, everything is ultimately (and actually) countable. Thus, on this view, we actually live in the hyperuniverse, the space of countable transitive models of ZFC. That’s all there is. That’s our world.

This is very close to Skolem’s view. He took it to entail that set theory had evaporated, which is why I used that phrase. But you do not. Why? Because one can still do set theory in this limited world. How?

This brings us back to my original questions about your “dual use of ‘V'”, at times for little-V’s (countable transitive models of ZFC) and at other times for “the real thing”, what I called SUPER-V (to disambiguate the notation). I had originally thought that your view was this: There is SUPER-V. Actualism holds with regard to SUPER-V. There are no actual lengthenings or thickenings of SUPER-V, only virtual ones. Everything takes place in SUPER-V. It is the guide to all our claims about “intrinsic justifications on the basis of the “maximal” iterative conception of set” (subsequently demoted to “intrinsically motivated (heuristically) on the basis of the ‘maximal’ iterative conception of set”). By appealing to the downward Löwenheim-Skolem theorem I thought you were arguing that without loss of generality we could gain insight into SUPER-V by investigating the space of countable transitive models of ZFC.

The virtue of that picture (which I erroneously thought you held) is that you would have something to hang your hat on — SUPER-V — something to cash out the intuitions that you claimed about the “‘maximal’ iterative conception of set”. The drawback was that it was hard to see (for me at least) how we could gain insight into SUPER-V (which had no actual lengthenings or thickenings) by investigating countable transitive models of ZFC (which do!).

But that is all neither here nor there, since that is not your view. Your view is far more radical. There is just the hyperuniverse, the space of countable transitive models of ZFC. There is no need for appeal to the Löwenheim-Skolem theorem, since everything is countable!


I now have a much better grip on the background philosophical picture. This is what I suspected all along, and that is why I have been pressing you on these matters.

I want now to examine this world view — to take it seriously and elaborate its consequences. To do that I will follow your lead with Max and tell a story. The story is below, outside the main body of this letter.

Best,
Peter


Let me introduce K. He has a history of getting into situations like this.

Let us enter the hyperuniverse…

K awakes. He looks around. He is surrounded by countable transitive models of ZFC. Nothing else.

How did I get here? Why are all these countable transitive models of ZFC kicking around? Why not just countable transitive models of ZFC - Replacement + \Sigma_2-Replacement? Why not anything else?

K takes a stroll in this strange universe, trying to get his bearings. All of the models he encounters are transitive models of ZFC. He encounters some that satisfy V = L, some that satisfy \textsf{PD}, some that satisfy \textsf{MM}^{++}, etc. But for every model he encounters he finds another model in which the previous model is witnessed to be countable.

He thinks: “I must be dreaming. I have fallen through layers of sleep into the world of countable transitive models of ZFC. The reason all of these countable transitive models of V = L, \textsf{PD}, \textsf{PFA}, etc. are kicking around is that these statements are \beta-consistent, something I know from my experience with the outer world. In the outer world, before my fall, I was not wedded to the idea that there was a SUPER-V — I was open minded about that. But I was confident that there was a genuine distinction between the countable and the uncountable. And now, through the fall, I have landed in the world of the countable transitive models of ZFC. The uncountable models are still out there — everything down here derives from what lies up there.”

At this point a voice is heard from the void…

S: No! You are not dreaming — you have not fallen. There is no outer world. This is all that there is. Everything is indeed countable.

K: What? Are you telling me set theory has evaporated?

S: No. Set theory is alive and well.

K: But all of the models around here are countable, as witnessed by other models around here. That violates Cantor’s theorem. So, set theory has evaporated.

S: No! Set theory is alive and well. You must look at set theory in the right way. You must redirect your vision. Attend not to the array of all that you see around you. Attend to what holds inside the various models. After all, they all satisfy ZFC — so Cantor’s Theorem holds. And you must further restrict your attention, not just to any old model but to the optimal ones.

K: Hold on a minute. I see that Cantor’s Theorem holds in each of the models around here. But it doesn’t really hold. After all, everything is countable!

S: No, no, you are confused.

[K closes his eyes...]

S: Hey! What are you doing?

K: I’m trying to wake up.

S: Wait! Just stay a while. Give it a chance. It’s a nice place. You’ll learn to love it. Let me make things easier. Let me introduce Max. He will guide you around.

[Max materializes.]

Max takes K on a tour, to all the great sites — the “optimal” countable transitive models of ZFC. He tries to give K a sense of how to locate these, so that one day he too might become a tour guide. Max tells K that the guide to locating these is “maximality”, with regard to both “thickenings” and “lengthenings”.

K: I see, so like forcing axioms (for “thickenings”) and the resemblance principles of Magidor and Bagaria (for “lengthenings”)?

Max: No, no, not that. The “optimal” models are “maximal” in a different sense. Let me try to convey this sense.

Let’s start with IMH. But bear in mind this is just a first approximation. It will turn out to have problems. The goal is to investigate the various principles that are suggested by “maximality” (as a kind of “intrinsic heuristic”) in the hope that we will achieve convergence and find the true principles of “maximality” that enable us to locate the “optimal” universes.

[Insert description of IMH. Let "CTM-Space" be the space of countable transitive models of ZFC. In short, let "CTM-Space" be the world that K has fallen into.]
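[For reference, a minimal sketch of the statement that would go here, following the formulation used elsewhere in this correspondence, where an "outer model" of V is a W \in \text{CTM-Space} with V \subseteq W and the same ordinals, and an "inner model" of V is a transitive model of ZFC contained in V with the same ordinals as V:

\textsf{IMH}: For every first-order sentence \varphi, if \varphi holds in some outer model of V, then \varphi holds in some inner model of V.]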

K: I see why you said that there would be problems with IMH: If V\in \text{CTM-Space} satisfies IMH, then V contains a real x such that V satisfies “For every transitive model M of ZFC, x is not in M”; in particular, there is no rank initial segment of V that satisfies ZFC (any such segment would be a transitive model of ZFC containing the real x), and so such a V cannot contain an inaccessible cardinal. In fact, every ordinal of such a V is definable from x. So such a V is “humiliated” in a dramatic fashion by a real x within it.

Max: I know. Like I said, it was just a first approximation. IMH is, as you observe, incompatible with inaccessible cardinals in a dramatic way. I was just trying to illustrate the sense of “width maximality” that we are trying to articulate. Now we have to simultaneously incorporate “height maximality”. We do this in terms of #-generation.

[Insert description of #-generation and \textsf{IMH}^\#]
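[Again a rough sketch, extrapolating from the earlier letters: a generator for V is an iterable presharp whose iteration yields V as the union of the lower parts of its iterates; V is #-generated if it has a fully iterable generator, and weakly #-generated if for each countable \alpha it has an \alpha-iterable one. Then, roughly:

\textsf{IMH}^\#: V is #-generated, and for every \varphi: if \varphi holds in some #-generated outer model of V, then \varphi holds in some inner model of V.]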

K: I have a bunch of problems with this. First, I don’t see how you arrive at #-generation. You use the # to generate the model but then you ignore the #.

Second, there is a trivial consistency proof of \textsf{IMH}^\# but it shows even more, namely, this: Assume that for every real x, x^\# exists. Then there is a real x_0 such that for any V\in \text{CTM-Space} containing x_0, V satisfies Extreme-\textsf{IMH}^\# in the following sense: if \varphi holds in any #-generated model N (whether it is an outer extension of V or not) then \varphi holds in an inner model of V. So what is really going on has nothing to do with outer models — it is much more general than that. This gives not just the compatibility of \textsf{IMH}^\# with all standard large cardinals but also with all choiceless large cardinals.
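[Compactly, K's claim is: assuming every real has a sharp, there is a real x_0 such that every V \in \text{CTM-Space} containing x_0 satisfies

Extreme-\textsf{IMH}^\#: for all \varphi, if \varphi holds in some #-generated N \in \text{CTM-Space} (an outer model of V or not), then \varphi holds in an inner model of V.]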

[It should be clear at this point (and more so below) that despite these new and strange circumstances K is still able to access H.]

Third, there is a problem of articulation. The property of being a model V\in \text{CTM-Space} which satisfies \textsf{IMH}^\# is a \Pi_3 property over the entire space \text{CTM-Space}. When I was living in the outer world (where I could see \text{CTM-Space} as a set) I could articulate that property and thereby locate the V’s in \text{CTM-Space} that satisfy \textsf{IMH}^\#. But how can you (we) do that down here? If you really believe that the \Pi_3 property over the space \text{CTM-Space} is a legitimate property then you are granting that the domain \text{CTM-Space} is a determinate domain (to make sense of the determinateness of the alternating quantifiers in the \Pi_3 property). But if you believe that \text{CTM-Space} is a determinate domain then why can’t you just take the union of all the models in \text{CTM-Space} to form a set? Of course, that union will not satisfy ZFC. But my point is that by your lights it should make perfect sense, in which case you transcend this world. In short, you can only locate the models V\in \text{CTM-Space} that satisfy \textsf{IMH}^\# by popping outside of \text{CTM-Space}!
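[To spell out K's complexity count, using the weak form of \textsf{IMH}^\# for countable models from earlier in this correspondence: "V satisfies \textsf{IMH}^\#" amounts, roughly, to V being weakly #-generated together with

\forall \varphi\, [\,\forall g\, \exists M \in \text{CTM-Space}\, (M \text{ an outer model of } V \text{ with a generator at least as iterable as } g \text{ and } M \models \varphi) \rightarrow \exists N\, (N \text{ an inner model of } V \text{ with } N \models \varphi)\,],

and the \forall\exists hypothesis under the outer universal quantifier is what yields a \Pi_3 prefix over \text{CTM-Space}.]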

Max: Man, are you ever cranky…

K: I’m just a little lost and feeling homesick. What about you? How did you get here? I have the sense that you got here the same way I did and that (in your talk of \textsf{IMH}^\#) you still have one foot in the outer world.

Max: No, I was born here. Bear with me. Like I said, we are just getting started. Let’s move on to \textsf{SIMH}^\#.

K: Wait a second. Before we do that can you tell me something?

Max: Sure.

K: We are standing in \text{CTM-Space}, right?

Max: Right.

K: And from this standpoint we are proving things about various principles that hold in various V’s in \text{CTM-Space}, right?

Max: Right.

K: What theory are we allowed to use in proving these results about things in \text{CTM-Space}? We are standing here, in \text{CTM-Space}. Our quantifiers range over \text{CTM-Space}. Surely we should be using a theory of \text{CTM-Space}. But we have been using ZFC and that doesn’t hold in \text{CTM-Space}. Of course, it holds in every V in \text{CTM-Space} but it does not hold in \text{CTM-Space} itself. We should be using a theory that holds in \text{CTM-Space} if we are to prove things about the objects in \text{CTM-Space}. What is that theory?

Max: Hmm … I see the point … Actually! … Maybe we are really in one of the V’s in \text{CTM-Space}! This \text{CTM-Space} is itself in one of the V’s of a larger \text{CTM-Space} …

K (to himself): This is getting trippy.

Max: … Yes, that way we can invoke ZFC.

K: But it doesn’t make sense. Everything around us is countable and that isn’t true of any V in any \text{CTM-Space}.

Max: Good point. Well maybe we are in the \text{CTM-Space} of a V that is itself in the \text{CTM-Space} of a larger V.

K: But that doesn’t work either. For then we can’t help ourselves to ZFC. Sure, it holds in the V in whose \text{CTM-Space} we are locked. But it doesn’t hold here! You seem to want to have it both ways — you want to help yourself to ZFC while living in a \text{CTM-Space}.

Max: Let me get back to you on that one … Can we move on to \textsf{SIMH}^\#?

K: Sure.

[Insert a description of the two versions of \textsf{SIMH}^\#. Let us call these Strong-\textsf{SIMH}^\# and Weak-\textsf{SIMH}^\#. Strong-\textsf{SIMH}^\# is based on the unification of \textsf{SIMH} (as formulated in the 2006 BSL paper) with #-generation, where one restricts to #-generated models. (See Hugh's letter of 10/13/14 for details.) Weak-\textsf{SIMH}^\# is the version where one restricts to cardinal-preserving outer models.]
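[For orientation only, a hedged gloss of the shared template, deferring to the cited letters for the precise statements. Both versions allow parameters, in the spirit of \textsf{SIMH}, asserting roughly:

if \varphi(p) holds, with suitably absolute parameters p, in an appropriate outer model M of V, then \varphi(p) holds in an inner model of V,

where "appropriate" means #-generated for Strong-\textsf{SIMH}^\# and cardinal-preserving for Weak-\textsf{SIMH}^\#.]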

K: Well, Strong-\textsf{SIMH}^\# is not known to be consistent. It does imply \textsf{IMH}^\# (so the ‘S’ makes sense). But it also strongly denies large cardinals. In fact, it implies that there is a real x such that \omega_1^{L[x]}=\omega_1, and hence that x^\# doesn’t exist (if x^\# existed, \omega_1 would be inaccessible in L[x], making \omega_1^{L[x]} countable)! So that’s no good.

Weak-\textsf{SIMH}^\# is not known to be consistent, either. Moreover, it is not known to imply \textsf{IMH}^\# (so why the ‘S’?). It is true that it implies not-CH (trivially). But we cannot do anything with it, since very little is known about building cardinal-preserving outer models over an arbitrary initial model.

Max: Like I said, we are just getting started.

[Max goes on to describe various principles concerning HOD that are supposed to follow from "maximality". K proves to be equally "difficult".]

K: OK, let’s back up. What is our guide? I’ve lost my compass. I don’t have a grip on this sense of “maximality” that you are trying to convey to me. If you want to teach me how to be a tour guide and locate the “optimal” models, I need something to guide me.

Max: You do have something to guide you, namely, the “‘maximal’ iterative conception of set”!

K: Well, I certainly understand the “iterative conception of set” but when we fell into \text{CTM-Space} we gave up on that. After all, every model here is witnessed to be countable in another model. Everything is countable! That flies in the face of the “iterative conception of set”, a conception that was supposed to give us ZFC, which doesn’t hold here in \text{CTM-Space}.

Max: No, no. You are looking at things the wrong way. You are fixated on \text{CTM-Space}. You have to direct your attention to the models within it. You have to think about things differently. You see, in this new way of looking at things, to say that a statement \varphi is true (in this new sense) is not to say that it holds in some V in \text{CTM-Space}; and it is not to say that it holds in all V’s in \text{CTM-Space}; rather it is to say that it holds in all of the “optimal” V’s in \text{CTM-Space}. This is our new conception of truth: We declare \varphi to be true if and only if it holds in all of the “optimal” V’s in \text{CTM-Space}. For example, if we want to determine whether CH is true (in this new sense) we have to determine whether it holds in all of the “optimal” V’s in \text{CTM-Space}. If it holds in all of them, it is true (in this new sense); if it fails in all of them, it is false (in this new sense); and if it holds in some but not in others, then it is neither true nor false (in this new sense). Got it?
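[Compactly, writing \text{Opt} for the class of "optimal" models in \text{CTM-Space} (notation introduced just for this restatement):

\varphi \text{ is true} \iff \forall V \in \text{Opt}\, (V \models \varphi); \qquad \varphi \text{ is false} \iff \forall V \in \text{Opt}\, (V \models \neg\varphi);

and \varphi is neither true nor false if it holds in some but not all members of \text{Opt}.]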

K: Yeah, I got it. But you are introducing deviant notions. This is no longer about the “iterative conception of set” in the straightforward sense and it is no longer about truth in the straightforward sense. But let me go along with it, employing these deviant notions and explaining why I think that they are problematic.

It was disconcerting enough falling into \text{CTM-Space}. Now you are asking me to fall once again, into the “optimal” models in \text{CTM-Space}. You are asking me to, as it were, “thread my way” through the “optimal” models, look at what holds across them, and embrace those statements as true (in this new, deviant sense).

I have two problems with this:

First, this whole investigation of principles — like \textsf{IMH}, \textsf{IMH}^\#, Strong-\textsf{SIMH}^\#, Weak-\textsf{SIMH}^\#, etc. — has taken place in \text{CTM-Space}. (We don’t have ZFC here, but we are setting that aside. You are going to get back to me on that.) The trouble is that you are asking me to simultaneously view things from inside the “optimal” models (to “thread my way through the ‘optimal’ models”) via principles that make reference to what lies outside of those models (things like actual outer extensions). In order for me to make sense of those principles I have to occupy this external standpoint, standing squarely in \text{CTM-Space}. But if I do that then I can see that none of these “optimal” models are the genuine article. It is fine for you to introduce this deviant notion of truth — truth (in this sense) being what holds across the “optimal” models. But to make sense of it I have to stand right here, in \text{CTM-Space}, and access truth in the straightforward sense — truth in \text{CTM-Space}. And those truths (the ones required to make sense of your principles) undermine the “optimal” models since, e.g., those truths reveal that the “optimal” models are countable!

But let us set that aside. (I have set many things aside already, so why stop here?) There is a second problem. Even if I were to embrace this new conception of truth — as what holds across the “optimal” models in \text{CTM-Space} — I am not sure what it is that I would be embracing. For this new conception of truth makes reference to the notion of an “optimal” model in \text{CTM-Space}, and that notion is totally vague. It follows that this new notion of truth is totally vague.

You have referred to a specific sense of “maximality” but I don’t have clear intuitions about the notion you have in mind. And the track record of the principles that you claimed to generate from this notion is, well, pretty bad, and doesn’t encourage me in thinking that there is indeed a clear underlying conception.

Tell me Max: How do you do it? How do you get around? What is your compass? How are you able to locate the “optimal” models? How are you able to get a grip on this specific notion of “maximality”?

Max: That’s easy. I just ask S!

[With those words, K awoke. No one knows what became of Max.]

THE END