Tag Archives: Ultimate L

Re: Paper and slides on indefiniteness of CH

Dear Hugh and Pen,

Hugh:

1. You proposed:

Coding Assumption: if M is weakly #-generated then M can be coded by a real in an outer model which is weakly #-generated

I can’t see why this would be true. One needs \alpha-iterable presharps for each \alpha to witness the weak #-generation of M, and although each of these presharps can be preserved by some real coding M, there is no single real that does this for all \alpha simultaneously.

Instead, I realise that the theory-version of \textsf{IMH}^\# results in a statement for countable models which is a bit weaker than what I said. So I have to change the formulation of \textsf{IMH}^\# again! (Peter, before you go crazy, let me again emphasize that this is how the HP works: We make an investigation of maximality criteria and only through a lot of math and contemplation do we start to understand what is really going on. It requires time and patience.)

OK, the theory version would say: #-generation for V is consistent in V-logic (formulated in any lengthening of V) and, for every \phi, the theory in V-logic which says that V is #-generated and that \phi holds in a #-generated outer model M of V proves that \phi holds in an inner model of V.

What this translates to for a countable model V is then this:

(*) V is weakly #-generated and for all \phi: Suppose that whenever g is a generator for V (iterable at least to the height of V), \phi holds in an outer model M of V with a generator which is at least as iterable as g. Then \phi holds in an inner model of V.
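Schematically (just a restatement of (*), with g ranging over generators for V that are iterable at least to the height of V, and h \geq g abbreviating “h is at least as iterable as g”):

\[
\forall \phi\;\Bigl[\;\forall g\,\exists M\,\exists h\;\bigl(M \text{ is an outer model of } V \wedge h \text{ generates } M \wedge h \geq g \wedge M \models \phi\bigr)\ \Longrightarrow\ \exists N\;\bigl(N \text{ an inner model of } V \wedge N \models \phi\bigr)\;\Bigr]
\]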

For each \phi the above hypothesis implies that for each countable \alpha, \phi holds in an outer model of V with an \alpha-iterable generator. But if V is in fact fully #-generated then the hypothesis implies that \phi holds in an outer model of V which is also fully #-generated. So now we get consistency just like we did for the original oversimplified form of the \textsf{IMH}^\# for countable models.

2. You said:

I think if our evolving understanding of the large cardinal hierarchy rests primarily on the context of V = Ultimate L then very likely the rich generic extensions are not playing much of a role in understanding the large cardinal hierarchy … This for me would build the case for V = Ultimate L and against these rich extensions. It would then take something quite significant in the theory of the rich extensions to undermine that.

Sorry, I still don’t get it. Forcing extensions of L don’t play much of a role in understanding small large cardinals, do they? Yet if 0^\# provably does not exist I don’t see the argument for V = L; in fact I don’t even see the argument for CH. Now why wouldn’t you favour something like “V is a forcing extension of Ultimate L which satisfies MM” or something like that?

3. The problem

(*) Suppose \gamma^+ is not correctly computed by HOD for any infinite cardinal \gamma. Must weak square hold at some singular strong limit cardinal?

actually grew out of my recent AIM visit with Cummings, Magidor, Rinot and Sinapova. We showed that the successor of a singular strong limit \kappa of cofinality \omega can be large in HOD, and I started asking about Weak Square. It holds at \kappa in our model.

Pen:

You have caved in to Peter’s P’s and V’s (Predictions and Verifications)!

Peter wrote:

The notion of “good set theory” is too vague to do much work here. Different people have different views of what “good set theory” amounts to. There’s little intersubjective agreement. In my view, such a vague notion has no place in a foundational enterprise. The key notion is evidence, evidence of a form that people can agree on.

Then you said:

I probably should have stepped in at the time to remark that I’ve been using the term ‘good set theory’ for the set theory that enjoys the sort of evidence Peter is referring to here …

Now I want to defend “Defending”! There you describe “the objective reality that set-theoretic methods track” in terms of “depth, fruitfulness, effectiveness, importance and so on” (each with the prefix “mathematical”); there is no mention of P’s and V’s. And I think you got it dead right there, so why back off now? I want my former Pen back!

As I said, I do agree that P’s and V’s are of value; they make a “good set theory” better. But they are not the be-all and end-all of “good set theory”! For a set theory to be good there is no need for it to make “verified predictions”; look at Forcing Axioms. And do you really think that they solve the “consensus” problem? Isn’t there also a lack of consensus about what predictions a theory should make and how devastating it is to the theory if they are not verified? Do you really think that we working set-theorists are busily making “predictions”, hopefully before Inspector Koellner of the Set Theory Council makes his annual visit to see how we are doing with their “verification”?

Pen, I really think you’ve made a wrong turn here. You were basing your Thin Realism very sensibly on what set-theorists actually do in their practice, what they think is important, what will lead to exciting new developments. P’s and V’s are a side issue, sometimes of value but surely not central to the practice of “good set theory”.

There is another point. Wouldn’t you want a discussion of truth in set theory to be receptive to what is going on in the rest of mathematics? Everyone keeps ignoring this point in this thread, despite my repeated attempts to bring it forward. Does a functional analyst or algebraist care about Ultimate L or the HP? Of course not! They might laugh if they were to hear about the arguments that we have been having, which for them are just esoteric and quite irrelevant to mathematics as a whole. Forcing Axioms can at least lay a claim to being really useful both for set theory and for other areas of mathematics; surely they have to be part of a theory of truth. Anyone who makes claims about set-theoretic truth, be it via Ultimate L or the HP or anything else, while ignoring them is missing something important. And won’t it be embarrassing if, 100 years from now, set-theorists announce that they have finally figured out what the “correct axioms for set theory” are, and mathematicians from other fields don’t care because the “new and true axioms” are either quite useless for what they are doing or even conflict with the axioms that they would like to have for their own “good mathematics”?

Hence my conclusion is that the only sensible move for us to make is to gather evidence from all three sources: set theory as an exciting and rapidly-developing branch of math, set theory as a useful foundation for math, and the evidence we can draw from the concept of set itself via the maximality of V in height and width. None of these three types (which I have labelled as Types 1, 2 and 3, respectively) should be ignored. And we must also recognise that the procedure for uncovering evidence of these three types depends heavily on the type in question. “Defending” (even without P’s and V’s) teaches us how the process works in Type 1. For Type 2 we have to get into the trenches and see what the weapons being used in core mathematics are, and how we can help when independence infiltrates. For Type 3 it has to be what I am doing: an open-minded, sometimes sloppy and constantly changing (at least at the start) “shotgun approach” to investigating maximality criteria, with the optimistic and determined aim of seeing a clear picture after a lot of very hard work is accomplished. The math is very challenging and, as you have seen, it is even hard to get things formulated properly. But I have lost patience with, and will now ignore, all complaints that “it cannot be done”, complaints based on nothing more than unjustified pessimism.

Yes, there is a lack of consensus regarding “good set theory”. But Peter is plain wrong to say that it has “no place in a foundational enterprise”. It has a very important place, but to reach a consensus about what the “correct” axioms of set theory should be, the evidence from “good set theory” must be augmented, not just by P’s and V’s but also by other forms of evidence coming from math outside of set theory and from the study of the maximality of V in height and width.

Pen, I would value your open-minded views on this. I hope that you are not also going to reduce “good set theory” to P’s and V’s and complain that the HP “cannot be done”.

Thanks, Sy

Re: Paper and slides on indefiniteness of CH

Dear Sy,

On Nov 5, 2014, at 7:40 AM, Sy David Friedman wrote:

Hugh:

1. Your formulation of \textsf{IMH}^\# is almost correct:

M witnesses \textsf{IMH}^\# if

1) M is weakly #-generated.

2) If \phi holds in an outer model of M which is weakly #-generated then \phi holds in an inner model of M.

But as we have to work with theories, 2) has to be: If for each countable \alpha, \phi holds in an outer model of M which is generated by an \alpha-iterable presharp then \phi holds in an inner model of M.

Let’s call this New-\textsf{IMH}^\#.
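To display the difference (restating the two clauses just given, with nothing added):

\[
\begin{array}{ll}
(2) & \phi \text{ holds in some weakly \#-generated outer model of } M \ \Longrightarrow\ \phi \text{ holds in an inner model of } M.\\[2pt]
(2') & \bigl(\text{for each countable } \alpha,\ \phi \text{ holds in some outer model of } M \text{ generated by an } \alpha\text{-iterable presharp}\bigr)\\
& \qquad \Longrightarrow\ \phi \text{ holds in an inner model of } M.
\end{array}
\]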

Are you sure this is consistent?

Assume coding works in the weakly #-generated context:

Coding Assumption: if M is weakly #-generated then M can be coded by a real in an outer model which is weakly #-generated.

Then:

Theorem. Assume PD. Then there is a real x such that for all ctm M, if x is in M then M does not satisfy New-\textsf{IMH}^\#.

(So in any case, one cannot get consistency by the determinacy proof).

2. Could you explain a bit more why V = Ultimate-L is attractive?

Shelah has the informal notion of a semi-complete axiom.

V = L is a semi-complete axiom, as is \text{AD}^{L(\mathbb R)} in the context of L(\mathbb R), etc.

A natural question is whether there is a semi-complete axiom which is consistent with all large cardinals. No example is known.

If the Ultimate L Conjecture is true (provable) then V = Ultimate L is arguably such an axiom and further it is such an axiom which implies V = HOD (being “semi-complete” seems much stronger in the context of V = HOD).

Of course this is not a basis in any way for arguing V = Ultimate L. But it certainly makes it an interesting axiom whose rejection must be based on something equally interesting.

You said: “For me, the ‘validation’ of V = Ultimate L will have to come from the insights V = Ultimate L gives for the hierarchy of large cardinals beyond supercompact.”

But why would those insights disappear if V is, for example, some rich generic extension of Ultimate L? If Jack had proved that 0^\# does not exist I would not favour V = L but rather V = some rich outer model of L.

I think if our evolving understanding of the large cardinal hierarchy rests primarily on the context of V = Ultimate L then very likely the rich generic extensions are not playing much of a role in understanding the large cardinal hierarchy.

This for me would build the case for V = Ultimate L and against these rich extensions. It would then take something quite significant in the theory of the rich extensions to undermine that.

But such speculations seem very premature. We do not even know if the HOD Conjecture is true. If the HOD Conjecture is not true then the entire Ultimate L scenario fails.

3. I told Pen that finding a GCH inner model over which V is generic is a leading open question in set theory. But you gave an argument suggesting that this has to be strengthened. Recently I gave a talk about HOD where I discussed the following four properties of an inner model M:

Genericity: V is a generic extension of M.

Weak Covering: For a proper class of cardinals \alpha, \alpha^+ = \alpha^{+M}.

Rigidity: There is no nontrivial elementary embedding from M to M.

Large Cardinal Witnessing: Any large cardinal property witnessed in V is witnessed in M.

(When 0^\# does not exist, all of these hold for M = L except for Genericity: V need not be class-generic over L. As you know, there has been a lot of work on the case M = HOD.)

Now I’d like to offer Pen a new “leading open question”. (Of course I could offer the PCF Conjecture, but I would prefer to offer something closer to the discussion we have been having.) It would be great if you and I could agree on one. How about this: Is there an inner model M satisfying GCH together with the above four properties?
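In one line (a restatement of the question, with the four properties spelled out as above):

\[
\exists M\ \Bigl[\,M \models \mathrm{GCH} \wedge V \text{ is a generic extension of } M \wedge \{\alpha : \alpha^{+M} = \alpha^+\} \text{ is a proper class} \wedge \text{there is no nontrivial elementary } j : M \to M \wedge \text{every large cardinal property witnessed in } V \text{ is witnessed in } M\,\Bigr]\,?
\]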

Why not just go with the HOD Conjecture? Or the Ultimate L Conjecture?

There is another intriguing problem which has been suggested by this thread.

Suppose \gamma^+ is not correctly computed by HOD for any infinite cardinal \gamma. Must weak square hold at some singular strong limit cardinal?
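In symbols (a restatement, reading “weak square holds at \lambda” as Jensen’s \square^*_\lambda, the standard usage, and noting that (\gamma^+)^{\mathrm{HOD}} \leq \gamma^+ always holds):

\[
\bigl(\forall \text{ infinite cardinals } \gamma:\ (\gamma^+)^{\mathrm{HOD}} < \gamma^+\bigr)\ \Longrightarrow\ \exists \lambda\ \bigl(\lambda \text{ a singular strong limit cardinal} \wedge \square^*_\lambda\bigr)\,?
\]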

This looks like a great problem to me and it seems clearly to be a new problem.

Regards,
Hugh

Re: Paper and slides on indefiniteness of CH

Dear Sy,

Pen and Peter, can you please help here? Pen hit me very hard for developing what could be regarded as “Sy’s personal theory of truth” and it seems to me that we now have “Hugh’s personal theory of truth”, i.e., when Hugh develops a powerful piece of set theory he wants to declare it as “true” and wants us all to believe that. This goes far beyond Thin Realism, it goes to what Hugh calls a “conception of V” which far exceeds what you can read off from set-theoretic practice in its many different forms. Another example of this is Hugh’s claim that large cardinal existence is justified by large cardinal consistency; what notion of “truth” is this, if not “Hugh’s personal theory of truth”?

Pen’s Thin Realism provides a grounding for Type 1 truth. Mathematical practice outside of set theory provides a grounding for Type 2 truth. Our intuitions about the maximality of V in height and width provide a grounding for Type 3 truth. How is Hugh’s personal theory of truth grounded?

I’m pretty sure Hugh would disagree with what I’m about to say, which naturally gives me pause. With that understood, I confess that from where I sit as a relatively untutored observer, it looks as if the evidence Hugh is offering is overwhelmingly of your Type 1 (involving the mathematical virtues of the attendant set theory). My guess is he’d also consider Type 2 evidence (involving the relations of set theory to the rest of mathematics) if there were some ready to hand. He has a ‘picture’ of what the set-theoretic universe is like, a picture that guides his thinking, but he doesn’t expect the rest of us to share that picture and doesn’t appeal to it as a way of supporting his claims. If the mathematics goes this way rather than that, he’s quite ready to jettison a given picture and look for another. In fact, at times it seems he has several such pictures in play, interrelated by a complex system of implications (if this conjecture goes this way, the universe looks like this; if it goes that way, it looks like that…) But all this picturing is only heuristic, only an aid to thought — the evidence he cites is mathematical. And, yes, this is more or less how one would expect a good Thin Realist to behave (one more time: the Thin Realist also recognizes Type 2 evidence). (My apologies, Hugh. You must be thinking, with friends like these…)

The HP works quite differently. There the picture leads the way — the only legitimate evidence is Type 3. As we’ve determined over the months, in this case the picture involved has to be shared, so that it won’t degenerate into ‘Sy’s truth’. So far, to be honest, I’m still not clear on the HP picture, either in its height potentialist/width actualist form or its full multiverse form. Maybe Peter is doing better than I am on that.

All best,

Pen

Re: Paper and slides on indefiniteness of CH

Dear Sy,

My point is that the non-rigidity of HOD is a natural extrapolation of ZFC large cardinals into a new realm of strength.  I only reject it now because of the Ultimate-L Conjecture and its implication of the HOD Conjecture. It would be interesting to have an independent line which argues for the non-rigidity of HOD. This is the only reason I ask.

Please don’t confuse two things: I conjectured the rigidity of the Stable Core for purely mathematical reasons. I don’t see it as part of the HP. Indeed, I don’t see a clear argument that the nonrigidity of inner models follows from some form of maximality.

It would be nice to see one such reason (other than the non-V-constructible one).

You seem to feel strongly that maximality entails some form of “V is far from HOD”. It would seem a natural corollary of this to conjecture that the HOD Conjecture is false, unless there is a compelling reason otherwise. If the HOD Conjecture is false then the most natural explanation would be the non-rigidity of HOD, but of course there could be any number of other reasons.

In brief: HP considerations would seem to predict/suggest the failure of the HOD Conjecture. But you do not take this step. This is mysterious to me.

I am eager to see a well grounded argument for the HOD Conjecture which is independent of the Ultimate-L scenario.

Why am I so eager?  It would “break the symmetry” and for me anyway argue more strongly for the HOD Conjecture.

But I did answer your question by stating how I see things developing, what my conception of V would be, and the tests that need to be passed. You were not happy with the answer. I guess I have nothing else to add at this point since I am focused on a rather specific scenario.

That doesn’t answer the question: If you assert that we will know the truth value of CH, how do you account for the fact that we have many different forms of set-theoretic practice? Do you really think that one form (Ultimate-L perhaps) will “have virtues that will swamp all the others”, as Pen suggested?

Look, as I have stated repeatedly I see the subject of the model theory of ctm’s as separate from the study of V (but this is not to say that theorems in the mathematical study of ctm’s cannot have significant consequences for the study of V). I see nothing wrong with this view or the view that the practice you cite is really in the subject of ctm’s, however it is presented.

For your second question: if the tests are passed, then yes, I do think that V = Ultimate L will “swamp all the others”, but only in regard to a conception of V, not with regard to the mathematics of ctm’s. There are a number of conjectures already which I think would argue for this. But we shall see (hopefully sooner rather than later).

Look: There is a rich theory about the projective sets in the context of not-PD (you yourself have proved difficult theorems in this area). There are a number of questions which remain open about the projective sets in the context of not-PD which seem very interesting and extremely difficult. But this does not argue against PD. PD is true.

Sample current open question: Suppose every projective set is Lebesgue measurable and has the property of Baire. Suppose every lightface projective set has a lightface projective uniformization. Does this imply PD? (Drop lightface and the implication is false by theorems of mine and Steel; replace projective by hyperprojective and the implication holds even without the lightface restriction, by a theorem of mine.)
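Schematically (a restatement of the question, separating the boldface and lightface hypotheses):

\[
\bigl(\text{every projective set is Lebesgue measurable and has the Baire property}\bigr) \wedge \bigl(\text{every lightface projective set has a lightface projective uniformization}\bigr)\ \overset{?}{\Longrightarrow}\ \mathrm{PD}
\]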

If the Ultimate L Conjecture is false then for me it is “back to square one” and I have no idea about a resolution to CH.

Regards,
Hugh

Re: Paper and slides on indefiniteness of CH

Dear Hugh,

On Tue, 28 Oct 2014, W Hugh Woodin wrote:

My point is that the non-rigidity of HOD is a natural extrapolation of ZFC large cardinals into a new realm of strength. I only reject it now because of the Ultimate L Conjecture and its implication of the HOD Conjecture. It would be interesting to have an independent line which argues for the non-rigidity of HOD.

This is the only reason I ask.

Please don’t confuse two things: I conjectured the rigidity of the Stable Core for purely mathematical reasons. I don’t see it as part of the HP. Indeed, I don’t see a clear argument that the nonrigidity of inner models follows from some form of maximality.

But I still don’t have an answer to this question:

What theory of truth do you have? I.e. what do you consider evidence for the truth of set-theoretic statements?

But I did answer your question by stating how I see things developing, what my conception of V would be, and the tests that need to be passed. You were not happy with the answer. I guess I have nothing else to add at this point since I am focused on a rather specific scenario.

That doesn’t answer the question: If you assert that we will know the truth value of CH, how do you account for the fact that we have many different forms of set-theoretic practice? Do you really think that one form (Ultimate L perhaps) will “have virtues that will swamp all the others”, as Pen suggested?

Best,
Sy

PS: With regard to your mail starting with “PS:”: I have worked with people in model theory. When we get an idea we sometimes say “but that would give an easy solution to Vaught’s conjecture” so we start to look for (and find) a mistake. That’s all I meant by my comments: What I was doing would have given a “not difficult solution to the HOD conjecture”; so on this basis I should have doubted the argument and indeed I found a bug.

Re: Paper and slides on indefiniteness of CH

Dear Sy,

It is a virtue of a program if it generates predictions which are subsequently verified. To the extent that these predictions are verified one obtains extrinsic evidence for the program. To the extent that these predictions are refuted one obtains extrinsic evidence for the problematic nature of the program. It need not be a prediction which would “seal the deal” in the one case and “set it back to square one” in the other (two rather extreme cases). But there should be predictions which would lend support in the one case and take away support in the other.

The programs for new axioms that I am familiar with have had this feature. Here are some examples:

(1) Definable Determinacy.

The descriptive set theorists made many predictions that were subsequently verified and taken as support for axioms of definable determinacy. To mention just a few. There was the prediction that \text{AD}^{L(\mathbb R)} would lift the structure theory of Borel sets of reals (provable in ZFC) to sets of reals in L(\mathbb R). This checked out. There was the prediction that \text{AD}^{L(\mathbb R)} followed from large cardinals. This checked out. The story here is long and impressive and I think that it provides us with a model of a strong case for new axioms. For the details of this story — which is, in my view, a case of prediction and verification and, more generally, a case that parallels what happens when one makes a case in physics — see the Stanford Encyclopedia of Philosophy entry “Large Cardinals and Determinacy”, Tony Martin’s paper “Evidence in Mathematics”, and Pen’s many writings on the topic.

(2) Forcing Axioms

These axioms are based on ideas of “maximality” in a rather special sense. The forcing axioms ranging from \textsf{MA} to \textsf{MM}^{++} are a generalization along one dimension (generalizations of the Baire Category Theorem, as nicely spelled out in Todorcevic’s recent book “Notes on Forcing Axioms”) and the axiom (*) is a generalization along a closely related dimension. As in the case of Definable Determinacy there has been a pretty clear program and a great deal of verification and convergence. And, at the current stage advocates of forcing axioms are able to point to a conjecture which if proved would support their view and if refuted would raise a serious problem (though not necessarily setting it back to square one), namely, the conjecture that \textsf{MM}^{++} and (*) are compatible. That I take to be a virtue of the program. There are test cases. (See Magidor’s contribution to the EFI Project for more on this aspect of the program.)

(3) Ultimate L

Here we have lots of predictions which if proved would support the program and there are propositions which if proved would raise problems for the program. The most notable one is the “Ultimate L Conjecture”. But there are many other things. E.g., that conjecture implies that V = HOD. So, if the ideas of your recent letter work out, and your conjecture (combined with results of “Suitable Extender Models, I”) proves the HOD Conjecture, then this will lend some support to “V = Ultimate L” in that “V = Ultimate L” predicts a proposition that was subsequently verified in ZFC.

It may be too much to ask that your program at this stage make such predictions. But I hope that it aspires to that. For if it does not then, as I mentioned earlier, one has the suspicion that it is infinitely revisable and “not even wrong”.

One additional worry is the vagueness of the idea of the “‘maximal’ iterative conception of set”. If there were a lot of convergence in what was being mined from this concept then one might think that it was clear after all. But I have not seen a lot of convergence. Moreover, while you first claimed to be getting “intrinsic justifications” (an epistemically secure sort of thing) now you are claiming to arrive only at “intrinsic heuristics” (a rather loose sort of thing). To be sure, a vague notion can lead to motivations that lead to a great deal of wonderful and intriguing mathematics. And this has clearly happened in your work. But to get more than interesting mathematical results — to make a case for new axioms — at some stage one will have to do more than generate suggestions — one will have to start producing propositions which if proved would support the program and if refuted would weaken the program.

I imagine you agree and that that is the position that you ultimately want to be in.

Best,
Peter

Re: Paper and slides on indefiniteness of CH

We haven’t discussed Hugh’s Ultimate L program much. There are two big differences between this program and CTMP (aka HP). As I understand it,

1. It offers a proposed preferred set theoretic universe in which it is clear that CH holds – but the question of its existence relative to large cardinals (or relative consistency) is a (or the) major open question in the program.

2. In connection with 1, there are open conjectures (formulated in ZFC) which show how to refute Reinhardt’s axiom (the existence of j:V \to V) within ZF (and more).

So even if one rejects 1, this effort will leave us at least with 2, which is nearly universally regarded as important in the set theory community.

It would be nice for most people on this thread to have a generally understandable account of at least the structure of 1. I know that there has been some formulations already on the thread, but they are a while ago, and relatively technical. So let me ask some leading questions.

Can this Ultimate L proposal be presented in the following generally understandable shape?

Gödel’s constructible sets, going by the name of L, are built up along the ordinals in a very well defined way. This allows all of the usual set theoretic problems like CH to become nice mathematical problems, when formulated within the set theoretic universe of constructible sets, L. Thanks to Gödel, Jensen, and others, all of these problems have been settled as L problems. (L is the original so-called inner model).

Dana Scott showed that L cannot accommodate measurable cardinals. There is an incompatibility.

Jack Silver showed that L can be extended to accommodate measurable cardinals. He worked out L[U], where U stands for a suitable measure on a measurable cardinal. The construction is somewhat analogous to Gödel’s original L. Also all of the usual set theoretic problems like CH are settled in L[U].

This Gödel-Silver program (you don’t usually see that name though) has been lifted to considerably stronger large cardinals, with the same outcome. The name you usually see is “the inner model program”. The program slowed down to a trickle, and is stalled at some medium large cardinals considerably stronger than measurable cardinals, but very much weaker than – well it’s a bit technical and I’ll let others fill in the blanks here.

“Inner model theory for a large cardinal” became a reasonably understood notion at an informal or semiformal level. And some good test questions emerged that seem to be solvable only by finding an appropriate “inner model theory” for some large cardinals.

So I think this sets the stage for a generally understandable or almost generally understandable discussion of what Hugh is aiming to do.

Perhaps Hugh has picked out some important essential features of what properties the inner models so far have had, adds to them some additional desirable features, and either conjectures or proves that there is a largest such inner model – if there is any such inner model at all. I am hoping that this is screwed up only a limited amount, and the accurate story can be given in roughly these terms, black boxing the important details.

There are also a lot of important issues that we have only touched on in this thread that I think we should return to. Here is a partial list.

1. Sol maintains that there is a crucial difference between (\mathbb N,+,\times) and (P(\mathbb N),\mathbb N,\in,+,\times) that drives an enormous difference in the status of first order sentences. Whereas Peter for sure, and probably Pen, Hugh, Geoffrey strongly deny this. I think that Sol’s position is stronger on this, but I am interested in playing both sides of the fence on this. In particular, one enormous difference between the two structures that is mathematically striking is that the first is finitely generated (even 1-generated), whereas the second is not even countably generated. Of course, one can argue both for and against that this indisputable fact does or does not inform us about the status of first order sentences. Peter has written on the thread that he has refuted Sol’s arguments in this connection, and Sol denies that Peter has refuted Sol’s arguments in this connection. Needs to be carefully and interactively discussed, even though there has been published stuff on this.

2. The idea of “good set theory” has been crucial to the entire thread here. Obviously, there is the question of what is good set theory. But even more basic is this: I don’t actually hear or see much done at all in higher set theory other than studies of models of higher set theory. By higher set theory I mean more or less set theory except for DST = descriptive set theory. See, DST operates just like any normal mathematical area. DST does not study models of DST, or models of any set theory. DST basically works with Borel and sometimes analytic sets and functions, and applies these notions to shed light on a variety of situations in more or less core mathematics. E.g., ergodic theory, group actions, and the like. Higher set theory operates quite differently. It’s almost entirely wrapped up in metamathematical considerations. Now maybe there is a point of view that says I am wrong and that, if you look at it right, higher set theorists are simply pursuing a normal mathematical agenda – the study of sets. I don’t see this, unless the normal mathematical area is supposed to be “the study of models of higher set theory”. Perhaps people might point to working out what can be proved from forcing axioms? Well, I’m not sure this is similar to the situation in a normal area of mathematics like DST. So my point is: judging new axioms for set theory on the basis of “good set theory” or “bad set theory” doesn’t quite match the situation on the ground, as I see it.

3. In fact, the whole enterprise of higher set theory has so many features that are so radically different from the rest of mathematics, that the whole enterprise, to my mind, should come into serious question. Now I want to warn you that I am both a) incredibly enthusiastic about the future of higher set theory, and b) incredibly dismissive about any future of higher set theory whatsoever — all at the same time. This is because a) is based on certain special aspects of higher set theory, whereas b) is based on the remaining aspects of higher set theory. So when you see me talking from both sides of my mouth, you won’t be shocked.

Harvey

Re: Paper and slides on indefiniteness of CH

Dear Peter,

My apologies for the actualism/potentialism confusion! The situation is this: We have been throwing around 3 views:

1. Actualism in height and width (Neil Barton?)
2. Actualism only in width (Pen and Geoffrey?)
3. Actualism in neither.

Now the problem with me is that I have endorsed both 2 and 3 at different times! What I have been trying to say is that the choice between 2 and 3 does not matter for the HP, the programme can be presented from either point of view without any change in the mathematics. In 3 the universes to which V is compared actually are there, as part of the background multiverse (an extreme multiverse view) and in 2 you can only talk about them with “quotes”, yet the question of what is true in them is internal to (a mild lengthening of) V.

I have been a chameleon on this: My personal view is 3, but since no one shares that view I have offered to adopt view 2, to avoid a philosophical debate which has no practical relevance for the HP.

It is similar with the use of countable models! Starting with view 2 I argue that the comparisons that are made of V with other “universes” (in quotes) could equally well be done by replacing V by a ctm and removing the quotes. But again, this is not necessary for the programme, as one could simply refuse to do that and awkwardly work with quoted “universes” all of the time. I don’t understand why anyone would want to do such an awkward thing, but I am willing to play along and sadly retitle the programme the MP (Maximality Programme) instead of the Hyperuniverse Programme, as now the countable models play no role anymore. In this way the MP is separated from the study of countable transitive models altogether.

In summary: There is some math going on in the HP which is robust under changes of interpretation of the programme. My favourite interpretation would be View 3 above, but I have settled on View 2 to make people happy, and am even willing to drop the reduction to countable models to make even more people happy.

I am an extreme potentialist who is willing to behave like a width actualist.

The mathematical dust has largely settled — as far as the program as it currently stands is concerned — thanks to Hugh’s contributions.

What? There is plenty of unsettled mathematical dust out there, not just with the future development of the HP but also with the current discussion of it. See my mail of 25 October to Pen, for example. What do we say about the likelihood that maximality of V with respect to HOD contradicts large cardinal existence? Even if the HP leads to the failure of supercompacts to exist, can one at least get PD out of the HP, and if so, how?

More broadly, a lot remains unanswered in this discussion regarding Type 1 evidence (for “good set theory”): If \text{AD}^{L(\mathbb R)} is parasitic on \text{AD}, how does one argue that it is a good choice of theory? When we climb the interpretability hierarchy, should we drop AC in our choice of theories and instead talk about what happens in inner models, as in the case of AD? Similarly, why is large cardinal existence in V preferred over LC existence in inner models? Are Reinhardt cardinals relevant to these questions? And with regard to Ultimate L: What theory of truth is to be used when assessing its merits? Is it just Thin Realism, and if so, what is the argument that it yields “the best set theory” (“whose virtues swamp all the others”, as Pen would say)? And if not, is there something analogous to the HP analysis of maximality from which Ultimate L could be derived?

Best,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Hugh,

On Fri, 24 Oct 2014, W Hugh Woodin wrote:

Dear Sy,

You wrote to Pen:

But to turn to your second comment above: We already know why CH doesn’t have a determinate truth value, it is because there are and always will be axioms which generate good set theory which imply CH and others which imply not-CH. Isn’t this clear when one looks at what’s been going on in set theory? (Confession: I have to credit this e-mail discussion for helping me reach that conclusion; recall that I started by telling Sol that the HP might give a definitive refutation of CH! You told me that it’s OK to change my mind as long as I admit it, and I admit it now!)

ZF + AD will always generate “good set theory”… Probably also V = L…

This seems like a rather dubious basis for the indeterminateness of a problem.

I guess we have something else to put on our list of items we simply have to agree we disagree about.

What theory of truth do you have? I.e. what do you consider evidence for the truth of set-theoretic statements? I read “Defending the Axioms” and am convinced by Pen’s Thin Realism when it comes to such evidence coming either from set theory as a branch of mathematics or as a foundation of mathematics. On this basis, CH cannot be established unless a definitive case is made that it is necessary for a “good set theory” or for a “good foundation for mathematics”. It is quite clear that there never will be a case that we need CH (or not-CH) for “good set theory”. I’m less sure about its necessity for a “good foundation”; we haven’t looked at that yet.

We need ZF for good set theory and we need AC for a good foundation. That’s why we can say that the axioms of ZFC are true.

On the other hand if you only regard evidence derived from the maximality of V as worthy of consideration then you should get the negation of CH. But so what? Why should that be the only legitimate relevant evidence regarding the truth value of CH? That’s why I no longer claim that the HP will solve the continuum problem (something I claimed at the start of this thread, my apologies). But nor will anything like Ultimate L, for the reasons above.

I can agree to disagree provided you tell me on what basis you conclude that statements of set theory are true.

Best,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Sy,

There is a great deal of disconnect between your reaction to Hugh’s letter and the impression I got from his letter, so much so that it feels like we read different letters.

Right at the start, Hugh’s letter has the line “I want to emphasize that what I describe below is just my admittedly very optimistic view” and later it has the line “Now comes (really extreme) sheer speculation.” It is thus clear that the letter is presenting an optimistic view.

Now, on the one hand, you seem to realize it is the presentation of an optimistic scenario — e.g. when you speak of “fantasy” and the difficulty of solving some of the underlying conjectures on which it rests, like the iterability problem — but then later you switch and write:

You give the feeling that you are appearing at the finish line without running the race. … It gives the false impression that you have figured everything out, while in fact there is a lot not yet understood even near the beginning of your story.

I didn’t get that impression at all and I don’t know how you got it. I got the impression of someone presenting a “very optimistic view”, one that is mathematically precise and has the virtue of being sensitive to mathematical conjectures. [“There are rather specific conjectures which if proved would, I think, argue strongly for this view. And if these conjectures are false then I would have to alter my view.”] Far from getting the impression of someone who made it look like he was “at the finish line without running the race”, I got the impression of someone who had a clear account of a finish line, was working hard to get there, realized there was a lot to do, and even thought that the finish line could disappear if certain conjectures turned out to be false.

(The mathematics behind this is considerable. In addition to the massive amount of work in inner model theory over the last forty years, the new work is quite involved. E.g., even the monographs “Suitable Extender Models” 1 and 2 and the monograph on fine structure alone amount to more than 1000 pages of straight mathematics, which, given my experience with the “expansion factor” in this work, is a misleadingly small number.)

It is remarkable to me that we now have such a scenario in inner model theory, one that is mathematically precise and has mathematical traction in that if certain conjectures turn out to be true one would have a strong case for it. The point is that given the incremental nature of inner model theory, a decade ago no one would have advocated such a view since, e.g., once one reached one supercompact the task of reaching a huge cardinal would not thereby be solved (any more than solving the inner model problem for strong cardinals also solved the inner model problem for Woodin cardinals). But now there has been a shift in landscape — a shift due to mathematical discoveries, showing that in a precise sense one just has to reach one supercompact and that at that point there is “overflow” — and one can articulate such a scenario in a mathematically precise manner.

It is a virtue of a foundational program if it can articulate such a scenario. A foundational program should be able to list a sequence of conjectures which if true would make a case for the program and which if false would be a mark against the program, and even, in an extreme case set one back to square one. To do this is not to indulge in sheer fantasy. It is to give a program mathematical traction.

I would like to see you do the same for your program. You really should, at some stage, be able to do this. There must be a line of conjectures that you can point to which if true would make a strong case for the program and if false would be a setback; otherwise, it is not open to certification or refutation and one starts to wonder whether it is infinitely revisable and so “not even wrong”. I’m sure you agree. So please tell us whether you are at the stage where you can do that and if you are then I for one would like to hear some of the details (or be pointed to a place where I can find them).

Best,
Peter

P.S. I owe you a response to your request for feedback on one of the points at issue between you and Pen. I’m sorry for not doing that yet. The semester started. I’ll send something soon.

Also, a high bar must be met to send an email to so many people and I doubt I will meet that high bar. I’ll send it here since it was requested here. But eventually I think this should all be moved to a blog or FOM, something where people can subscribe.