Re: Paper and slides on indefiniteness of CH

Dear Sy,

So what is \textsf{SIMH}^\#(\omega_1)? You wrote in your message of Sept 29:

The \textsf{IMH}^\# is compatible with all large cardinals. So is the \textsf{SIMH}^\#(\omega_1).

A second question. The version of \textsf{SIMH}^\# you specified in your next message to me on Sept 29:

The (crude, uncut) \textsf{SIMH}^\# is the statement that V is #-generated and if a sentence with absolute parameters holds in a cardinal-preserving, #-generated outer model then it holds in an inner model. It implies a strong failure of CH but is not known to be consistent.

does not even obviously imply \textsf{IMH}^\#. Perhaps you meant: the above together with \textsf{IMH}^\#? If not, then calling it \textsf{SIMH}^\# is rather misleading. Either way it is closer to \textsf{IMH}^\#(\text{card}).

Anyway this explains my confusion, thanks.

Regards,
Hugh

Re: Paper and slides on indefiniteness of CH

Dear Hugh,

Thanks, that helps. But just to be clear, does \textsf{SIMH}^\# imply the following statement?

If \varphi holds of \omega_1^V in a #-generated outer model of V which preserves \omega_1^V then \varphi holds of \omega_1^V in an inner model of V.

I don’t see why it should.

The reason I ask is that for \textsf{SIMH}, the analogous statement (deleting "#-generated") holds and is implied by \textsf{SIMH}(\omega_1).

Hugh, the HP is (primarily) a study of maximality criteria of the sort we have been discussing. As I have been trying to explain, it is essential to the programme to formulate, analyse, compare and synthesise different criteria, discovering their mathematical consequences. I referred to my formulation of the \textsf{SIMH}^\# as “crude and uncut” as it may have to be modified later as we learn more. Changes in its formulation do not mean a defeat for the programme, but rather progress in our understanding of maximality.

So it makes no sense to assert that if a particular formulation of maximality coming out of the programme contradicts large cardinal existence then the programme is a failure and therefore irrelevant to the resolution of CH. Indeed the first HP criterion, the IMH, did contradict large cardinals, but it was later for compelling reasons synthesised with #-generation into the \textsf{IMH}^\#, which does not. It is not yet clear if the optimal maximality criterion will be compatible with large cardinal existence. It is certainly not the intention of the programme to take a stance on large cardinal existence “in advance” before seeing what maximality criteria are out there.

Sy

Re: Paper and slides on indefiniteness of CH

Dear Hugh,

Sorry for my delayed response.

For the purposes of this discussion:

The \textsf{IMH}^\# is the statement that V is #-generated and if a sentence holds in a #-generated outer model then it holds in an inner model. It is consistent with all large cardinals.

The (crude, uncut) \textsf{SIMH}^\# is the statement that V is #-generated and if a sentence with absolute parameters holds in a cardinal-preserving, #-generated outer model then it holds in an inner model. It implies a strong failure of CH but is not known to be consistent.
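In displayed form (a schematic restatement of the above, nothing more, with W ranging over outer models of V and I over inner models of V):

\[
\textsf{IMH}^\#: \quad V \text{ is \#-generated, and } \big(\exists\, \#\text{-generated } W \vDash \varphi\big) \Rightarrow \big(\exists\, I \vDash \varphi\big) \text{ for every sentence } \varphi.
\]
\[
\textsf{SIMH}^\#: \quad V \text{ is \#-generated, and } \big(\exists\, \text{cardinal-preserving \#-generated } W \vDash \varphi(\vec p\,)\big) \Rightarrow \big(\exists\, I \vDash \varphi(\vec p\,)\big) \text{ for every } \varphi(\vec p\,) \text{ with absolute parameters } \vec p.
\]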

The reason that the \textsf{IMH}^\# can be shown to be consistent is that Jensen coding works for #-generated models.

The reason that the \textsf{SIMH}^\# is not known to be consistent is that Jensen coding will collapse cardinals if GCH fails.

I hope that this clarifies the situation.

Thanks for your interest,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Sy,

I guess I have misunderstood something. This would not be the first time.

I thought that M witnesses \textsf{SIMH}^\#(\omega_1) implies that if there is a \#-generated extension of M preserving \omega_1^M in which there is a definable inner model in which \varphi holds of \omega_1^M, then in M there is a definable inner model in which \varphi holds of \omega_1^M. Maybe this is implied by \textsf{SIMH}^\#(\omega_2), and what I thought was \textsf{SIMH}^\#(\omega_1) is really \textsf{SIMH}^\#(\omega_1+1).

In any case this in turn implies that in M there is a real x such that \omega_1 = \omega_1^{L[x]}. So unless I am really confused, the existence of a real x such that \omega_1 = \omega_1^{L[x]} follows from \textsf{SIMH}^\#, which still makes my point.
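Schematically, the claimed chain is (the second implication is just the standard fact that if x^\# exists then \omega_1 is inaccessible in L[x], so \omega_1^{L[x]} < \omega_1):

\[
M \vDash \textsf{SIMH}^\#(\omega_1) \;\Rightarrow\; \exists x\, \big(\omega_1^M = \omega_1^{L[x]}\big) \;\Rightarrow\; \exists x\, \big(x^\# \text{ does not exist in } M\big).
\]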

So I guess it would be useful to have precise statements (in terms of countable models etc.) of \textsf{SIMH}^\# and \textsf{SIMH}^\#(\kappa) that we all can refer to.

Regards.
Hugh

Re: Paper and slides on indefiniteness of CH

Dear Hugh,

I have to leave on a short trip now, and will respond in more detail as soon as I can.

You have again misunderstood the \textsf{IMH}^\#!

Below are some brief responses.

On Mon, 29 Sep 2014, W Hugh Woodin wrote:

Dear Sy,

The disadvantage of your formulation of \textsf{IMH}^\# is that it is not even in general a \Sigma^1_3 property of M and so it is appealing in more essential ways to the structure of the “hyperuniverse”.

No. It appeals only to the ordinals of “lengthenings”, not to the structure of the Hyperuniverse!

This is why the consistency proof of \textsf{SIMH}^\#(\omega_1) uses substantially more than a Woodin cardinal with an inaccessible above, unlike the case of \textsf{IMH} and \textsf{SIMH}(\omega_1).

OK, it seems we will just have to agree that we disagree here.

OK, so you disagree with treating width actualism with “lengthenings”, unlike Pen, Geoffrey and myself. I am missing a coherent explanation for your view.

I think it is worth pointing out to everyone that \textsf{SIMH}^\#, and even the weaker \textsf{SIMH}^\#(\omega_1) which we know to be consistent, implies that there is a real x such that x^\# does not exist.

No, that is not true. The \textsf{IMH}^\# is compatible with all large cardinals. So is the \textsf{SIMH}^\#(\omega_1). What argument are you thinking of?

(even though x^\# exists in the parent hyperuniverse which is a bit odd to say the least in light of the more essential role that the hyperuniverse is playing). The reason of course is that \textsf{SIMH}^\#(\omega_1) implies that there is a real x such that L[x] correctly computes \omega_1.

No, it does not. What argument do you have in mind?

Sy

Re: Paper and slides on indefiniteness of CH

Dear Sy,

The disadvantage of your formulation of \textsf{IMH}^\# is that it is not even in general a \Sigma^1_3 property of M and so it is appealing in more essential ways to the structure of the “hyperuniverse”. This is why the consistency proof of \textsf{SIMH}^\#(\omega_1) uses substantially more than a Woodin cardinal with an inaccessible above, unlike the case of \textsf{IMH} and \textsf{SIMH}(\omega_1).

OK, it seems we will just have to agree that we disagree here.

I think it is worth pointing out to everyone that \textsf{SIMH}^\#, and even the weaker \textsf{SIMH}^\#(\omega_1) which we know to be consistent, implies that there is a real x such that x^\# does not exist (even though x^\# exists in the parent hyperuniverse which is a bit odd to say the least in light of the more essential role that the hyperuniverse is playing). The reason of course is that \textsf{SIMH}^\#(\omega_1) implies that there is a real x such that L[x] correctly computes \omega_1.

This is a rather high price to pay for getting not-CH.

Thus for me at least, \textsf{SIMH}^\# has all the problems of \textsf{IMH} with regard to isolating candidate truths of V.

Regards,
Hugh

Re: Paper and slides on indefiniteness of CH

Dear Hugh,

As I said, if you synthesise the IMH with a weak form of reflection then you will contradict large cardinals. For example, if you relativise the IMH to models which are only generated by a presharp that is iterable to the height of the model, then you will contradict #’s for reals. The only synthesis that is friendly to large cardinals is the one with the strongest possible form of reflection, given by #-generation. More on this below:

Consider the following extreme version of IMH^\#:

Suppose M is a ctm and M \models ZFC.  Then M witnesses extreme-IMH^\# if:

  1. There is a thickening of M, satisfying ZFC, in which M is a #-generated inner model.

??? This just says that M is #-generated to its own height! It is a weakened form of #-generation.

  2. M witnesses IMH^\# in all thickenings of M, satisfying ZFC, in which M is a #-generated inner model.

This makes no sense to me. The point of the synthesis is to say that IMH holds for models that satisfy reflection. You are only looking at models which satisfy weak reflection, i.e. which are presharp-generated up to their height! How do you motivate this? Even in the basic V-logic you get iterability up to the least admissible past the height. Of course we want our presharps to stay iterable past the height of the model; this is necessary to capture reflection to its fullest.

One advantage to extreme-IMH^\# is that the formulation does not need to refer to sharps in the hyperuniverse (and so there is a natural variation which can be formulated just using the V-logic of M). This also implies that the property that M witnesses extreme-IMH^\# is \Delta^1_2 as opposed to IMH^\# which is not even in general \Sigma^1_3.

Yes, your weak version of reflection can be captured in V-logic. But this is not much of an advantage, as it is heavily outweighed by the disadvantages: We don’t want weak reflection (weak #-generation), we want reflection (#-generation), and this is captured by the natural infinitary logics fixing V that are defined in arbitrary “lengthenings” of V obtained by adding new L-levels (like the “lengthenings” that Pen, Geoffrey and I have discussed, but instead of iterating powerset to get new von Neumann ranks, one iterates *definable* powerset to generate new Gödel ranks). So once again, full #-generation is a property captured by logics associated to “lengthenings” of V, just like the IMH. It simply makes no sense to stop with weak #-generation, as there is no advantage in doing so.
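To be concrete about the “Gödel-lengthenings” (this is just the relativised constructible hierarchy built over V; \text{Def} denotes first-order definable powerset):

\[
L_0(V) = V, \qquad L_{\alpha+1}(V) = \text{Def}(L_\alpha(V)), \qquad L_\lambda(V) = \bigcup_{\alpha<\lambda} L_\alpha(V) \text{ for limit } \lambda.
\]

A “lengthening” adds levels L_\alpha(V) for \alpha beyond \text{Ord}(V); the sets and ordinals of V itself are left untouched.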

Given the motivations you have cited for IMH^\# etc., it seems clear that extreme-IMH^\# is the correct result of synthesizing IMH with reflection unless it is inconsistent.

No! The motivation I cited was to assert the IMH for models that obey reflection, and to do this you need to use the correct form of reflection, not what you are suggesting.

Thm: Assume every real has a sharp and that some countable ordinal is a Woodin cardinal in a definable inner model. Then there is a ctm which witnesses that extreme-IMH^\# holds.

However unlike IMH^\#, extreme-IMH^\# is not consistent with all large cardinals.

Thm: If M satisfies extreme-IMH^\# then there is a real x in M such that in M, x^\# does not exist.

Originally, when I formulated #-generation I had the weaker form that you are suggesting in mind, knowing this result quite well. But later I came to the full form of #-generation and this problem with large cardinal nonexistence disappeared. I wasn’t actually looking for a way to rescue large cardinals but that was a nice consequence of the correct point of view.

Once again: Any form of reflection weaker than (full) #-generation will kill large cardinals when synthesised with the IMH. And even full #-generation is perfectly compatible with width actualism; it just requires consideration of logics in arbitrary “Gödel-lengthenings” of V.

This seems to be a bit of an issue for the motivation of IMH^\# and SIMH^\#. How will you deal with this?

Explained above.

Yours,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Sy,

Pen wrote:

Hugh has talked about how things might go if various conjectures fall in a particular direction: there’d then be a principle ‘V=Ultimate L’ that would at least deserve serious consideration. That’s far short of ‘endorsement’, of course. Can you point to an HP-generated principle that has that sort of status?

and you responded:

I can come close. It would be the \textsf{SIMH}^\#. But it’s not really analogous to Ultimate L for several reasons:

  1. I hesitate to “conjecture” that the \textsf{SIMH}^\# is consistent.
  2. The \textsf{SIMH}^\# in its crude, uncut form might not be “right”. Recall that my view is that only after a lengthy exploratory process of analysis and unification of different maximality criteria can one understand the Optimal maximality criterion. I can’t say with confidence that the original uncut form of the \textsf{SIMH}^\# will be part of that Optimal criterion; it may have to first be unified with other criteria.
  3. The \textsf{SIMH}^\#, unlike Ultimate L, is absolutely not a “back to square one” principle, as Hugh put it. Even if it is inconsistent, the HP will continue its exploration of maximality criteria and in fact, understanding the failure of the \textsf{SIMH}^\# will be a huge boost to the programme, as it will provide extremely valuable knowledge about how maximality criteria work mathematically.

This is a technical criticism. In brief, I am claiming that, based on the methodology of the HP you have described (though perhaps now rejected), \textsf{IMH}^\# is not the correct synthesis of IMH and reflection. Moreover, the correct synthesis, which is significantly stronger, resurrects all the issues associated with IMH regarding “smallness”.

Consider the following extreme version of \textsf{IMH}^\#:

Suppose M is a ctm and M \vDash \text{ZFC}. Then M witnesses extreme-\textsf{IMH}^\# if:

  1. There is a thickening of M, satisfying ZFC, in which M is a \#-generated inner model.
  2. M witnesses \textsf{IMH}^\# in all thickenings of M, satisfying ZFC, in which M is a \#-generated inner model.
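Schematically, abbreviating by \Theta(M,W) the statement “W \vDash \text{ZFC}, W is a thickening of M, and M is a \#-generated inner model of W”:

\[
M \text{ witnesses extreme-}\textsf{IMH}^\# \;\Longleftrightarrow\; \exists W\, \Theta(M,W) \;\wedge\; \forall W\, \big(\Theta(M,W) \rightarrow M \text{ witnesses } \textsf{IMH}^\# \text{ in } W\big).
\]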

One advantage to extreme-\textsf{IMH}^\# is that the formulation does not need to refer to sharps in the hyperuniverse (and so there is a natural variation which can be formulated just using the V-logic of M). This also implies that the property that M witnesses extreme-\textsf{IMH}^\# is \Delta^1_2 as opposed to \textsf{IMH}^\# which is not even in general \Sigma^1_3.

Given the motivations you have cited for \textsf{IMH}^\# etc., it seems clear that extreme-\textsf{IMH}^\# is the correct result of synthesizing IMH with reflection unless it is inconsistent.

Thm: Assume every real has a sharp and that some countable ordinal is a Woodin cardinal in a definable inner model. Then there is a ctm which witnesses that extreme-\textsf{IMH}^\# holds.

However unlike \textsf{IMH}^\#, extreme-\textsf{IMH}^\# is not consistent with all large cardinals.

Thm: If M satisfies extreme-\textsf{IMH}^\# then there is a real x in M such that in M, x^\# does not exist.

This seems to be a bit of an issue for the motivation of \textsf{IMH}^\# and \textsf{SIMH}^\#. How will you deal with this?

Regards.
Hugh

Re: Paper and slides on indefiniteness of CH

Dear Neil,

Very impressive! Thanks for the travel guide through the actualist class-theory literature! It seems to be a minefield out there, so the safest move for me is to try to develop a “wimpy HP” that only draws on classes which are first-order definable. I say “wimpy” because this move really takes the juice out of the theory and has the smell of fear in it (“Oh no, I can’t even imagine Tarski’s satisfaction relation for V, because it’s not definable! I’m scared that something awful might happen if we do that!”).

OK, so here is the WHP (Wimpy HP): Height maximality means nothing more than ZFC, i.e. first-order reflection (with set-parameters). (I already have tears in my eyes.) Width maximality means that if we take a wimpy-thickening of V we don’t discover new properties, where a wimpy-thickening of V (no quotes necessary!) is a V-definable structure V-thick (whose universe and relations are V-definable classes) together with a V-definable interpretation of (V, membership) into V-thick. (The “interpretation” should obey the usual good rules for interpretations; in particular any first-order property of (V, membership) translates faithfully into a first-order property of V-thick.) So Wimpy-IMH says that if a first-order sentence holds in some wimpy-thickening of V then it holds in a definable inner model of V.
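In displayed form (schematic, with the V-definable interpretation left implicit):

\[
\text{Wimpy-IMH}: \quad \big(\exists\ \text{wimpy-thickening } W \text{ of } V,\ W \vDash \varphi\big) \;\Rightarrow\; \big(\exists\ \text{definable inner model } I \text{ of } V,\ I \vDash \varphi\big) \text{ for every first-order sentence } \varphi.
\]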

The classical example of a wimpy-thickening is the Scott-Solovay Boolean-valued universe V^{\mathbb B} where \mathbb B is a set-sized complete Boolean algebra. A more courageous example is V^{\mathbb B} where \mathbb B is a definable class-sized “tame” set-complete Boolean algebra (for the definition of “tame” see my book on class-forcing). At the moment no other natural example comes to mind. Note how “wimpy” this is: L surely can have “thickenings” with superhuge cardinals but it can’t even have a wimpy-thickening with 0^\#! (On the other hand I have to confess that even Wimpy IMH has powerful consequences, like the nonexistence of inaccessibles.)

I conjecture that if someone rejects the HP because it doesn’t sit well with height actualism and gets interested in the Wimpy HP, they will soon regret it and wish they had attended the Real Non-Wimpy HP party.

P.S. I thought I’d keep this out of the text above because it’s rather speculative. Earlier it was intimated that the actualist can’t interpret IMH in a satisfactory manner. I’m not sure I see this; there are plenty of ways of simulating a forcing extension of V within V. For example, Hamkins and Seabold (http://arxiv.org/abs/1206.6075) show how you can replace V with an elementary extension \bar V,

Whoops, how does a severe actualist pull that off? When you move to a (proper) elementary extension you are taking a model of a rather fancy, non-first-order definable theory. It seems you’ve already violated the promise that all classes are first-order definable if you want to think about that theory.

So probably you didn’t mean “fully elementary” but only “\Sigma_N elementary for some big N”.

But more seriously: Note that your elementary extension may fail to be an end-extension! I.e., it may even have nonstandard natural numbers! If “thickenings” of V are to be useful then they can’t change the ordinals. For this reason we’re forced into an infinitary logic where we can form a theory which ensures that the ordinals of V don’t change (or are at least an initial segment of the ordinals of the “imaginary universe”, so we can take a simple “truncation” back to \text{Ord}(V)). In fact we need to get all of the sets of V “fixed” by that theory. To get such a logic we’re forced to my “slight lengthening” of V to a model V^+ of KP with V as an element, and if an actualist can accept that then she might as well let loose and buy the whole HP package, leaving her life as a wimp behind.
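To make the “slight lengthening” concrete: the natural choice (the one I have in mind) is the least admissible lengthening, obtained by iterating definable powerset past \text{Ord}(V),

\[
V^+ \;=\; L_\beta(V) \quad \text{for the least } \beta > \text{Ord}(V) \text{ such that } L_\beta(V) \vDash \text{KP},
\]

so that V is an element of V^+, \text{Ord}(V) is an initial segment of \text{Ord}(V^+), and the infinitary theory fixing every set of V can be formalised inside V^+.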

pick a \bar V-generic G from V, and simulate the forcing extension of V with \bar V[G]. Why not just do the same with IMH? For example, for the Weak Inner Model Hypothesis we could have:

[WIMH-V] If \varphi holds in an inner model I^{\bar V[G]} of some appropriate elementary extension \bar V of V, then \varphi already holds in an inner model I of V.

If you take \bar V to be an elementary extension and “inner model” to mean “definable inner model” then this will be true automatically; it has no strength.

The matter is complicated by the fact that the \bar V are quite often non-well-founded, but I don’t see that this would affect the results any. You could motivate such a version with exactly the same reasons as the original Weak Inner Model Hypothesis: V should be as “wide” as possible in the sense that it contains a very high density of inner models; we just require a deviant interpretation for some models to explain how this should be understood. I don’t see, therefore, that the width potentialism is a necessary part of motivating width maximality.

I’ll think about it a bit more, but as I said above I don’t think that it is useful to look at non-well-founded “thickenings” of V. And for HP purposes it doesn’t change anything to consider well-founded “thickenings” that live in a non-well-founded “imaginary universe”.

Many thanks for your comments!
Sy