Re: Paper and slides on indefiniteness of CH: My final mail to the Thread

Dear Sy,

Before we close this thread, it would be nice if you could state what the current version of \textsf{IMH}^\# is. This would at least leave me with something specific to think about.

Is it:

1) (SDF: Nov 5) M is weakly #-generated and for each \phi: if for each countable \alpha, \phi holds in an outer model of M which is generated by an \alpha-iterable presharp, then \phi holds in an inner model of M.

2) (SDF: Nov 8) M is weakly #-generated and for all \phi: Suppose that whenever g is a generator for M (iterable at least to the height of M), \phi holds in an outer model of M with a generator which is at least as iterable as g. Then \phi holds in an inner model of M.

or something else? Or perhaps it is now a work in progress?

Regards,
Hugh

Re: Paper and slides on indefiniteness of CH

Dear Sy,

In an attempt to move things along, I would like to both summarize where we are
and sharpen what I was saying in my (first) message of Nov 8. My points were
possibly swamped by the technical questions I raised.

1) We began with Original-\textsf{IMH}^\#

This is the #-generated version. In an attempt to provide a V-logic formulation
you proposed a principle which I called (in my message of Nov 5):

2) New-\textsf{IMH}^\#

I raised the issue of consistency and you then came back on Nov 8 with the principle (*):

What this translates to for a countable model V is then this:

(*) V is weakly #-generated and for all \phi: Suppose that whenever g is a generator for V (iterable at least to the height of V), \phi holds in an outer model M of V with a generator which is at least as iterable as g. Then \phi holds in an inner model of V.

Let’s call this:

3) Revised-New-\textsf{IMH}^\#

(There are too many (*) principles)

But: Revised-New-\textsf{IMH}^\# is just the disjunction of Original-\textsf{IMH}^\# and New-\textsf{IMH}^\#.

So Revised-New-\textsf{IMH}^\# is consistent. But is Revised-New-\textsf{IMH}^\# really what you had in mind?

(The move from New-\textsf{IMH}^\# to the disjunction of Original-\textsf{IMH}^\# and New-\textsf{IMH}^\# seems a bit problematic to me.)

Assuming Revised-New-\textsf{IMH}^\# is what you have in mind, I will continue.

Thus, if New-\textsf{IMH}^\# is inconsistent then Revised-New-\textsf{IMH}^\# is just Original-\textsf{IMH}^\#.

So we are back to the consistency of New-\textsf{IMH}^\#.

The theorem (of my message of Nov 8 but slightly reformulated here)

Theorem. Assume PD. Then there is a countable ordinal \eta and a real x such that if M is a ctm such that
1) x is in M and M \vDash ``V = L[t]\text{ for a real }t"
2) M satisfies Revised-New-\textsf{IMH}^\# with parameter \eta
then M is #-generated (and so M satisfies Original-\textsf{IMH}^\#)

strongly suggests (but does not prove) that New-\textsf{IMH}^\# is inconsistent if one also requires that M be a model of “V = L[Y] for some set Y”.

Thus if New-\textsf{IMH}^\# is consistent it likely must involve weakly #-generated models M which cannot be coded by a real in an outer model which is #-generated.

So just as happened with \textsf{SIMH}, one again comes to an interesting ctm question whose resolution seems essential for further progress.

Here is an extreme version of the question for New-\textsf{IMH}^\#:

Question: Suppose M is weakly #-generated. Must there exist a weakly #-generated outer model of M which contains a set which is not set-generic over M?

[This question seems to have a positive solution. But, building weakly #-generated models which cannot be coded by a real in an outer model which is weakly #-generated still seems quite difficult to me. Perhaps Sy has some insight here.]

Regards,
Hugh

Re: Paper and slides on indefiniteness of CH

Dear Sy,

Theorem. Assume PD. Then there is a countable ordinal \eta and a real x such that if M is a ctm such that

(1) x is in M and M \vDash ``V = L[t] \text{ for a real }t"

(2) M satisfies (*)(\eta) (the principle (*) from earlier in the thread, but allowing \eta as a parameter),

then M is #-generated.

So, you still have not really addressed the ctm issue at all.

Here is the critical question:

Key Question: Can there exist a ctm M such that M satisfies (*) in the hyper-universe of L(M)[G], where G is L(M)-generic for collapsing all sets to be countable?

Or even:

Lesser Key Question: Suppose that M is a ctm which satisfies (*). Must M be #-generated?

Until one can show the answer is “yes” for the Key Question, there has been no genuine reduction of this version of \textsf{IMH}^\# to V-logic.

If the answer to the Lesser Key Question is “yes” then there is no possible reduction to V-logic.

The theorem stated above strongly suggests the answer to the Lesser Key Question is actually “yes” if one restricts to models satisfying “V = L[Y]\text{ for some set }Y”.

The point of course is that if M is a ctm which satisfies “V = L[Y]\text{ for some set }Y” and M witnesses (*), then M[g] witnesses (*), where g is an M-generic collapse of Y to \omega.

The simple consistency proofs of Original-\textsf{IMH}^\# all easily give models which satisfy “V = L[Y]\text{ for some set }Y”.

The problem

(*) Suppose \gamma^+ is not correctly computed by HOD for any infinite cardinal \gamma. Must weak square hold at some singular strong limit cardinal?

actually grew out of my recent AIM visit with Cummings, Magidor, Rinot and Sinapova. We showed that the successor of a singular strong limit \kappa of cofinality \omega can be large in HOD, and I started asking about weak square. It holds at \kappa in our model.

Assuming the Axiom \textsf{I}0^\# is consistent one gets a model of ZFC in which for some singular strong limit \kappa of uncountable cofinality, weak square fails at \kappa and \kappa^+ is not correctly computed by HOD.

So one cannot focus on cofinality \omega (unless Axiom \textsf{I}0^\# is inconsistent).

So born of this thread is the correct version of the problem:

Problem: Suppose \gamma is a singular strong limit cardinal of uncountable cofinality such that \gamma^+ is not correctly computed by HOD. Must weak square hold at \gamma?
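For the record, by weak square at \gamma I mean Jensen's principle \Box^*_\gamma (equivalently, \Box_{\gamma,\gamma}); the standard formulation, as I recall it, is: there is a sequence \langle \mathcal{C}_\alpha : \alpha < \gamma^+ \text{ a limit ordinal}\rangle such that each \mathcal{C}_\alpha is a nonempty family of at most \gamma many club subsets of \alpha, each of order type at most \gamma, and such that C \cap \beta \in \mathcal{C}_\beta whenever C \in \mathcal{C}_\alpha and \beta is a limit point of C.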

Aside: \textsf{I}0^\# asserts there is an elementary embedding j:L(V_{\lambda+1}^\#) \to L(V_{\lambda+1}^\#) with critical point below \lambda.

Regards, Hugh

Re: Paper and slides on indefiniteness of CH

Dear Sy,

On Nov 5, 2014, at 7:40 AM, Sy David Friedman wrote:

Hugh:

1. Your formulation of \textsf{IMH}^\# is almost correct:

M witnesses \textsf{IMH}^\# if

1) M is weakly #-generated.

2) If \phi holds in an outer model of M which is weakly
#-generated then \phi holds in an inner model of M.

But as we have to work with theories, 2) has to be: If for each countable \alpha, \phi holds in an outer model of M which is generated by an \alpha-iterable presharp then \phi holds in an inner model of M.

Let’s call this New-\textsf{IMH}^\#.

Are you sure this is consistent?

Assume coding works in the weakly #-generated context:

Coding Assumption: if M is weakly #-generated then M can be coded by a real in an outer model which is weakly #-generated.

Then:

Theorem. Assume PD. Then there is a real x such that for all ctm M, if x is in M then M does not satisfy New-\textsf{IMH}^\#.

(So in any case, one cannot get consistency by the determinacy proof).

2. Could you explain a bit more why V = Ultimate-L is attractive?

Shelah has the informal notion of a semi-complete axiom.

V = L is a semi-complete axiom as is AD^{L(\mathbb R)} in the context of L(\mathbb R) etc.

A natural question is whether there is a semi-complete axiom which is consistent with all large cardinals. No example is known.

If the Ultimate L Conjecture is true (provable) then V = Ultimate L is arguably such an axiom and further it is such an axiom which implies V = HOD (being “semi-complete” seems much stronger in the context of V = HOD).
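For those following the thread, the axiom itself has a precise statement; roughly, and up to details which should be checked against the published formulation, V = Ultimate L asserts: (i) there is a proper class of Woodin cardinals, and (ii) for each \Sigma_2-sentence \varphi which holds in V, there is a universally Baire set A \subseteq \mathbb{R} such that \varphi holds in HOD^{L(A,\mathbb{R})} \cap V_{\Theta}, where \Theta = \Theta^{L(A,\mathbb{R})}.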

Of course this is not a basis in any way for arguing V = Ultimate L. But it certainly makes it an interesting axiom whose rejection must be based on something equally interesting.

You said: “For me, the “validation” of V = Ultimate L will have to come from the insights V = Ultimate L gives for the hierarchy of large cardinals beyond supercompact.”
But why would those insights disappear if V is, for example, some rich generic extension of Ultimate L? If Jack had proved that 0^\# does not exist I would not favour V = L but rather V = some rich outer model of L.

I think if our evolving understanding of the large cardinal hierarchy rests primarily on the context of V = Ultimate L then very likely the rich generic extensions are not playing much of a role in understanding the large cardinal hierarchy.

This for me would build the case for V = Ultimate L and against these rich extensions. It would then take something quite significant in the theory of the rich extensions to undermine that.

But such speculations seem very premature. We do not even know if the HOD Conjecture is true. If the HOD Conjecture is not true then the entire Ultimate L scenario fails.

3. I told Pen that finding a GCH inner model over which V is generic is a leading open question in set theory. But you gave an argument suggesting that this has to be strengthened. Recently I gave a talk about HOD where I discussed the following four properties of an inner model M:

Genericity: V is a generic extension of M.

Weak Covering: For a proper class of cardinals \alpha, \alpha^+ = \alpha^{+M}.

Rigidity: There is no nontrivial elementary embedding from M to M.

Large Cardinal Witnessing: Any large cardinal property witnessed in V is witnessed in M.

(When 0^\# does not exist, all of these hold for M = L except for Genericity: V need not be class-generic over L. As you know, there has been a lot of work on the case M = HOD.)
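For instance, the Weak Covering claim for M = L comes from Jensen's covering theorem, which in the form relevant here says (if I recall the statement correctly): if 0^\# does not exist, then every singular cardinal \gamma is singular in L and (\gamma^+)^L = \gamma^+. In particular Weak Covering then holds for L at a proper class of cardinals.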

Now I’d like to offer Pen a new “leading open question”. (Of course I could offer the PCF Conjecture, but I would prefer to offer something closer to the discussion we have been having.) It would be great if you and I could agree on one. How about this: Is there an inner model M satisfying GCH together with the above four properties?

Why not just go with the HOD Conjecture? Or the Ultimate L Conjecture?

There is another intriguing problem which has been suggested by this thread.

Suppose \gamma^+ is not correctly computed by HOD for any infinite cardinal \gamma. Must weak square hold at some singular strong limit cardinal?

This looks like a great problem to me and it seems clearly to be a new problem.

Regards,
Hugh

Re: Paper and slides on indefiniteness of CH

Dear Sy,

On Nov 3, 2014, at 3:38 AM, Sy David Friedman wrote:

Hugh:

1. The only method I know to obtain the consistency of the maximality criterion I stated involves Prikry-like forcings, which add Weak Squares. Weak Squares contradict supercompactness.

So you think that if the Maximality Criterion holds then weak square holds at some singular strong limit?

3. I was postponing the discussion of the reduction of #-generation to ctm’s (countable transitive models) as long as possible as it is quite technical, but as you raised it again I’ll deal with it now.

Recall that in the HP “thickenings” are dealt with via theories. So #-generation really means that for each Gödel lengthening L_\alpha(V) of V, the theory in L_\alpha(V) which expresses that V is generated by a presharp which is \alpha-iterable is consistent. Another way to say this is that for each \alpha, there is an \alpha-iterable presharp which generates V in a forcing extension of L(V) in which \alpha is made countable. For ctm’s this translates to: A ctm M is (weakly) #-generated if for each countable \alpha, M is generated by an \alpha-iterable presharp. This is weaker than the cleaner, original form of #-generation.

With this change, one can run the LS argument and regard \textsf{IMH}^\# as a statement about ctm’s. In conclusion: You are right, we can’t apply LS to the raw version of \textsf{IMH}^\#, essentially because #-generation for a (real coding a) countable V is a \Sigma^1_3 property; but weak #-generation is \Pi^1_2 and this is the only change required.
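Before responding, let me record, mostly to keep my own reading straight, the quantifier shapes of the two notions as I understand them (a paraphrase, not your official definitions):

#-generated, for a real coding a ctm M: there is a presharp (N,U) such that (N,U) is \alpha-iterable for every countable \alpha and the iteration of (N,U) generates M (the \Sigma^1_3 form you mention).

weakly #-generated: for every countable \alpha there is a presharp (N,U) which is \alpha-iterable and whose iteration generates M (the \Pi^1_2 form).

So the weakening is exactly the interchange of the quantifiers over \alpha and over presharps.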

Just to be clear, you are now proposing that \textsf{IMH}^\# is:

M witnesses \textsf{IMH}^\# if

1) M is weakly #-generated.

2) If \phi holds in an outer model of M which is weakly #-generated then \phi holds in an inner model of M.

Here: a ctm K is weakly #-generated if for each countable ordinal \alpha, there is an \alpha-iterable (N,U) whose \text{Ord}^K-iterate gives K.

Is this correct?

Regards, Hugh

Re: Paper and slides on indefiniteness of CH

Dear Peter and Sy,

I would like to add a short comment about the move to \textsf{IMH}^\#. This concerns to what extent it can be formulated without consulting the hyper-universe in an essential way (which is the case for \textsf{IMH}, since \textsf{IMH} can be so formulated). This issue has been raised several times in this thread.

Here is the relevant theorem which I think sharpens the issues.

Theorem. Suppose \textsf{PD} holds. Let X be the set of all ctm’s M such that M satisfies \textsf{IMH}^\#. Then X is not \Sigma_2-definable (lightface) over the hyperuniverse.

Aside: X is always \Pi_2-definable modulo being #-generated, and being #-generated is \Sigma_2-definable. So X is always (\Sigma_2 \wedge \Pi_2)-definable. If one restricts to M of the form L_{\alpha}[t] for some real t, then X is \Pi_2-definable but still not \Sigma_2-definable.

So it would seem that internalizing \textsf{IMH}^\# to M via some kind of vertical extension, etc., might be problematic, or might lead to a refined version of \textsf{IMH}^\# which, like \textsf{IMH}, has strong anti-large-cardinal consequences.

I am not sure what if anything to make of this, but I thought I should point it out.

Regards,
Hugh

Re: Paper and slides on indefiniteness of CH

On Oct 31, 2014, at 12:20 PM, Sy David Friedman wrote:

Dear Hugh,

On Fri, 31 Oct 2014, W Hugh Woodin wrote:

Ok we keep going.

Why? I think I made my position clear enough: I stated a consistent Maximality Criterion and based on my proof (with co-authors) of its consistency I have the impression that this Criterion contradicts supercompacts (not just extendibles).

But why do you have that impression? That is what I am interested in. You have given no reason and at the same time there seem to be many reasons for you not to have that impression. Why not reveal what you know?

I also think that the Maximality Criterion I stated could be made much stronger, which I think is only possible if one denies the existence of supercompacts. (Just a conjecture, no theorem yet.)

Let the Strong HOD Hypothesis be: No successor of a singular strong limit of uncountable cofinality is \omega-strongly measurable in HOD.

(Recall: this is not known to consistently fail without appealing to something like Reinhardt cardinals. The restriction to uncountable cofinality is necessary because of the Axiom \textsf{I}0: Con(ZFC + \textsf{I}0) gives the consistency with ZFC that there is a singular strong limit cardinal whose successor is \omega-strongly measurable in HOD.)
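For comparison, the HOD Hypothesis itself, in the formulation I have been using (the published version should be consulted for the official statement), is: there is a proper class of regular cardinals \kappa which are not \omega-strongly measurable in HOD. Here \kappa is \omega-strongly measurable in HOD if there is \lambda < \kappa with (2^\lambda)^{HOD} < \kappa such that there is no partition \langle S_i : i < \lambda \rangle in HOD of \{\xi < \kappa : \mathrm{cof}(\xi) = \omega\} into sets which are each stationary in V.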

If the Strong HOD Hypothesis holds in V and if the Maximality Criterion holds in V, then there are no supercompact cardinals, in fact there are no cardinals \kappa which are \omega_1+\omega-extendible; i.e. no \kappa for which there is j:V_{\kappa+\omega_1+\omega} \to V_{j(\kappa +\omega_1+\omega)}.

If ZFC proves the HOD Hypothesis, it surely proves the Strong HOD Hypothesis.

First you erroneously thought that I wanted to reject PD and now you think I want to reject large cardinals! Hugh, please give me a chance here and don’t jump to quick conclusions; it will take time to understand Maximality well enough to see what large cardinal axioms it implies or tolerates.

I see you making speculations for which I do not yet see any other explanation. But fine, take all the time you want. I have no problem with agreeing that the HP is in a (mathematically) embryonic phase and that we have to wait before being able to have a substantive (mathematical) discussion about it.

There is something robust going on, please give the HP time to do its work. I simply want to take an unbiased look at Maximality Criteria, that’s all. Indeed I would be quite happy to see a convincing Maximality Criterion that implies the existence of supercompacts (or better, extendibles), but I don’t know of one.

But if the synthesis of maximality, in the sense of failure of the HOD Hypothesis, together with large cardinals, in the sense of there is an extendible cardinal, yields a greatly enhanced version of maximality, why is this not enough?

That is what I am trying to understand.

Regards.
Hugh

Re: Paper and slides on indefiniteness of CH

Dear Sy,

I guess I should respond to your question as well.

On Oct 31, 2014, at 3:30 AM, Sy David Friedman wrote:

My point is that Hugh considers large cardinal existence to be part of set-theoretic truth. Why?

Let me clarify my position, or at least that part of it which concerns my (frankly extreme) skepticism about your anti-large cardinal principles.

(I am assuming that LC axioms persist under small forcing, and that is all that is assumed in the discussion below.)

Suppose there is a proper class of Woodin cardinals. Suppose M is a ctm and M has an iteration strategy \mathcal I at its least Woodin cardinal such that \mathcal I is in L(A,\mathbb R) for some universally Baire set A.

Suppose some LC axiom holds in M above the least Woodin cardinal of M. Then in V, every V_{\alpha} has a vertical extension in which the LC axiom holds above \alpha.

The existence of such an M for the LC axiom is a natural form of consistency of the LC axiom (closely related to the consistency in \Omega-logic).

Thus for any LC axiom (such as extendible, etc.), it is compelling (modulo consistency) that every V_{\alpha} has a vertical extension in which the LC axiom holds above \alpha.

But then any claim that the LC axiom does not hold in V is, in general, an extraordinary claim in need of extraordinary evidence.

The maximality principles you have proposed do not (for me anyway) meet this standard.

Just to be clear: I am not saying that any LC axiom which is consistent in the sense described above must be true. I do not believe this (there are ad hoc LC axioms for which it is false).

I am just saying that the declaration “the LC axiom does not hold in V” in general requires extraordinary evidence, particularly in the case of LC axioms such as the LC axiom: there is an extendible cardinal.

Regards,
Hugh

Re: Paper and slides on indefiniteness of CH

Ok we keep going.

On Oct 31, 2014, at 3:30 AM, Sy David Friedman wrote:

Dear Pen,

With co-authors I established the consistency of the following Maximality Criterion. For each infinite cardinal \alpha, \alpha^+ of HOD is less than \alpha^+.

Both Hugh and I feel that this Criterion violates the existence of certain large cardinals. If that is confirmed, then I will (tentatively) conclude that Maximality contradicts the existence of large cardinals.

It seems that you believe the HOD Conjecture (i.e. that the HOD Hypothesis is a theorem of ZFC). But then HOD is close to V in a rather strong sense (just not in the sense of computing many successor cardinals correctly). This arguably undermines the whole foundation for your maximality principle (Maximality Criterion stated above). I guess you could respond that you only think that the HOD Hypothesis is a theorem of ZFC + extendible and not necessarily from just ZFC.

If the HOD Hypothesis is false in V and there is an extendible cardinal, then in some sense, V is as far as possible (modulo trivialities) from HOD. So in this situation the maximality principle you propose holds in the strongest possible form. This would actually seem to confirm extendible cardinals for you. Their presence transforms the failure of the HOD Hypothesis into an extreme failure of the closeness of V to HOD, optimizing your maximality principle. So in the synthesis of maximality, in the sense of the failure of the HOD Hypothesis, with large cardinals, in the sense of the existence of extendible cardinals, one gets the optimal version of your maximality principle.
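The background fact here is the HOD Dichotomy Theorem; as I recall the statement (up to details), it says that if \delta is an extendible cardinal then exactly one of the following holds: (1) for every singular cardinal \gamma > \delta, \gamma is singular in HOD and (\gamma^+)^{HOD} = \gamma^+; (2) every regular cardinal \gamma \geq \delta is \omega-strongly measurable in HOD. With an extendible cardinal, the failure of the HOD Hypothesis puts one in case (2), which is the sense of “as far as possible from HOD” intended above.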

The only obstruction is the HOD Conjecture. The only evidence I have for the HOD Conjecture is the Ultimate L scenario. What evidence do you have that compels you not to make what would seem to be a strongly motivated conjecture for you (that ZFC + extendible does not prove the HOD Hypothesis)?

I find your position rather mysterious. It is starting to look like your main motivation is simply to deny large cardinals.

Regards,
Hugh

Re: Paper and slides on indefiniteness of CH

Dear Sy,

I see no virtue in “back to square one” conjectures. In the HP the whole point is to put out maximality criteria and test them; it is foolish to make conjectures without doing the mathematics.

Of course you do not see any virtue in “back to square one” conjectures. Fine, we have different views (yet again).

Why should your programme be required to make “make or break” conjectures, and what is so attractive about that?

I find it quite interesting if philosophical considerations lead to specific “make or break” conjectures, especially if there is no obvious purely mathematical basis on which to make the conjecture. The HOD Conjecture is a good example. There is no purely mathematical reason (that I know of) to make that conjecture (that the HOD Hypothesis is provable from, say, ZFC + extendible). It is a prediction from the Ultimate L scenario (just as is the provability of the \Omega Conjecture).

Of course there is another reason for identifying such conjectures. They provide test questions for future progress. If one can refute from large cardinals that the \Omega Conjecture holds then one refutes the Ultimate L Conjecture and moreover shows that there is a failure of inner model theory based on sequences of extenders.

One more question at this point: Suppose that Jack had succeeded in proving in ZFC that 0^\# does not exist. Would you infer from this that V = L is true? On what grounds?

Not necessarily. But I certainly would no longer declare as evident that V is not L. The question of V versus L would, for me, reopen.

Your V = Ultimate-L programme (apologies if I misunderstand it) sounds very much like saying that Ultimate L is provably close to V so we might as well just take V = Ultimate-L to be true. If I haven’t misunderstood then I find this very dubious indeed. As Pen would say, axioms which restrict set-existence are never a good idea.

If the Ultimate L Conjecture is true (provable in ZFC + extendible) then V = Ultimate L becomes a serious possibility which (to me anyway) cannot just be dismissed as is now the possibility that V = L.

For me, the “validation” of V = Ultimate L will have to come from the insights V = Ultimate L gives for the hierarchy of large cardinals beyond supercompact. (These are the “other tests which will have to be passed”).

If that does not happen or if the genuine insights come from outer models of V = Ultimate L, or even from something entirely unrelated to Ultimate L, then for me the case for V = Ultimate L will weaken, possibly significantly.

On the other hand, if in the setting of V = Ultimate L, a whole new hierarchy of large cardinals is revealed, otherwise invisible, then things get interesting. Here it might be the Axiom \textsf{I}0, but in the context of V = Ultimate L, which could be key.

You will respond that this is sheer speculation without foundation or solid evidence. It is sheer speculation. We shall see about the evidence.

Maybe it is time to try once again to simply agree that we disagree and wait for future mathematical developments before continuing this debate.

Regards,
Hugh