Tag Archives: HOD conjecture

Re: Paper and slides on indefiniteness of CH

PS:

On Oct 28, 2014, at 4:37 PM, Sy David Friedman wrote:

So, if you fix the proof, you have proved the HOD Conjecture.

I’ll try not to let that scare me ;)

?? This seems like an odd comment. The HOD Conjecture is a prediction of the Ultimate L Conjecture. But there is no reason it could not have a proof which has nothing to do with the Ultimate L Conjecture.

But I’m also not surprised that there was a bug in my proof!

Why? Do you believe the HOD Conjecture? If so why?

Regards,
Hugh

Re: Paper and slides on indefiniteness of CH

Dear Sy,

On Oct 28, 2014, at 4:37 PM, Sy David Friedman wrote:

But I did answer your question: The Stability Predicate is the basis for my conjecture, not just some arbitrary predicate that makes V generic over HOD. In fact my conjecture looks stronger than the rigidity of (HOD,S), as the Stable Core (L[S],S) is smaller.

But why do you make this conjecture when it is refuted by the existence in ZF of a Super-Reinhardt cardinal? That is what I am trying to understand. Are you arguing that the nonexistence of a V-constructible embedding is the evidence? If that is the evidence, then there is a general conjecture one could make, but you are not making it. So why not?
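(For reference, a sketch of the standard ZF definition: \kappa is super Reinhardt if for every ordinal \lambda there is a nontrivial elementary embedding j: V \to V with \text{crit}(j) = \kappa and j(\kappa) > \lambda.)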

My point is that the non-rigidity of HOD is a natural extrapolation of ZFC large cardinals into a new realm of strength. I only reject it now because of the Ultimate L Conjecture and its implication of the HOD Conjecture. It would be interesting to have an independent line which argues for the non-rigidity of HOD. This is the only reason I ask.

But I still don’t have an answer to this question:

“What theory of truth do you have? I.e. what do you consider evidence for the truth of set theoretic statements?”

But I did answer your question by stating how I see things developing, what my conception of V would be, and the tests that need to be passed. You were not happy with the answer. I guess I have nothing else to add at this point since I am focused on a rather specific scenario.

Regards,
Hugh

Re: Paper and slides on indefiniteness of CH

Dear Sy,

It is a virtue of a program if it generates predictions which are subsequently verified. To the extent that these predictions are verified one obtains extrinsic evidence for the program. To the extent that these predictions are refuted one obtains extrinsic evidence for the problematic nature of the program. It need not be a prediction which would “seal the deal” in the one case and “set it back to square one” in the other (two rather extreme cases). But there should be predictions which would lend support in the one case and take away support in the other.

The programs for new axioms that I am familiar with have had this feature. Here are some examples:

(1) Definable Determinacy.

The descriptive set theorists made many predictions that were subsequently verified and taken as support for axioms of definable determinacy. To mention just a few: there was the prediction that \text{AD}^{L(\mathbb R)} would lift the structure theory of Borel sets of reals (provable in ZFC) to sets of reals in L(\mathbb R). This checked out. There was the prediction that \text{AD}^{L(\mathbb R)} followed from large cardinals. This checked out. The story here is long and impressive and I think that it provides us with a model of a strong case for new axioms. For the details of this story — which is, in my view, a case of prediction and verification and, more generally, a case that parallels what happens when one makes a case in physics — see the Stanford Encyclopedia of Philosophy entry “Large Cardinals and Determinacy”, Tony Martin’s paper “Evidence in Mathematics”, and Pen’s many writings on the topic.

(2) Forcing Axioms

These axioms are based on ideas of “maximality” in a rather special sense. The forcing axioms ranging from \textsf{MA} to \textsf{MM}^{++} are a generalization along one dimension (generalizations of the Baire Category Theorem, as nicely spelled out in Todorcevic’s recent book “Notes on Forcing Axioms”) and the axiom (*) is a generalization along a closely related dimension. As in the case of Definable Determinacy there has been a pretty clear program and a great deal of verification and convergence. And, at the current stage, advocates of forcing axioms are able to point to a conjecture which if proved would support their view and if refuted would raise a serious problem (though not necessarily setting it back to square one), namely, the conjecture that \textsf{MM}^{++} and (*) are compatible. That I take to be a virtue of the program. There are test cases. (See Magidor’s contribution to the EFI Project for more on this aspect of the program.)

(3) Ultimate L

Here we have lots of predictions which if proved would support the program and there are propositions which if proved would raise problems for the program. The most notable one is the “Ultimate L Conjecture”. But there are many other things. E.g., that conjecture implies that V = HOD. So, if the ideas of your recent letter work out, and your conjecture (combined with results of “Suitable Extender Models, I”) proves the HOD Conjecture, then this will lend some support to “V = Ultimate L”, in that “V = Ultimate L” predicts a proposition that was subsequently verified in ZFC.

It may be too much to ask that your program at this stage make such predictions. But I hope that it aspires to that. For if it does not then, as I mentioned earlier, one has the suspicion that it is infinitely revisable and “not even wrong”.

One additional worry is the vagueness of the idea of the “‘maximal’ iterative conception of set”. If there were a lot of convergence in what was being mined from this concept then one might think that it was clear after all. But I have not seen a lot of convergence. Moreover, while you first claimed to be getting “intrinsic justifications” (an epistemically secure sort of thing), now you are claiming to arrive only at “intrinsic heuristics” (a rather loose sort of thing). To be sure, a vague notion can lead to motivations that lead to a great deal of wonderful and intriguing mathematics. And this has clearly happened in your work. But to get more than interesting mathematical results — to make a case for new axioms — at some stage one will have to do more than generate suggestions — one will have to start producing propositions which if proved would support the program and if refuted would weaken the program.

I imagine you agree and that that is the position that you ultimately want to be in.

Best,
Peter

Re: Paper and slides on indefiniteness of CH

Dear Hugh,

The Stability Predicate S is the important thing. V is generic over the Stable Core = (L[S],S). As far as I know, V may not be generic over HOD; but it is generic over (HOD,S).

V is always a symmetric extension of HOD but maybe you have something else in mind.

Let A be a V-generic class of ordinals (so A codes V). Then A is (HOD, P)-generic for a class partial order P which is definable in V. So if T is the \Sigma_2-theory of the ordinals then P is definable in (HOD,T) and A is generic over (HOD,T).

Why are you stating a weaker result than mine? I show that for some A, (V,A) models ZFC and is generic over the Stable Core and hence over (HOD,S) where S is the Stability Predicate. The Stability Predicate is \Delta_2, not \Sigma_2. And a crucial point is that its only reference to truth in V is via the “stability relationships” between V_\alpha’s, a much more absolute property than truth which is much easier to analyse. As I said, the Stability Predicate is the important thing in my conjecture.
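Schematically, with the notation above: your result gives A generic over (\text{HOD},T), where T is the \Sigma_2-theory of the ordinals; mine gives some (V,A) \models \text{ZFC} generic over the Stable Core (L[S],S), and hence over (\text{HOD},S), where S is only \Delta_2.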

But you did not answer my question. Are you just really conjecturing that if V is generic over N then there is no nontrivial j:N \to N?

But I did answer your question: The Stability Predicate is the basis for my conjecture, not just some arbitrary predicate that makes V generic over HOD. In fact my conjecture looks stronger than the rigidity of (HOD,S), as the Stable Core (L[S],S) is smaller.

Let me phrase this more precisely.

Suppose that A is a V-generic class of ordinals, N is an inner model of V, P is a partial order which is amenable to N, and A is (N,P)-generic.

Are you conjecturing that there is no nontrivial j:N \to N? Or that there is no nontrivial j:(N,P) \to (N,P)? Or nothing along these general lines?

As I said: Nothing along those general lines.

I show that (in Morse-Kelley), the (enriched) Stable Core is rigid for “V-constructible” embeddings. That makes key use of the (enriched) Stability Predicate. I wouldn’t know how to handle a different predicate.

I would think that based on HP etc., you would actually conjecture that there is a nontrivial j:\text{HOD} \to \text{HOD}.

No. This is the “reality check” that Peter and I discussed. Maximality suggests that V is as far from HOD as possible, but we have to acknowledge what is not possible.

So maximality considerations have no predictive content. It is an idea which has to be continually revised in the face of new results.

Finally you are catching on! I have been trying to say this from the beginning, and both you and Peter were strangely trying to “pin me down” on what the “definitive consequences” or the “make-or-break predictions” of the HP are. It is a study of maximality criteria, with the aim of converging towards the optimal such criterion. How can you expect such a programme to make “definitive predictions” in short time? In recursion theory language, the process is \Delta_2 and not \Sigma_1 (changes of direction are permitted when necessary; witness the IMH being replaced by the \textsf{IMH}^\#). And set-theoretic practice is the big daddy: if you investigate a maximality criterion which ZFC proves inconsistent then you have to revise what you are doing (is “all regular cardinals inaccessible in HOD” consistent? I think so, but may be wrong).
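(To spell out the analogy, a sketch: a \Sigma_1 process is a monotone enumeration which never retracts its commitments, whereas by Shoenfield’s Limit Lemma a \Delta_2 predicate A admits a computable approximation g with A(n) = \lim_s g(n,s), so guesses may be revised finitely often before stabilising. The HP is meant to be revisable in exactly this sense.)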

Yet you propose to deduce the nonexistence of large cardinals at some level based on maximality considerations. I would do the reverse, revise maximality.

If the goal is to understand maximality then that would be cheating! You may have extrinsic reasons for wanting LCs as opposed to LCs in inner models (important note: for Reinhardt cardinals that would be the only option anyway!) but those reasons have no role in an analysis of maximality of V in height and width.

I guess this is yet another point we just disagree on.

But I still don’t have an answer to this question: “What theory of truth do you have? I.e. what do you consider evidence for the truth of set-theoretic statements?”

Have you read Pen’s “Defending the Axioms”, and if so, does her Thin Realist describe your views? And if so, do you have an argument that LC existence is necessary for “good set theory”?

PS: With embarrassment and apologies to the group, I have to report that I found a bug in my argument that maximality kills supercompacts. I’ll try to fix it and let you know what happens. I am very sorry for the premature claim.

Suppose that there is an extendible and that the HOD Conjecture fails. Then:

1) Every regular cardinal above the least extendible cardinal is measurable in HOD (so HOD computes no successors correctly above the least extendible cardinal).

2) Suppose \gamma is an inaccessible cardinal which is a limit of extendible cardinals. Then there is a club C \subset \gamma such that every \kappa \in C is a regular cardinal in \text{HOD} (and hence inaccessible in HOD).

So, if you fix the proof, you have proved the HOD Conjecture.

I’ll try not to let that scare me ;)

But I’m also not surprised that there was a bug in my proof!

Thanks,
Sy

Re: Paper and slides on indefiniteness of CH

On Oct 27, 2014, at 11:00 PM, Sy David Friedman wrote:

The Stability Predicate S is the important thing. V is generic over the Stable Core = (L[S],S). As far as I know, V may not be generic over HOD; but it is generic over (HOD,S).

V is always a symmetric extension of HOD but maybe you have something else in mind.

Let A be a V-generic class of ordinals (so A codes V). Then A is (HOD, P)-generic for a class partial order P which is definable in V.  So if T is the \Sigma_2-theory of the ordinals then P is definable in (HOD,T) and A is generic over (HOD,T).

But you did not answer my question. Are you just really conjecturing that if V is generic over N then there is no nontrivial j:N \to N? Let me phrase this more precisely.

Suppose that A is a V-generic class of ordinals, N is an inner model of V, P is a partial order which is amenable to N, and A is (N,P)-generic.

Are you conjecturing that there is no nontrivial j:N \to N? Or that there is no nontrivial j:(N,P) \to (N,P)? Or nothing along these general lines?

I would think that based on HP etc., you would actually conjecture that  there _is_ a nontrivial j:HOD \to HOD.

No. This is the “reality check” that Peter and I discussed. Maximality suggests that V is as far from HOD as possible, but we have to acknowledge what is not possible.

So maximality considerations have no predictive content. It is an idea which has to be continually revised in the face of new results.

Yet you propose to deduce the nonexistence of large cardinals at some level based on maximality considerations. I would do the reverse, revise maximality.

I guess this is yet another point we just disagree on.

PS: With embarrassment and apologies to the group, I have to report that I found a bug in my argument that maximality kills supercompacts. I’ll try to fix it and let you know what happens. I am very sorry for the premature claim.

Suppose that there is an extendible and that the HOD Conjecture fails. Then:

1) Every regular cardinal above the least extendible cardinal is measurable in HOD (so HOD computes no successors correctly above the least extendible cardinal).

2) Suppose  \gamma is an inaccessible cardinal which is a limit of extendible cardinals.  Then there is a club C \subset \gamma such that every \kappa \in C is a regular cardinal in HOD (and hence inaccessible in HOD).

So, if you fix the proof, you have proved the HOD Conjecture. (Roughly: by 1) and 2), an extendible cardinal together with the failure of the HOD Conjecture yields exactly the kind of divergence of HOD from V that your maximality criterion asserts; since extendible cardinals are supercompact, a proof that maximality kills supercompacts would convert this into a proof of the HOD Conjecture.)

Regards, Hugh

Re: Paper and slides on indefiniteness of CH

pre-PS: Thanks Sol for correcting my spelling. My problem with German has plagued me my entire academic life.

Dear Sy,

I think we are getting to the point where we are simply talking past one another. Also the nesting of messages is making this thread somewhat difficult to follow (perhaps a line break issue or a platform issue).

You have made an important point for me: a rich structure theory together with Goedelian ‘success’ is insufficient to convince number-theorists that ERH is true, and by analogy these criteria should not suffice to convince set-theorists that PD is true.

Unless there is something fundamentally different about LC, which there is.

Many (well, at least 2) set theorists are convinced that PD is true. The issue is why you think Con PD is true. You have yet to give any coherent reason for this. You responded:

The only ‘context’ needed for Con PD is the empirical calibration provided by a strict ‘hierarchy’ of consistency strengths. That makes no assumptions about PD.

Such a position is rather dubious to me. The consistency hierarchy is credible evidence for the consistency of LC only in the context of large cardinals as potentially true axioms. Remove that context (as IMH and its variants all do), and why is the hierarchy evidence for anything?

Aside: Suppose an Oracle informs us that RH is equivalent to Con PD. Then I would say RH is true (and it seems you would agree here). But suppose that the Oracle informs us that RH is equivalent to Con Reinhardt cardinal. Then I actually would conjecture that RH is false. But by your criteria of evidence you would seem to argue RH is true.

Next point, within the discussion on strong rank maximality. I wrote:

Question 1: How can we even be sure that there is no pairwise incompatibility here which argues against the very concept of the \Pi_2 consequences of strong rank maximality?

and you responded:

Let T be the set of first-order sentences which hold in all strongly maximal universes. These are precisely the first-order consequences of ‘strong maximality’. Then any two sentences in T are compatible with each other.
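In symbols, a sketch: T = \{\varphi \text{ first-order}: M \models \varphi \text{ for every strongly maximal } M\}, and since every member of T holds in every strongly maximal universe, so does any finite conjunction of members of T.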

I realized after sending the message I should have elaborated on what I had in mind on the incompatibility issue and so I will do so here. I imagine many of the followers of this thread (if any are left) will want to skip this.

Incompatibility


Let me explain the sort of incompatibility I am concerned with.

Suppose M is strongly rank maximal. One might have a \Pi_2-sentence \phi_1 certified by a rank-preserving extension of M with X-cardinals and a \Pi_2-sentence \phi_2 certified by a rank-preserving extension with Y-cardinals.

What if X-cardinals and Y-cardinals are mutually incompatible, or worse, the existence of X-cardinals implies that \phi_2 cannot hold (or vice versa)? Then how could \phi_1 \wedge \phi_2 be certified? If the certifiable \Pi_2-sentences are not closed under finite conjunction then there is a problem.
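In symbols, the worry is a failure of closure under conjunction: if every rank-preserving extension with X-cardinals satisfies \neg\phi_2, then no extension with X-cardinals can certify \phi_1 \wedge \phi_2, and a witness of some other kind is needed.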

Let N_X be a rank-preserving extension of M with a proper class of X-cardinals which certifies \phi_1. Let’s call this a good witness if \phi_1 holds in all the set-generic extensions of N_X, and all the \Pi_2-sentences which hold in all the set-generic extensions of N_X are deemed certified by N_X (this is arguably reasonable given the persistence of large cardinals under small forcing).

Similarly let’s suppose that N_Y is a rank-preserving extension of M with a proper class of Y-cardinals which certifies \phi_2 and is a good witness.

Assuming the \Omega Conjecture is provable (and recall our base theory is ZFC + a proper class of Woodin cardinals), one of the following must hold:

  1. \phi_1 \wedge \phi_2 holds in all the set-generic extensions of N_X (and so N_X certifies \phi_1\wedge \phi_2).
  2. \phi_1 \wedge \phi_2 holds in all the set-generic extensions of N_Y (and so N_Y certifies \phi_1\wedge \phi_2).
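In particular, granting good witnesses, the certified \Pi_2-sentences are closed under finite conjunction: one of the two good witnesses already certifies \phi_1 \wedge \phi_2, and iterating the argument handles any finite conjunction of certified sentences.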

To me this is a remarkable fact. I see no way to prove it at this level of generality without the \Omega Conjecture.

You wrote:

You have it backwards. The point of beginning with arbitrary extensions is precisely to avoid any kind of bias. If one deliberately chooses a more restricted notion of extension to anticipate a later desire to have large cardinals then this is a very strong and in my view unacceptable bias towards the existence of large cardinals.

I completely disagree. Having more models obscures truth; that is my whole point.

Moving on, I want to return to the inner model issue and illustrate an even deeper sense (beyond correctness issues) in which the Inner Model Program is not just about inner models.

Consider the following variation of the inner model program. This is simply the definable version of your “internal consistency” question, which you have explored quite a bit.

Question: Suppose that there exists a proper class of X-cardinals. Must there exist an inner model N with a proper class of X-cardinals such that N \subseteq \text{HOD}?

(Of course, if one allows more than the existence of a proper class of X-cardinals then there is a trivial solution, so here it is important that one is only allowed to use the given large cardinals.)

For “small” large cardinals, even at the level of Woodin cardinals, I know of no positive solution that does not use fine-structure theory.

Define a cardinal \delta to be n-hyper-extendible if \delta is extendible relative to the \Sigma_n-truth predicate.
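Unwinding this (a sketch of one natural reading): writing T_n for the \Sigma_n-truth predicate, \delta is n-hyper-extendible if for every \lambda > \delta there is an elementary j: (V_\lambda, T_n \cap V_\lambda) \to (V_\mu, T_n \cap V_\mu) with \text{crit}(j) = \delta and j(\delta) > \lambda.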

Theorem: Suppose that the HOD Conjecture is true. Suppose that for each n, there is an n-hyper-extendible cardinal. Then for each n there is an n-hyper-extendible cardinal in HOD (this is a scheme of course).

The HOD Conjecture could have an elementary proof (if there is an extendible cardinal).  This does not solve the inner model problem for hyper-extendible cardinals or even shed any light on the inner model problem.

Finally you wrote:

The HP focuses on truth, not on consistency. It seems that the next generation of axioms will not be of the large cardinal variety (consistency appears to be already exhausted; I think it likely that Reinhardt cardinals are inconsistent even without AC) but concern new and subtle forms of absoluteness / powerset maximality.

I agree on Reinhardt cardinals. But I obviously disagree on the route to new hierarchies. Certainly HP has yet to indicate any promise of being able to reach new levels of consistency strength, since even reaching the level of “ZFC + infinitely many Woodin cardinals” looks like a serious challenge for HP. It would be interesting to even see a conjecture along these lines.

Perhaps the most pressing challenge is to justify large cardinal existence as a consequence of well-justified criteria for the selection of preferred universes. This requires a new idea. Some have suggested structural reflection, but I don’t find this convincing due to the arbitrariness in the relationship between V and its reflected versions.

I am not asking how HP could justify the existence of large cardinals. I am simply asking how HP is ever going to even argue for the consistency of just PD (which you have already declared a “truth”). If HP cannot do this then how is it ever credibly going to make progress on the issue of truth in set theory?

However one conceives of truth in set theory, one must have answers to:

  1. Is PD true?
  2. Is PD consistent?

You have examples of how HP could lead to answering the first question.  But no examples of how HP could ever answer the second question.  Establishing Con LC for levels past PD looks even more problematic.

There is strong meta-mathematical evidence that the only way to ultimately answer 2. with “yes” is to answer 1. with “yes”.  This takes us back to my basic confusion about the basis for your conviction in Con PD.

The fundamental technology (core-model methods) which is used in establishing the “robustness” of the consistency hierarchy you cite as evidence shows that whenever “ZFC + infinitely many Woodin cardinals” is established as a lower bound for some proposition (such as PFA, the failure of square at singular strong limits, etc.), that proposition implies PD. For these results (PFA, \square, etc.) there are no other lower bound proofs known. There is a higher level consistency hierarchy (which is completely obscured by your more-is-better approach to the hyper-universe).

You also cite strictness of the hierarchy as an essential component of the evidence, which you must in light of the ERH example, and so the lower bound results are key in your view. Yet as indicated above, for the vast majority (if not all) of these lower bound results, once one is past the level of Con PD, one is actually inferring PD. It seems to me that by your very own criteria, this is a far stronger argument for PD than HP is ever going to produce for the negation of PD.

All those comments aside, we have an essential disagreement at the very outset. I insist that any solution to CH must be in the context of strong rank maximality (and assuming the provability of the \Omega Conjecture this becomes a perfectly precise criterion). You insist that this is too limited in scope and that we should search outside this “box”.

I agree that there are interesting models outside this box. But I strongly disagree that V is one of them.

Regards,
Hugh