Re: Paper and slides on indefiniteness of CH

Dear Sy,

Thanks so much for your patient responses to my elementary questions!  I now see that I was viewing those passages in your BSL paper through the wrong lens, but rather than detailing the sources of my previous errors, I hope you’ll forgive me in advance for making some new ones.  As I now (mis?)understand your picture, it goes roughly like this…

We reject any ‘external’ truth to which we must be faithful, but we also deny that the resulting ‘true-in-V’ arises strictly out of the practice (as my Arealist would have it).  One key is that ‘true-in-V’ is answerable, not to a realist ontology or some sort of ‘truth value realism’, but to various intrinsic considerations.  The other key is that it’s also answerable to a certain restricted portion of the practice, the de facto set-theoretic claims.  These are the ones that ‘due to the role that they play in the practice of set theory and, more generally, of mathematics, should not be contradicted by any further candidate for a set-theoretic statement that may be regarded as ultimate and unrevisable’ (p. 80).  (Is it really essential that these statements be ‘ultimate and unrevisable’?  Isn’t it enough that they’re the ones we accept for now, reserving the right to adjust our thinking as we learn more?)  These include ZFC and the consistency of LCs.

The intrinsic constraints aren’t limited to items that are ‘implicit in the concept of set’.  They also include items ‘implicit in the concept of a set-theoretic universe’.  (This sounds reminiscent of Tony’s reading in ‘Gödel’s conceptual realism’.  Do you find this congenial?)  One of the items present in the latter concept is a notion of maximality.  The new intrinsic considerations arise at this point, when we begin to consider, not just V, but a range of different ‘pictures of V’ and their interrelations in the hyperuniverse.  When we do this, we come to see that the vague principle of maximality derived from the concept of a set-theoretic universe can be made more precise — hence the schema of Logical Maximality and its various instances.

At this point, we have the de facto part of practice and various maximality principles (and more, but let’s stick with this example for now).  If the principles conflict with the de facto part, they’re rejected.  The survivors are then further tested by their ability to settle independent questions.

Is this at least a bit closer to the story you want to tell?

All best,
Pen

Re: Paper and slides on indefiniteness of CH

Dear Penny,

Many thanks for your insightful comments. Please see my responses below.

On Tue, 5 Aug 2014, Penelope Maddy wrote:

Thank you for the plug, Sol.  Sy says some interesting things in his BSL paper about ‘true in V':  it doesn’t ‘reflect an ontological state of affairs concerning the universe of all sets as a reality to which existence can be ascribed independently of set-theoretic practice’, but rather ‘a façon de parler that only conveys information about set-theorists’ epistemic attitudes, as a description of the status that certain statements have or are expected to have in set-theorist’s eyes’ (p. 80). There is ‘no “external” constraint … to which one must be faithful’, only ‘justifiable procedures’ (p. 80); V is ‘a product of our own, progressively developing along with the advances of set theory’ (p. 93).  This sounds more or less congenial to my Arealist (a non-platonist):   in the course of doing set theory, when we adopt an axiom or prove a theorem from axioms we accept, we say it’s ‘true in V’, and the Arealist will say this along with the realist; the philosophical debate is about what we say when we’re describing set-theoretic activity itself, and here the Arealist denies (and the realist asserts) that it’s out to discover the truth about some objectively existing abstracta.  (By the way, I don’t think ‘truth-value realism’ is the way to go here.  In its usual form, it avoids abstract entities, but there remains an external fact-of-the-matter quite independent of the practice to which we’re supposed to be faithful.)

My apologies here. In my reply to Sol I only made reference to truth-value realism for the purpose of illustrating that one can ascribe meaning to set-theoretic truth without being a platonist. Indeed my view of truth is very far from the truth-value realist, it is entirely epistemic in nature.

Unfortunately the rest of my story of the Arealist as it stands won’t be much help because the non-platonistic grounds given there in favor of embracing various set-theoretic methods or principles are fundamentally extrinsic and Sy is out to find a new kind of intrinsic support.

Yes. I am trying to make the case that there are unexplored intrinsic sources of evidence in set theory. Some have argued that we must rely solely on extrinsic sources, evidence emanating directly from current set-theoretic practice, because intrinsic evidence cannot take us past what is derivable from the maximal iterative conception. I do agree that this conception can lead us no further than reflection principles compatible with V = L.

But in fact my intuition goes further and suggests that no intrinsic first-order property of the universe of sets will enable us to resolve problems like CH. We have to examine features of the universe of sets that are only revealed by comparing it to other possible universes (goodbye Platonism) and infer first-order properties from these “higher-order” intrinsic features of V (a name for the epistemically-conceived universe of sets).

Obviously a direct comparison of V with other universes is not possible (V contains all sets) so we must instead content ourselves with the comparison of pictures of V. These pictures are perfectly provided by the hyperuniverse (also conceived of non-platonistically). And by Löwenheim-Skolem we lose none of the first-order features of V when we model it within the hyperuniverse.
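(To spell out the Löwenheim-Skolem transfer I am appealing to — and this is only a sketch, since treating V as a first-order structure (V, \in) requires resources beyond first-order ZFC, such as a satisfaction class: for any first-order sentence \varphi,

V \models \varphi \implies \text{there is a countable transitive } v \text{ in the hyperuniverse with } v \models \mathrm{ZFC} + \varphi.

One takes a countable elementary submodel of (V, \in) and forms its transitive collapse; the collapse is well-founded because it is isomorphic to a submodel of V, and it is elementarily equivalent to V. In this sense no first-order feature of V is lost in passing to its countable pictures.)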

Now consider the effect that this has on the principle of maximality. Whereas the maximal iterative concept allows us to talk about generating sets inside V by iterating powerset “as long as possible”, the hyperuniverse allows us to express the maximality of (a picture of) V in a more powerful way: maximal means “as large as possible in comparison to other universes” and the hyperuniverse gives a precise meaning to this by providing those “other universes”. Maximality is no longer just an internal matter regarding the existence of sets within V, but is also an external matter regarding the largeness of the universe of sets as a whole in comparison to other universes. Thus the move from the concept of set to the concept of set-theoretic universe.

Now comes a crucial point. I assert that maximality is an intrinsic feature of the universe of sets. What I can certainly say is that there is a rich discussion of maximality in the philosophy of set theory literature, with some strong advocates of the principle, including Gödel, Scott and yourself (correct me if I am wrong).

Maximality is not the only philosophical principle regarding the set-theoretic universe that drives the HP but surely it is currently the most important one. Another is omniscience (the definability in V of truth across universes external to V). Maybe there will be more.

I’m probably insufficiently attentive, or just plain dim, but I confess to being confused about how this new intrinsic evidence is intended to work.   It isn’t a matter of being part of the concept of set, nor is it given by the clear light of mathematical intuition.  It does involve, quoting from Gödel, ‘a more profound understanding of basic concepts underlying logic and mathematics’, and in particular, in Sy’s words, ‘a logical-mathematical analysis of the hyperuniverse’ (p. 79).  Is it just a matter of switching from the concept of set to the concept of the hyperuniverse?  (My guess is no.)  Our examination of the hyperuniverse is supposed to ‘evoke’ (p. 79) certain general principles (the principles are ‘based on’ general features of the hyperuniverse (p. 87)), which will in turn ‘suggest’ (pp. 79, 87) criteria for singling out the preferred universes — and the items ultimately supported by these considerations are the first-order statements true in all preferred universes. One such general principle is maximality, but I’d like to understand better how it arises intrinsically out of our contemplation of the hyperuniverse (at the top of p. 88).  On p. 93, the principle (or its more specific versions) is said to be ‘the rigorous expression of what it means for an element of the hyperuniverse, i.e., a countable transitive model of ZFC, to display “maximal properties”‘.  Does this mean that maximality for the hyperuniverse derives from a prior principle of maximality inherent in the concept of set?

You ask poignant questions; I hope that what I say above is persuasive!

Many thanks for your interest, and very best wishes,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Hugh,

On Sun, 3 Aug 2014, W Hugh Woodin wrote:

I really do not understand the basis for your conviction in the consistency of PD (or AD or ZFC + \omega many Woodin cardinals).

This results from the standard empirical fact that the consistency of a huge variety of statements in set theory is shown assuming the consistency of large cardinal axioms and there are even “lower bound results” calibrating the strength of the large cardinal axioms required, even at the level of a supercompact. For example, \textsf{PFA}(\mathfrak{c}^+\text{-linked}) is consistent relative to a degree of supercompactness yet there are models with slightly less large cardinal strength (subcompact cardinals) with the property that \textsf{PFA}(\mathfrak{c}^+\text{-linked}) fails in all of their proper (not necessarily generic) extensions. This is strong evidence that the consistency strength of \textsf{PFA}(\mathfrak{c}^+\text{-linked}) is at the level of a degree of supercompactness. Thus large cardinals provide a “hierarchy” of consistency strengths whose strictness is witnessed by numerous statements of set-theoretic interest. I see this as sufficient justification for the consistency of large cardinals; we don’t need inner model theory or some structure theory which follows from their existence to know that they are consistent.

Consider the Extended Riemann Hypothesis (ERH). ERH passes all the usual tests cited for PD (or AD or ZFC + \omega many Woodin cardinals) as the basis of its consistency. Tremendous structure theory, implications of theorems which are later proved by other means etc.

Those tests are used (not by me but by a number of set-theorists) in favour of the truth of PD, not just of its consistency! The consistency of PD only needs the consistency of large cardinals, as justified above, and none of these tests.

Yet there does not seem to be any conviction in the Number Theory community that even the Riemann Hypothesis is true (and of course RH follows from the consistency of ERH).

You have made an important point for me: a rich structure theory together with Gödelian “success” is insufficient to convince number-theorists that ERH is true, and by analogy these criteria should not suffice to convince set-theorists that PD is true.

Your analogy is problematic as there is no clear distinction between consistency and truth with regard to the ERH; this is because there is only one natural model of first-order arithmetic. But there is a clear distinction between consistency and truth for PD as there are many natural models of second-order arithmetic.

Look at the statement on the rules of the Millennium Prizes. A counterexample to RH is not unconditionally accepted as a solution. If there was any consensus that RH is true this escape clause would not be in the stated rules. Further the structure theory you cite as evidence for Con PD is in the context of PD etc. If one rejects that context then how can one maintain the conviction that Con PD is true?

The only “context” needed for Con PD is the empirical calibration provided by a strict “hierarchy” of consistency strengths. That makes no assumptions about PD.

Rephrased: The fact that the models of T (if any exist) have a rich internal theory is not evidence that there are any models of T. Something else is needed.

The consistency of large cardinals does not require any “rich internal theory”. In contrast, the truth of large large cardinal axioms is much harder to justify, indeed I have yet to see a convincing argument for that.

I completely agree this is the basic issue over which we disagree. The position that all extensions, class or set generic, are on an equal footing is at the outset already a bias against large cardinals.

You have it backwards. The point of beginning with arbitrary extensions is precisely to avoid any kind of bias. If one deliberately chooses a more restricted notion of extension to anticipate a later desire to have large cardinals then this is a very strong and in my view unacceptable bias towards the existence of large cardinals.

The HP is open to the conclusion that large large cardinals exist, but to achieve this one needs a philosophically well-justified and unbiased criterion for the choice of preferred universes. There may be such a criterion but so far the only ones I have come up with either contradict or tolerate large large cardinal existence; none of them succeed in proving large large cardinal existence.

The canonical objects identified by large cardinals, such as the generalizations of 0^\#, can disappear (i.e. cease to be recognized as such) if one passes to a class forcing extension.

Yes, but once again if you set things up to protect desired generalisations of 0^\# then you are showing a bias towards those generalisations. This is in my view unacceptable without justifying that way of setting things up with a philosophically well-justified and unbiased criterion for the choice of preferred universes.

In summary: The basic problem with what you are saying is that you are letting set-theoretic practice dictate the investigation of set-theoretic truth!

It would be ideal if the axioms and objects that you want to see in set theory would arise as a consequence of a philosophically well-justified approach to truth, but so far I don’t see how to do that (things are pointing in a different direction). I present this to you as an interesting challenge. It should be clear by now that the derivation of large large cardinal axioms from maximality criteria is very problematic (but perhaps not impossible).

Rephrased: The claim that an inner model is just a proper class is a bias against large cardinals. Once one passes the level of one Woodin cardinal the existence of proper class inner models becomes analogous to the existence of transitive set models in the context of just ZFC. It has no real structural implications for V particularly in the context of for example IMH (which are not already implied by the existence of an inner model of just 1 Woodin cardinal). This fact is not irrelevant to HP since it lies at the core of the consistency proof of IMH.

See the above. The HP is not biased against large cardinals. Rather, imposing large cardinals at the start of any investigation of truth is where the bias lies.

Let me explain further and also clarify the relationship between \Omega-logic and set forcing. For this discussion and to simplify things grant that the \Omega Conjecture is provable and that the base theory is now ZFC + a proper class of Woodin cardinals.

OK, for the sake of discussion let us now relativise everything to large cardinal axioms. I.e., the base theory is now ZFC + Arbitrary large cardinals. This is not justified but surely worth investigating.

To a set theorist, a natural variation of ordinal maximality, let’s call this strong rank maximality, is that there are rank preserving extensions of M in which large cardinals exist above the ordinals of M (and here one wants to include all ‘possible large cardinals’ whatever that means).

Please see my paper with Honzik about strong reflection. We capture ordinal maximality with the property of “\#-generation”. Roughly speaking, V is \#-generated if it arises from a “sharp” in the same way that L arises from 0^\#. This is in my view the strongest possible form of ordinal maximality. (Aside: It also allows one to argue that reflection is perfectly compatible with a potentialist view of V; no actualism is required.)
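(A rough sketch of the definition, from memory and suppressing details — please consult the paper for the precise formulation: a “sharp” is an iterable pair N = (N, U) where U is a measure on the largest cardinal \kappa of N; iterating it Ord-many times yields models N_i with critical points \kappa_i, and V is \#-generated if

V = \bigcup_{i \in \mathrm{Ord}} (V_{\kappa_i})^{N_i},

i.e. V is the union of the “lower parts” of the iterates, exactly as L is the union of the lower parts of the iterates of 0^\#.)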

Then the context you want is \#-generation plus large cardinals. These are your preferred universes. (Note that large cardinals in V automatically generate large cardinals past the ordinals of V via \#-generation.) Below I will use “strong maximality” to mean \#-generated plus large cardinals.

Question 1:  How can we even be sure that there is no pairwise incompatibility here which argues against the very concept of the \Pi_2 consequences of strong rank maximality?

Let T be the set of first-order sentences which hold in all strongly maximal universes. These are precisely the first-order consequences of “strong maximality”. Then any two sentences in T are compatible with each other.
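(To spell this out, assuming only that at least one strongly maximal universe M_0 exists: with

T = \{ \varphi : N \models \varphi \text{ for every strongly maximal universe } N \},

any \varphi, \psi \in T satisfy M_0 \models \varphi \wedge \psi, so any two members of T are jointly satisfiable and in particular compatible. At the level of first-order consequences there is no pairwise incompatibility to worry about.)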

Maybe you meant to ask if any two strongly maximal universes satisfy the same \Pi_2 sentences? The answer is of course “no” if strongly maximal universes can differ about what large cardinals exist. This will surely be the case unless one requires that each strongly maximal universe satisfies “all large cardinal axioms”. How do you formulate that exactly?

Question 2:  If one can make sense of the \Pi_2 consequences of strong rank maximality and given that M is strongly rank maximal, can the \Pi_2 consequences of this for M be defined in M?

M. Stanley showed that there are countable transitive models M of ZFC with the property that the set of first-order sentences with parameters from M which hold in an arbitrary outer model of M is M-definable. (Note that this is immediate for any M if one considers only set-generic outer models of M!) This can also be done for the theory ZFC + Large cardinals (fixing a notion of large cardinal axiom). I expect that this can also be done for strong maximality (and indeed holds for all strongly maximal universes) and therefore the answer to your question is “yes”.
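(For the parenthetical set-generic case the point is simply the definability of the forcing relation: the set

\{ \varphi : \mathbf{1}_P \Vdash_P \varphi \text{ for every poset } P \in M \}

is definable in M, and it consists exactly of the sentences true in all set-generic outer models of M. Stanley’s theorem is the non-trivial analogue when arbitrary outer models are allowed.)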

Here is the first point. If there is a proper class of X-cardinals  (and accepting also that an X-cardinal is preserved under forcing by partial orders of size less than the X-cardinal), then in every set-generic extension there is a proper class of X-cardinals and so in every set-generic extension, the sentence  \phi holds where \phi = “Every set A belongs to a set model with an X-cardinal above A.” \phi is a \Pi_2-sentence and therefore by the \Omega Conjecture this \Pi_2-sentence is \Omega provable.

So we know something about what happens in set-generic extensions. But this tells us nothing about what happens in more general extensions, even those which satisfy \phi.
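(For definiteness, the sentence \phi above can be rendered — this is my paraphrase, not a quotation — as the \Pi_2 statement

\forall A\ \exists M\ (M \text{ is transitive} \wedge A \in M \wedge M \models \text{“there is an } X\text{-cardinal above the rank of } A\text{”}),

a universal quantifier followed by an existential quantifier over sets, with a matrix that is \Delta_1 since satisfaction in set-sized structures is \Delta_1.)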

Further these are arguably exactly the \Pi_2 sentences which generate the \Pi_2 consequences of strong rank maximality.

Why? You want to claim that if a \Pi_2 sentence \psi holds in all strongly maximal universes then it follows from some \phi as above. Equivalently, if a \Sigma_2 sentence is compatible with all large cardinal axioms then it holds in some strongly maximal universe. But again we have a problem with the concept of “all large cardinal axioms”. If this really could be formulated then we have the problem sentence “For some large cardinal axiom \phi there is no transitive model of \phi“. This is \Sigma_2, compatible with all large cardinal axioms but false in all strongly maximal universes. So you are forced to fix a bound on the large cardinal axioms you consider, but then you lose consequences of strong maximality.

It seems better to drop set-genericity and consider the IMH for strongly maximal universes. Honzik and I verified the consistency of the IMH for ordinal maximal (= \#-generated) universes using a variant of Jensen coding and an interesting challenge is to do this for strongly maximal universes (i.e. add the large cardinals). As you know the mathematics is now quite hard as so far we lack the inner model theory and Jensen coding theorems that are available for the smaller large cardinals.

Here is the second point. If M_1 is a rank initial segment of M_2 then every sentence which is \Omega-provable in M_2 is \Omega-provable in M_1. \Omega proofs have a notion of (ordinal) length and in the ordering of the \Omega-provable sentences by proofs of shortest  length, the sentences which are \Omega-provable in M_2 are an initial segment of the sentences which are \Omega-provable in M_1 (and they could be the same of course).

Putting everything together, the \Pi_2-consequences of the strong rank maximality of a given model M makes perfect sense (no pairwise incompatibility issues) and this set of \Pi_2-sentences is actually definable in M.

This connection with \Omega-logic naturally allows one to adapt strong rank maximality into the HP framework, one restricts to extensions in which the \Omega-proofs of the initial model are not de-certified in the extension (for example if a \Pi_2 sentence is \Omega-provable in the initial model M, it is \Omega-provable in the extension).

This includes set-forcing extensions but also many other extensions. So in this sense \Omega-logic is not just about set-forcing. \Omega-logic is about trying to clarify (or even make sense of) the \Pi_2 consequences of large cardinals (and how is this possibly not relevant to a discussion of truth in set theory?).

As I said above, I fail to see how you reach the conclusion that the \Pi_2 consequences of large cardinals are those generated by your sentences \phi. In any case your conclusion is that the \Pi_2 consequences of large cardinals are just the local versions of the very same large cardinal axioms. How does this constitute a clarification?

My concern with HP is this. I do not see a scenario in which HP even with strong rank maximality can lead anywhere on the fundamental questions involving the large cardinal hierarchy.  The reason is that strong rank maximality considerations will force one to restrict to the case that PD holds in V at which point strong rank maximality notions require consulting at the very least the \Omega-logic of V and this is not definable within the hyper-universe of V.

I don’t know what fundamental questions involving the large cardinal hierarchy you refer to. Consider IMH (Strong maximality). A consistency proof for this would be very revealing. Of course any model of this will satisfy PD but this does not mean that PD holds in V. As I said it is likely that the \Gamma-logic of any such universe will be definable in that universe, where \Gamma-logic is the (in my view more natural and better motivated) version of \Omega-logic which refers to arbitrary outer models and not just to set-generic ones.

Granting this, genuine progress on CH is even less plausible since how could that solution ever be certified in the context of strong rank maximality? A solution to CH which is not compatible with strong rank maximality is not a solution to CH since it is refuted by large cardinals.

No. Consider \textsf{SIMH}^\# (the IMH with absolute parameters for ordinal maximal = \#-generated universes). I conjecture that this is consistent. Just as the \textsf{IMH}^\# is consistent with strong maximality, I expect the same for the \textsf{SIMH}^\#. And the \textsf{SIMH}^\# implies that the continuum is very large.

You will disagree and perhaps that is the conclusion of this discussion, we simply disagree.

But here is a challenge for HP and this does not presuppose any conception or application of a notion of strong rank maximality.

Identify a new family of axioms of strong infinity beyond those which have been identified to date (a next generation of large cardinal axioms) or failing this, generate some new insight into the hierarchy of large cardinal axioms we already have. For example, HP does not discriminate against the consistency of a Reinhardt cardinal. Can HP make a prediction here? If so what is that prediction?

This does sound interesting but I confess that I don’t quite understand it. The HP focuses on truth, not on consistency. It seems that the next generation of axioms will not be of the large cardinal variety (consistency appears to be already exhausted; I think it likely that Reinhardt cardinals are inconsistent even without AC) but concern new and subtle forms of absoluteness / powerset maximality.

Perhaps the most pressing challenge is to justify large cardinal existence as a consequence of well-justified criteria for the selection of preferred universes. This requires a new idea. Some have suggested structural reflection, but I don’t find this convincing due to the arbitrariness in the relationship between V and its reflected versions.

Many thanks, Hugh, for your stimulating comments,
Sy

Re: Paper and slides on indefiniteness of CH

Thank you for the plug, Sol.  Sy says some interesting things in his BSL paper about ‘true in V':  it doesn’t ‘reflect an ontological state of affairs concerning the universe of all sets as a reality to which existence can be ascribed independently of set-theoretic practice’, but rather ‘a façon de parler that only conveys information about set-theorists’ epistemic attitudes, as a description of the status that certain statements have or are expected to have in set-theorist’s eyes’ (p. 80). There is ‘no “external” constraint … to which one must be faithful’, only ‘justifiable procedures’ (p. 80); V is ‘a product of our own, progressively developing along with the advances of set theory’ (p. 93).  This sounds more or less congenial to my Arealist (a non-platonist):   in the course of doing set theory, when we adopt an axiom or prove a theorem from axioms we accept, we say it’s ‘true in V’, and the Arealist will say this along with the realist; the philosophical debate is about what we say when we’re describing set-theoretic activity itself, and here the Arealist denies (and the realist asserts) that it’s out to discover the truth about some objectively existing abstracta.  (By the way, I don’t think ‘truth-value realism’ is the way to go here.  In its usual form, it avoids abstract entities, but there remains an external fact-of-the-matter quite independent of the practice to which we’re supposed to be faithful.)  Unfortunately the rest of my story of the Arealist as it stands won’t be much help because the non-platonistic grounds given there in favor of embracing various set-theoretic methods or principles are fundamentally extrinsic and Sy is out to find a new kind of intrinsic support.

I’m probably insufficiently attentive, or just plain dim, but I confess to being confused about how this new intrinsic evidence is intended to work.   It isn’t a matter of being part of the concept of set, nor is it given by the clear light of mathematical intuition.  It does involve, quoting from Gödel, ‘a more profound understanding of basic concepts underlying logic and mathematics’, and in particular, in Sy’s words, ‘a logical-mathematical analysis of the hyperuniverse’ (p. 79).  Is it just a matter of switching from the concept of set to the concept of the hyperuniverse?  (My guess is no.)  Our examination of the hyperuniverse is supposed to ‘evoke’ (p. 79) certain general principles (the principles are ‘based on’ general features of the hyperuniverse (p. 87)), which will in turn ‘suggest’ (pp. 79, 87) criteria for singling out the preferred universes — and the items ultimately supported by these considerations are the first-order statements true in all preferred universes.

One such general principle is maximality, but I’d like to understand better how it arises intrinsically out of our contemplation of the hyperuniverse (at the top of p. 88).  On p. 93, the principle (or its more specific versions) is said to be ‘the rigorous expression of what it means for an element of the hyperuniverse, i.e., a countable transitive model of ZFC, to display “maximal properties”‘.  Does this mean that maximality for the hyperuniverse derives from a prior principle of maximality inherent in the concept of set?

With all best wishes,
Pen

Re: Paper and slides on indefiniteness of CH

Dear Sol,

On Sun, 3 Aug 2014, Solomon Feferman wrote:

Dear Sy,

Thanks for your helpful comments on my draft, “The Continuum Hypothesis is neither a definite mathematical problem nor a definite logical problem,” and especially for bringing your Hyperuniverse Program (HP) to my attention.  I had seen your 2013 article with Arrigoni on HP back then but had not taken in its point.  I have now read it as well as your Chiemsee slides, and will certainly take it into account in the final version of my paper. I’m glad that we are in considerable agreement about my fundamental argument that one must distinguish mathematical problems in the ordinary sense from logical problems, and that as of now what I claim in the title is true, even taking HP into consideration.  Is my title misleading since it does not say “as of the time of writing”? The reader will see right away in the abstract and the opening section that what I claim does not exclude the possibility that in the future CH will return as a definite mathematical problem [quite unlikely] or that it will somehow become a definite logical problem.

This does appear to constitute a significant retreat in your position. In the quote of yours that I used in my Chiemsee tutorial you refer to CH as being “inherently vague”, in other words dealing with concepts that render it impossible to ever assign it a truth value. If you now concede the possibility that new ideas such as hinted at by Gödel in the quote below (and perhaps provided by the hyperuniverse programme) may indeed lead to a solution, then the “inherent vagueness” argument disappears and our positions are quite close. Indeed we may only differ in the degree of optimism we have about the chances of resolving ZFC-undecidable problems in abstract set theory through philosophically-justifiable logical methods.

“(Gödel) Probably there exist other axioms based on hitherto unknown principles … which a more profound understanding of the concepts underlying logic and mathematics would enable us to recognize as implied by these concepts.”

This is not the place to respond to your many interesting comments on the draft, nor on the substance of the HP and your subsequent exchange with Woodin.  But I would like to make some suggestions regarding your terminology for HP (friendly to my mind).  First of all, it seems to me that “preferred models” is too weak to express what you are after.  How about, “premier models” or some such?  (Tapping into the Thesaurus could lead to the best choice.)

I do see your point here, because I do want to suggest not simply a “preference” for certain universes over others but rather a “compelling” or “justified” preference. I’ll give the terminology more thought, thanks for the comment.

Secondly, I’m not happy about your use of “intrinsic evidence for set-theoretic truth” both because “intrinsic evidence” is commonly used to refer to the constellation of Gödel’s ideas in that respect (not the line you are taking) as opposed to “extrinsic evidence”, and because “set-theoretic truth” suggests a platonistic view (which you explicitly reject).  I don’t have anything to take its place, but it reminds me of the kinds of methodological maxims that Maddy has promoted, so perhaps a better choice of terminology can be found in her writings in place of that.

I do not think that “set-theoretic truth” entails a platonistic viewpoint (indeed there is a concept of “truth-value determinism” that falls short of Platonism). The goal of the programme is indeed to make progress in our understanding of truth in set theory and a key claim is that there is intrinsic evidence regarding the nature of the set-theoretic universe that transcends the older form of such evidence emanating from the maximal iterative conception. I think that the dichotomy intrinsic (a priori) versus extrinsic (a posteriori) which Peter Koellner has emphasized is a valuable way to clarify the debate. Nevertheless I do appreciate that some have suggested that the distinction is not as sharp as I may have assumed and I would like to hear more about that.

Another very interesting question concerns the relationship between truth and practice. It is perfectly possible to develop the mathematics of set theory without consideration of set-theoretic truth. Indeed Saharon has suggested that ZFC exhausts what we can say regarding truth but of course that does not force him to work just in ZFC. Conversely, the HP makes it clear that one can investigate truth in set theory quite independently from set-theoretic practice; indeed the IMH arose from such an investigation and some would argue that it conflicts with set-theoretic practice (as it denies the existence of inaccessibles). So what is the relationship between truth and practice? If there are compelling arguments that the continuum is large and measurable cardinals exist only in inner models but not in V will this or should this have an effect on the development of set theory? Conversely, should the very same compelling arguments be rejected because their consequences appear to be in conflict with current set-theoretic practice?

Best wishes and many thanks,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Sy,

Thanks for your helpful comments on my draft, “The Continuum Hypothesis is neither a definite mathematical problem nor a definite logical problem,” and especially for bringing your Hyperuniverse Program (HP) to my attention.  I had seen your 2013 article with Arrigoni on HP back then but had not taken in its point.  I have now read it as well as your Chiemsee slides, and will certainly take it into account in the final version of my paper.

I’m glad that we are in considerable agreement about my fundamental argument that one must distinguish mathematical problems in the ordinary sense from logical problems, and that as of now what I claim in the title is true, even taking HP into consideration.  Is my title misleading since it does not say “as of the time of writing”? The reader will see right away in the abstract and the opening section that what I claim does not exclude the possibility that in the future CH will return as a definite mathematical problem [quite unlikely] or that it will somehow become a definite logical problem.

This is not the place to respond to your many interesting comments on the draft, nor on the substance of the HP and your subsequent exchange with Woodin.  But I would like to make some suggestions regarding your terminology for HP (friendly to my mind).  First of all, it seems to me that “preferred models” is too weak to express what you are after.  How about, “premier models” or some such?  (Tapping into the Thesaurus could lead to the best choice.) Secondly, I’m not happy about your use of “intrinsic evidence for set-theoretic truth” both because “intrinsic evidence” is commonly used to refer to the constellation of Gödel’s ideas in that respect (not the line you are taking) as opposed to “extrinsic evidence”, and because “set-theoretic truth” suggests a platonistic view (which you explicitly reject).  I don’t have anything to take its place, but it reminds me of the kinds of methodological maxims that Maddy has promoted, so perhaps a better choice of terminology can be found in her writings in place of that.

Best,
Sol

Re: Paper and slides on indefiniteness of CH

Dear Sy,

What form of ordinal maximality are you using? In my paper with Arrigoni I had a weaker form, with Honzik a stronger one. In the latter version, a countable ordinal maximal universe remains ordinal maximal in any outer model of V.

The notion of ordinal maximality to which I was referring was that in the bulletin paper and that which is used to formulate IMH* there.

Indeed, I do strongly endorse the consistency of AD and much more. I do not subscribe to the view that we need the inner model programme to justify the consistency of large cardinals. I think that there is enough evidence from the role of large cardinals in establishing consistency results to justify their own consistency, and indeed I would go further and assert their existence in inner models.

I really do not understand the basis for your conviction in the consistency of PD (or AD or ZFC + \omega many Woodin cardinals).

Consider the Extended Riemann Hypothesis (ERH). ERH passes all the usual tests cited for PD (or AD or ZFC + \omega many Woodin cardinals) as the basis of its consistency. Tremendous structure theory, implications of theorems which are later proved by other means etc.

Yet there does not seem to be any conviction in the Number Theory community that even the Riemann Hypothesis is true (and of course RH follows from the consistency of ERH).  Look at the statement on the rules of the Millennium Prizes. A counterexample to RH is not unconditionally accepted as a solution. If there was any consensus that RH is true this escape clause would not be in the stated rules.

Further the structure theory you cite as evidence for Con PD is in the context of PD etc. If one rejects that context then how can one maintain the conviction that Con PD is true?

Rephrased: The fact that the models of T (if any exist) have a rich internal theory is not evidence that there are any models of T. Something else is needed.

I think this is another example of the fundamental difference in our points of view. Yes, “iterable and correct” inner models are important for the relationship between large cardinals and descriptive set theory. But the fundamental concept of inner model is simply a transitive class containing all the ordinals and modeling ZFC, there is no technical requirement of ‘iterability’ involved. Thus again we have the difference between the interpretation of a basic notion in (a particular area of) set-theoretic practice and its natural interpretation in discussions of set-theoretic truth. And there is no hope of producing useful inner models which are correct for 2nd order arithmetic without special assumptions on V, such as the existence of large cardinals. And even if one puts large cardinal axioms into the base theory one still has no guarantee of even \Sigma^1_3 correctness for outer models which are not set-generic. So to say that large cardinals “freeze projective truth” is not accurate, unless one adopts a set-generic interpretation of “freezing.”

I completely agree this is the basic issue over which we disagree.

The position that all extensions, class or set generic, are on an equal footing is at the outset already a bias against large cardinals. The canonical objects identified by large cardinals, such as the generalizations of 0^\#, can disappear (i.e. cease to be recognized as such) if one passes to a class forcing extension.

Rephrased: The claim that an inner model is just a proper class is a bias against large cardinals. Once one passes the level of one Woodin cardinal the existence of proper class inner models becomes analogous to the existence of transitive set models in the context of just ZFC. It has no real structural implications for V particularly in the context of for example IMH (which are not already implied by the existence of an inner model of just 1 Woodin cardinal). This fact is not irrelevant to HP since it lies at the core of the consistency proof of IMH.

Let me explain further and also clarify the relationship between \Omega-logic and set forcing. For this discussion and to simplify things grant that the \Omega Conjecture is provable and that the base theory is now ZFC + a proper class of Woodin cardinals.

To a set theorist, a natural variation of ordinal maximality, let’s call this strong rank maximality, is that there are rank preserving extensions of M in which large cardinals exist above the ordinals of M (and here one wants to include all “possible large cardinals” whatever that means).

Question 1:  How can we even be sure that there is no pairwise incompatibility here which argues against the very concept of the \Pi_2 consequences of strong rank maximality?

Question 2:  If one can make sense of the \Pi_2 consequences of strong rank maximality and given that M is strongly rank maximal, can the \Pi_2 consequences of this for M be defined in M?

Here is the first point. If there is a proper class of X-cardinals  (and accepting also that an X-cardinal is preserved under forcing by partial orders of size less than the X-cardinal), then in every set-generic extension there is a proper class of X-cardinals and so in every set-generic extension, the sentence  \phi holds where

\phi = “Every set A belongs to a set model with an X-cardinal above A.”

\phi is a \Pi_2-sentence and therefore by the \Omega Conjecture this \Pi_2-sentence is \Omega provable. Further these are arguably exactly the \Pi_2 sentences which generate the \Pi_2 consequences of strong rank maximality.

Here is the second point. If M_1 is a rank initial segment of M_2 then every sentence which is \Omega-provable in M_2 is \Omega-provable in M_1. \Omega proofs have a notion of (ordinal) length and in the ordering of the \Omega-provable sentences by proofs of shortest  length, the sentences which are \Omega-provable in M_2 are an initial segment of the sentences which are \Omega-provable in M_1 (and they could be the same of course).

Putting everything together, the \Pi_2-consequences of the strong rank maximality of a given model M makes perfect sense (no pairwise incompatibility issues) and this set of \Pi_2-sentences is actually definable in M.

This connection with \Omega-logic naturally allows one to adapt strong rank maximality into the HP framework, one restricts to extensions in which the \Omega-proofs of the initial model are not de-certified in the extension (for example if a \Pi_2 sentence is \Omega-provable in the initial model M, it is \Omega-provable in the extension).

This includes set-forcing extensions but also many other extensions. So in this sense \Omega-logic is not just about set-forcing. \Omega-logic is about trying to clarify (or even make sense of) the \Pi_2 consequences of large cardinals (and how is this possibly not relevant to a discussion of truth in set theory?).

My concern with HP is this. I do not see a scenario in which HP even with strong rank maximality can lead anywhere on the fundamental questions involving the large cardinal hierarchy.  The reason is that strong rank maximality considerations will force one to restrict to the case that PD holds in V at which point strong rank maximality notions require consulting at the very least the \Omega-logic of V and this is not definable within the hyper-universe of V.

Granting this, genuine progress on CH is even less plausible since how could that solution ever be certified in the context of strong rank maximality? A solution to CH which is not compatible with strong rank maximality is not a solution to CH since it is refuted by large cardinals.

You will disagree and perhaps that is the conclusion of this discussion, we simply disagree.

But here is a challenge for HP and this does not presuppose any conception or application of a notion of strong rank maximality.

Identify a new family of axioms of strong infinity beyond those which have been identified to date (a next generation of large cardinal axioms) or failing this, generate some new insight into the hierarchy of large cardinal axioms we already have. For example, HP does not discriminate against the consistency of a Reinhardt cardinal. Can HP make a prediction here? If so what is that prediction?

Regards,
Hugh

Re: Paper and slides on indefiniteness of CH

Dear Hugh,

Many thanks for your comments. It is good that we are finally having this debate (thank you, Sol). Below are some responses to your message.

Dear Sy,

You have asked for a response so I suppose I should respond. I will be brief and take just a few of your points in order.

Indeed regarding \Omega-logic I will go further and claim that it has little relevance for discussions of set-theoretic truth anyway.

If there is a proper class of Woodin cardinals and the \Omega Conjecture holds then the set of \Sigma_2 sentences which can be forced to hold by set forcing is \Delta_2-definable. If the Strong \Omega Conjecture holds then this set is definable in H(\mathfrak c^+).

On the other hand if V = L then this set is \Sigma_2-complete.

How is this not relevant to a discussion of truth?

These are impressive results, but they are about the effect of set forcing on truth, and in my view they avoid the key questions about truth. Considering only the effect of set forcing on truth is misleading: Shoenfield’s absoluteness theorem is an important statement about truth, asserting that \Sigma^1_2 truth is not affected by enlargements of the universe of sets. The analogous absoluteness for \Sigma^1_3 truth is inconsistent, but to see this one has to consider enlargements not obtainable by set forcing (indeed \Sigma^1_3 absoluteness is consistent if one restricts to set-generic enlargements).
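(To recall the form of Shoenfield absoluteness I have in mind — stated as a sketch rather than as a quotation of any particular source: if M \subseteq N are transitive models of ZFC with the same ordinals, a is a real in M and \varphi is a \Sigma^1_2 formula, then

M \models \varphi(a) \iff N \models \varphi(a).

Nothing here restricts N to set-generic extensions of M; arbitrary ordinal-preserving enlargements are covered.)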

As I said, if one admits set-forcing as a legitimate concept in discussions of set-theoretic truth then the continuum problem has an easy solution: Extend Levy absoluteness by asserting \Sigma_1 absoluteness with absolute parameters for ccc set-forcing extensions. This is consistent and implies that the continuum is very large. I do not consider this to be a legitimate solution to the continuum problem and the reason is the artificial restriction to ccc set-forcing extensions.

An unfortunate misuse of terminology in discussions of absoluteness in the literature is that “absoluteness” and “forcing” are taken to mean “set-generic absoluteness” and “set-forcing”, giving a misleading interpretation of the absoluteness concept. Indeed one of the difficult issues is the relationship between absoluteness and large cardinals: Clearly if absoluteness means set-generic absoluteness then the existence of a proper class of large cardinals is absolute, but this evades the question of how large cardinal existence is affected by enlargements that are not set-generic, such as in Shoenfield’s Theorem.

Also note that by a theorem of Bukovský, set-genericity is equivalent to a covering property: N is a set-generic extension of M iff for some M-cardinal \kappa, every single-valued function in N between sets in M can be covered by a \kappa-valued function in M. Put this way it is clear that set-genericity is an unwarranted restriction on the notion of extension of a model of set theory, as there is nothing “intrinsic” about such a covering property.
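(For the record, here is the covering property as I recall it from Bukovský’s theorem, again only as a sketch: for transitive models M \subseteq N of ZFC with the same ordinals,

N \text{ is a set-generic extension of } M \iff \text{there is an } M\text{-cardinal } \kappa \text{ such that for every } f \in N \text{ with } f : a \to M,\ a \in M, \text{ there is } g \in M \text{ with } f(x) \in g(x) \text{ and } |g(x)|^M < \kappa \text{ for all } x \in a.

The right-hand side makes no mention of forcing at all; it is a pure covering condition, which is exactly why I say there is nothing “intrinsic” about set-genericity.)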

Perhaps our disagreement is deeper than the above and resides in the following misunderstanding: I am looking for intrinsic sources for the truth of new axioms of set theory. The results you mention, as well as many other impressive results of yours, are highly relevant for the practice of set theory, as they tell us what we can and cannot expect from the typical methods that we use. For example, most independence results in set theory are based entirely on set-forcing, and do not need class forcing, hyperclass forcing or other methods for creating new universes of sets. But this is very different from finding justifications for the truth of new axiom candidates. Such justifications must have a deeper source and cannot be phrased in terms of specific set-theoretic methods. (I understand that Cohen agreed with this point.)

Large cardinals and determinacy rank among the most exciting and profound themes of 20th century set theory. Their interconnection is striking and in my view lends very strong evidence to their consistency. But 21st century set theory may have a different focus and my interest is to understand what can be said about set-theoretic truth that does not hinge on the most exciting mathematical developments at a particular time in the history of the subject. I am not a naturalist in the sense of Penelope Maddy. And unlike Sol, I remain optimistic about finding new intrinsic sources of set-theoretic truth that may resolve CH and other prominent questions.

You seem to be intentionally provocative here.

If the existence of a measurable limit of Woodin cardinals is inconsistent then these conjectures along with the Strong \Omega Conjecture are all true.

Are you saying that the potential inconsistency of a measurable limit of Woodin cardinals is not ‘worthy of much discussion’? This is a surprisingly strong endorsement of the consistency of AD (and much more).

Indeed, I do strongly endorse the consistency of AD and much more. I do not subscribe to the view that we need the inner model programme to justify the consistency of large cardinals. I think that there is enough evidence from the role of large cardinals in establishing consistency results to justify their own consistency, and indeed I would go further and assert their existence in inner models.

I see absolutely no basis for the claim that HP suggests the existence of inner models for (large) large cardinals holds (within the preferred universes). At best, as you implement it, HP just seems to be able to suggest that if inner models of large large cardinals can exist then these inner models do exist. There is no insight here as to whether the inner models actually exist. The reason of course is that there is no difference in your treatment between inner models of large large cardinals and inner models which satisfy some arbitrary sentence \phi.

First a point of clarification: The HP entails the examination of a variety of different mathematical criteria for the choice of preferred universes. These different criteria need not agree on their first-order consequences, although each is motivated by an intrinsic feature of the universe of sets (usually maximality). The long-term question is whether these criteria will “synthesise” to a stable universal criterion with interesting first-order consequences that are not in conflict with set-theoretic practice. The answer to this question is not yet known (the programme is new).

Now in answer to what you say above: The first mathematical criterion in the HP was the IMH. It implies that there are measurable cardinals of arbitrary Mitchell order in inner models. (It also implies that in V there are no measurable cardinals.) The reason for measurables in inner models has nothing to do with their existence in some model; it is a consequence of core model theory (with covering over core models and the fact that \square holds in core models). So I don’t see your point here.

The inner models relevant to current practice in Set Theory are correct inner models and their existence (at the level of infinitely many Woodin cardinals)  implies that PD holds in V. Rephrased, the core model technology for building inner models can really only build correct (iterable) inner models once one passes even the level of 1 Woodin cardinal. This is why in the restricted setting of for example V = L[x] for some real x, core model methods cannot go past 1 Woodin cardinal.

I think this is another example of the fundamental difference in our points of view. Yes, “iterable and correct” inner models are important for the relationship between large cardinals and descriptive set theory. But the fundamental concept of inner model is simply a transitive class containing all the ordinals and modeling ZFC, there is no technical requirement of “iterability” involved. Thus again we have the difference between the interpretation of a basic notion in (a particular area of) set-theoretic practice and its natural interpretation in discussions of set-theoretic truth. And there is no hope of producing useful inner models which are correct for 2nd order arithmetic without special assumptions on V, such as the existence of large cardinals. And even if one puts large cardinal axioms into the base theory one still has no guarantee of even \Sigma^1_3 correctness for outer models which are not set-generic. So to say that large cardinals “freeze projective truth” is not accurate, unless one adopts a set-generic interpretation of “freezing”.

Example: The proof of the theorem of Steel that PFA implies the existence of an inner model with infinitely many Woodin cardinals shows that PFA implies PD (in fact that AD holds in L(\mathbb R)). There is no other proof known. This phenomenon is ubiquitous in Set Theory. Combinatorial statements are shown to imply say PD as a by-product of establishing lower bounds for their consistency strength.

These are wonderful applications of core model theory. They add to the evidence for the consistency of PD. But I don’t see what implications this has for the truth of PD. After all, the theory “ZFC + V = L[x] for some real x + There is a supercompact cardinal in some inner model” is a strong theory which implies the consistency of PD but does not even imply \Pi^1_1 determinacy.

There is a serious issue in HP with regard to the structure of the hyper-universe (which you define as the set of all countable models of ZFC).  The formulation of ordinal maximality requires a fairly rich structure since the property that a countable transitive model M satisfy ordinal maximality is not absolute to transitive models of ZFC in which M is countable.

What form of ordinal maximality are you using? In my paper with Arrigoni I had a weaker form, with Honzik a stronger one. In the latter version, a countable ordinal maximal universe remains ordinal maximal in any outer model of V.

But more fundamentally: Why need the mathematical criteria for preferred universes be absolute in any sense? One of the important features of the programme is the dynamic interplay between the nature of the Hyperuniverse and V. The Hyperuniverse must be defined in a background V, and conversely first-order properties of preferred universes within the Hyperuniverse are candidates for truth assertions about V. So it is to be expected that changing V will lead to changes in the preferred members of the Hyperuniverse.

Consider the following principle where H denotes the hyper-universe and (H)^M denotes the hyper-universe as defined in M.

(Hyper-reflection)  There exist universes M within H such that (H)^M is an elementary substructure of H.

Does one reject hyper-reflection?  Why?

If one allows hyper-reflection then it seems quite reasonable to take the position that the preferred universes satisfy hyper-reflection. But no witness of hyper-reflection can satisfy IMH or any of the stronger versions such as IMH*.

There is no question of “rejecting” any criterion for preferred universes. Instead, we are exploring the consequences of the different criteria and the extent to which “syntheses” of different criteria with each other are possible.

The criterion you propose is similar to ones I have discussed with Hannes Leitgeb and Eduardo Rivello. Hannes asked me about replacing the Hyperuniverse with models of 2nd order ZFC. But of course then the only “pictures of V” are the V_\alpha, \alpha inaccessible, and if we apply the HP to such universes we will arrive only at reflection principles and nothing more. Eduardo asked about using countable transitive models which are elementarily equivalent to V (one could strengthen this further by demanding an elementary embedding into V). The problem now is that this choice of universes “begs the question”: We want to use the hyperuniverse to understand what first-order properties V should have (based on philosophically justified criteria for the choice of preferred universes), but with Eduardo’s choice one has “built in” all first-order properties of V and therefore can learn nothing new. It is analogous to Zermelo’s quasi-categoricity for 2nd order set theory: Yes, you have categoricity modulo the ordinals and therefore arrive at a complete theory, but you have no idea what this theory says.

I am rather convinced that the generous use of all countable transitive models of ZFC is the right notion of Hyperuniverse and valuable criteria for preferred universes cannot “build in” the first-order theory of V.

One could take the position that H should satisfy PD (by asserting that for each n, H verifies the existence of M_n^\# where M_n is the inner model of n Woodin cardinals) in which case taking the witnesses of hyper-reflection as preferred universes one concludes PD is true in the preferred universes.

??? This is like saying we assume PD in V and take our preferred universes to reflect all first-order truths of V, and therefore PD. But this misses the point: The first-order properties of H, like those of V, are to result from criteria for preferred universes that are based on intrinsic features, such as maximality. How do you arrive at PD that way? I can well imagine that you could arrive at preferred universes that satisfy the existence of inner models with many large cardinals, but this is very far from PD.

In summary the entire approach of HP seems to start from a basic premise (a failure of richness of hyper-universe) that is biased against ever concluding PD is true in the preferred universes. If the hyper-universe is taken to be limited then it is not surprising that one comes to a definition of preferred universes which is similarly limited since one is using properties of the universes within the hyper-universe in defining those preferred universes.

The Hyperuniverse contains ALL countable transitive models of ZFC, so you cannot say that it isn’t rich; it is as rich as possible. I think what you want to do is impose assumptions on V that imply that the Hyperuniverse will contain certain universes that you would like to see there. But this misses the point of the programme: Any such assumption must arise as a consequence of criteria for preferred universes that are intrinsically based. If there is a way of getting PD as a first-order consequence of such criteria I would be happy to see it. But so far things seem to be going in another direction: Only lightface PD and the existence of large cardinals only in inner models, not in V.

More generally, the failure of PD is a higher notion of the inconsistency of PD. Rejecting PD has enormous structural consequences for V just as rejecting 0^\# does. It seems to me that your entire implementation of HP is just another version of this.

But this is not an argument against PD.

You have misinterpreted the programme. It does not necessarily lead to the definitive conclusion that PD is false! The first criterion, IMH, gave this consequence, but other criteria, such as IMH^\# (see my paper with Honzik), do not. Briefly put: There are intrinsically-based criteria which imply that PD is false and others which do not decide PD. So far, no such criterion implies that PD is true.

In fact a key feature of the programme is that it avoids any bias with regard to specific set-theoretic statements like large cardinal existence or PD. The programme proceeds using intrinsically-based criteria and explores their consequences. Of course a strict rule is to only employ criteria which are based on an intrinsic feature of the universe of sets; reference to “forcing” or “large cardinals” or “determinacy” is inappropriate in the formulation of such criteria. My aim is for the programme to be “open-minded” and not slip in, intentionally or otherwise, technical baggage from the current practice of set theory. But currently I do think that whatever consequences the programme yields should be tested for “compatibility” with set-theoretic practice.

Best,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Sy,

You have asked for a response so I suppose I should respond. I will be brief and take just a few of your points in order.

Indeed regarding \Omega-logic I will go further and claim that it has little relevance for discussions of set-theoretic truth anyway.

If there is a proper class of Woodin cardinals and the \Omega Conjecture holds then the set of \Sigma_2 sentences which can be forced to hold by set forcing is \Delta_2-definable. If the Strong \Omega Conjecture holds then this set is definable in H(\mathfrak c^+).

On the other hand if V = L then this set is \Sigma_2-complete.
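To fix notation, write \mathcal F for the set of \Sigma_2 sentences which can be forced to hold by set forcing. Then the claims read:

With a proper class of Woodin cardinals and the \Omega Conjecture: \mathcal F is \Delta_2-definable.
With a proper class of Woodin cardinals and the Strong \Omega Conjecture: \mathcal F is definable in H(\mathfrak c^+).
Under V = L: \mathcal F is \Sigma_2-complete.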

How is this not relevant to a discussion of truth?

In your Section 6 you discuss two programmes, \Omega-logic and the Inner Model Programme. In my view, the latter is not worthy of much discussion, as it is still just a set of unverified conjectures, despite having been launched by Dodd and Jensen about 40(?) years ago.

You seem to be intentionally provocative here.

If the existence of a measurable limit of Woodin cardinals is inconsistent then these conjectures along with the Strong \Omega Conjecture are all true.

Are you saying that the potential inconsistency of a measurable limit of Woodin cardinals is not “worthy of much discussion”? This is a surprisingly strong endorsement of the consistency of AD (and much more).

Let me now briefly explain what the HP is about…. The idea behind the programme is to make no biased assumptions based on mathematical concepts like genericity, but rather to select preferred pictures of V based on intrinsic philosophical principles such as maximality (another is ‘omniscience’). The challenge in the programme is to arrive at a stable set of criteria for preferred universes based on such principles. This will take time (the programme is still quite new). Also the mathematics is quite hard (for example sophisticated variants of Jensen coding are required). The current status is as follows: The programme suggests that small large cardinals exist, that large large cardinals exist in inner models, and that CH is very false (the continuum is very large). But there are many loose ends at the moment, both philosophical and mathematical. It is too early to predict what the long-term conclusions will be. But it is clear to me that a solution to the continuum problem is quite possible via this programme; indeed there is a proposed criterion, the Strong Inner Model Hypothesis (SIMH), which would lead to this outcome. A serious mathematical obstacle is the difficulty in showing that the SIMH is consistent.

I see absolutely no basis for the claim that HP suggests that the existence of inner models for (large) large cardinals holds (within the preferred universes). At best, as you implement it, HP just seems able to suggest that if inner models of large large cardinals can exist then these inner models do exist. There is no insight here as to whether the inner models actually exist. The reason of course is that there is no difference in your treatment between inner models of large large cardinals and inner models which satisfy some arbitrary sentence \phi.

The inner models relevant to current practice in Set Theory are correct inner models, and their existence (at the level of infinitely many Woodin cardinals) implies that PD holds in V. Rephrased, the core model technology for building inner models can really only build correct (iterable) inner models once one passes even the level of 1 Woodin cardinal. This is why, in the restricted setting of, for example, V = L[x] for some real x, core model methods cannot go past 1 Woodin cardinal.

Example: The proof of Steel’s theorem that PFA implies the existence of an inner model with infinitely many Woodin cardinals shows that PFA implies PD (in fact that AD holds in L(\mathbb R)). There is no other proof known. This phenomenon is ubiquitous in Set Theory: combinatorial statements are shown to imply, say, PD as a by-product of establishing lower bounds for their consistency strength.

There is a serious issue in HP with regard to the structure of the hyper-universe (which you define as the set of all countable transitive models of ZFC). The formulation of ordinal maximality requires a fairly rich structure, since the property that a countable transitive model M satisfies ordinal maximality is not absolute to transitive models of ZFC in which M is countable.

Consider the following principle where H denotes the hyper-universe and (H)^M denotes the hyper-universe as defined in M.

(Hyper-reflection)  There exist universes M within H such that (H)^M is an elementary substructure of H.

Does one reject hyper-reflection?  Why?

If one allows hyper-reflection then it seems quite reasonable to take the position that the preferred universes satisfy hyper-reflection. But no witness of hyper-reflection can satisfy IMH or any of the stronger versions such as IMH*.

One could take the position that H should satisfy PD (by asserting that for each n, H verifies the existence of M_n^\# where M_n is the inner model of n Woodin cardinals) in which case taking the witnesses of hyper-reflection as preferred universes one concludes PD is true in the preferred universes.

In summary the entire approach of HP seems to start from a basic premise (a failure of richness of hyper-universe) that is biased against ever concluding PD is true in the preferred universes. If the hyper-universe is taken to be limited then it is not surprising that one comes to a definition of preferred universes which is similarly limited since one is using properties of the universes within the hyper-universe in defining those preferred universes.

More generally, the failure of PD is a higher notion of the inconsistency of PD. Rejecting PD has enormous structural consequences for V just as rejecting 0^\# does.  It seems to me that your entire implementation of HP is just another version of this.

But this is not an argument against PD.

Regards,
Hugh