Tag Archives: Omega Logic

Re: Paper and slides on indefiniteness of CH

Dear Sy,

I guess I should respond to your question as well.

On Oct 31, 2014, at 3:30 AM, Sy David Friedman wrote:

My point is that Hugh considers large cardinal existence to be part of set-theoretic truth. Why?

Let me clarify my position, or at least that part of it which concerns my (frankly extreme) skepticism about your anti-large cardinal principles.

(I am assuming that LC axioms persist under small forcing, and that is all I assume in the discussion below.)

Suppose there is a proper class of Woodin cardinals. Suppose M is a countable transitive model (ctm) and M has an iteration strategy \mathcal I at its least Woodin cardinal such that \mathcal I is in L(A,\mathbb R) for some universally Baire set A.

Suppose some LC axiom holds in M above the least Woodin cardinal of M. Then in V, every V_{\alpha} has a vertical extension in which the LC axiom holds above \alpha.

The existence of such an M for the LC axiom is a natural form of consistency of the LC axiom (closely related to the consistency in \Omega-logic).

Thus for any LC axiom (the existence of an extendible cardinal, say), it is compelling (modulo consistency) that every V_{\alpha} has a vertical extension in which the LC axiom holds above \alpha.
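Schematically, and leaving “LC axiom” informal as above (taking a vertical extension of V_\alpha to be a transitive model N of ZFC with (V_\alpha)^N = V_\alpha, which is one natural reading), the claim is:

\forall \alpha \, \exists N \, (N \text{ is transitive}, \ N \models \text{ZFC}, \ (V_\alpha)^N = V_\alpha, \text{ and } N \models \text{“the LC axiom holds above } \alpha \text{”})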

But then any claim that the LC axiom does not hold in V is, in general, an extraordinary claim in need of extraordinary evidence.

The maximality principles you have proposed do not (for me anyway) meet this standard.

Just to be clear: I am not saying that any LC axiom which is consistent in the sense described above must be true. I do not believe this (there are ad hoc LC axioms for which it is false).

I am just saying that the declaration that the LC axiom does not hold in V in general requires extraordinary evidence, particularly for an LC axiom such as “there is an extendible cardinal”.

Regards,
Hugh

Re: Paper and slides on indefiniteness of CH

Dear Hugh,

On Sun, 3 Aug 2014, W Hugh Woodin wrote:

I really do not understand the basis for your conviction in the consistency of PD (or AD or ZFC + \omega many Woodin cardinals).

This results from the standard empirical fact that the consistency of a huge variety of statements in set theory is shown assuming the consistency of large cardinal axioms, and there are even “lower bound results” calibrating the strength of the large cardinal axioms required, even at the level of a supercompact. For example, \textsf{PFA}(\mathfrak{c}^+\text{-linked}) is consistent relative to a degree of supercompactness, yet there are models with slightly less large cardinal strength (subcompact cardinals) with the property that \textsf{PFA}(\mathfrak{c}^+\text{-linked}) fails in all of their proper (not necessarily generic) extensions. This is strong evidence that the consistency strength of \textsf{PFA}(\mathfrak{c}^+\text{-linked}) is at the level of a degree of supercompactness. Thus large cardinals provide a “hierarchy” of consistency strengths whose strictness is witnessed by numerous statements of set-theoretic interest. I see this as sufficient justification for the consistency of large cardinals; we don’t need inner model theory or some structure theory which follows from their existence to know that they are consistent.
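Schematically, the calibration just described is a sandwich of consistency strengths:

\text{Con}(\text{ZFC} + \text{subcompact}) \ \leq \ \text{Con}(\text{ZFC} + \textsf{PFA}(\mathfrak{c}^+\text{-linked})) \ \leq \ \text{Con}(\text{ZFC} + \text{a degree of supercompactness})

where the subcompact models just mentioned are the evidence that strictly more than a subcompact is needed, i.e. that the left-hand bound is close to optimal.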

Consider the Extended Riemann Hypothesis (ERH). ERH passes all the usual tests cited for PD (or AD or ZFC + \omega many Woodin cardinals) as the basis of its consistency: tremendous structure theory, implications of theorems which are later proved by other means, etc.

Those tests are used (not by me but by a number of set-theorists) in favour of the truth of PD, not just of its consistency! The consistency of PD only needs the consistency of large cardinals, as justified above, and none of these tests.

Yet there does not seem to be any conviction in the Number Theory community that even the Riemann Hypothesis is true (and of course RH follows from the consistency of ERH).

You have made an important point for me: a rich structure theory together with Gödelian “success” is insufficient to convince number-theorists that ERH is true, and by analogy these criteria should not suffice to convince set-theorists that PD is true.

Your analogy is problematic as there is no clear distinction between consistency and truth with regard to the ERH; this is because there is only one natural model of first-order arithmetic. But there is a clear distinction between consistency and truth for PD as there are many natural models of second-order arithmetic.

Look at the statement on the rules of the Millennium Prizes. A counterexample to RH is not unconditionally accepted as a solution. If there were any consensus that RH is true, this escape clause would not be in the stated rules. Further, the structure theory you cite as evidence for Con PD is in the context of PD etc. If one rejects that context then how can one maintain the conviction that Con PD is true?

The only “context” needed for Con PD is the empirical calibration provided by a strict “hierarchy” of consistency strengths. That makes no assumptions about PD.

Rephrased: The fact that the models of T (if any exist) have a rich internal theory is not evidence that there are any models of T. Something else is needed.

The consistency of large cardinals does not require any “rich internal theory”. In contrast, the truth of large large cardinal axioms is much harder to justify; indeed, I have yet to see a convincing argument for it.

I completely agree this is the basic issue over which we disagree. The position that all extensions, class or set generic, are on an equal footing is at the outset already a bias against large cardinals.

You have it backwards. The point of beginning with arbitrary extensions is precisely to avoid any kind of bias. If one deliberately chooses a more restricted notion of extension to anticipate a later desire to have large cardinals, then this is a very strong, and in my view unacceptable, bias towards the existence of large cardinals.

The HP is open to the conclusion that large large cardinals exist, but to achieve this one needs a philosophically well-justified and unbiased criterion for the choice of preferred universes. There may be such a criterion but so far the only ones I have come up with either contradict or tolerate large large cardinal existence; none of them succeed in proving large large cardinal existence.

The canonical objects identified by large cardinals, such as the generalizations of 0^\#, can disappear (i.e. cease to be recognized as such) if one passes to a class forcing extension.

Yes, but once again if you set things up to protect desired generalisations of 0^\# then you are showing a bias towards those generalisations. This is in my view unacceptable without justifying that way of setting things up with a philosophically well-justified and unbiased criterion for the choice of preferred universes.

In summary: The basic problem with what you are saying is that you are letting set-theoretic practice dictate the investigation of set-theoretic truth!

It would be ideal if the axioms and objects that you want to see in set theory would arise as a consequence of a philosophically well-justified approach to truth, but so far I don’t see how to do that (things are pointing in a different direction). I present this to you as an interesting challenge. It should be clear by now that the derivation of large large cardinal axioms from maximality criteria is very problematic (but perhaps not impossible).

Rephrased: The claim that an inner model is just a proper class is a bias against large cardinals. Once one passes the level of one Woodin cardinal, the existence of proper class inner models becomes analogous to the existence of transitive set models in the context of just ZFC. It has no real structural implications for V, particularly in the context of, for example, the IMH (none that are not already implied by the existence of an inner model of just one Woodin cardinal). This fact is not irrelevant to HP since it lies at the core of the consistency proof of the IMH.

See the above. The HP is not biased against large cardinals. Rather, imposing large cardinals at the start of any investigation of truth is where the bias lies.

Let me explain further and also clarify the relationship between \Omega-logic and set forcing. For this discussion and to simplify things grant that the \Omega Conjecture is provable and that the base theory is now ZFC + a proper class of Woodin cardinals.

OK, for the sake of discussion let us now relativise everything to large cardinal axioms, i.e., the base theory is now ZFC + arbitrary large cardinals. This is not justified, but surely worth investigating.

To a set theorist, a natural variation of ordinal maximality, let’s call this strong rank maximality, is that there are rank preserving extensions of M in which large cardinals exist above the ordinals of M (and here one wants to include all ‘possible large cardinals’ whatever that means).

Please see my paper with Honzik about strong reflection. We capture ordinal maximality with the property of “\#-generation”. Roughly speaking, V is \#-generated if it arises from a “sharp” in the same way that L arises from 0^\#. This is in my view the strongest possible form of ordinal maximality. (Aside: It also allows one to argue that reflection is perfectly compatible with a potentialist view of V; no actualism is required.)
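Roughly, and suppressing the precise iterability requirements (see the paper with Honzik for the exact definition): V is \#-generated if there is an iterable “sharp” N whose iterates N_\alpha, with critical points \kappa_\alpha, generate V as the union of their lower parts:

V = \bigcup_{\alpha \in \text{Ord}} (V_{\kappa_\alpha})^{N_\alpha}

exactly parallel to the way L is the union of the L_{\kappa_\alpha} for the Silver indiscernibles \kappa_\alpha arising from 0^\#.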

Then the context you want is \#-generation plus large cardinals. These are your preferred universes. (Note that large cardinals in V automatically generate large cardinals past the ordinals of V via \#-generation.) Below I will use “strong maximality” to mean \#-generated plus large cardinals.

Question 1:  How can we even be sure that there is no pairwise incompatibility here which argues against the very concept of the \Pi_2 consequences of strong rank maximality?

Let T be the set of first-order sentences which hold in all strongly maximal universes. These are precisely the first-order consequences of “strong maximality”. Then any two sentences in T are compatible with each other.
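In display form:

T = \bigcap \{ \text{Th}(M) : M \text{ is strongly maximal} \}

and since any two members of T hold simultaneously in any single strongly maximal universe, no pairwise incompatibility can arise.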

Maybe you meant to ask if any two strongly maximal universes satisfy the same \Pi_2 sentences? The answer is of course “no” if strongly maximal universes can differ about what large cardinals exist. This will surely be the case unless one requires that each strongly maximal universe satisfies “all large cardinal axioms”. How do you formulate that exactly?

Question 2:  If one can make sense of the \Pi_2 consequences of strong rank maximality and given that M is strongly rank maximal, can the \Pi_2 consequences of this for M be defined in M?

M. Stanley showed that there are countable transitive models M of ZFC with the property that the set of first-order sentences with parameters from M which hold in some outer model of M is M-definable. (Note that this is immediate for any M if one considers only set-generic outer models of M!) This can also be done for the theory ZFC + large cardinals (fixing a notion of large cardinal axiom). I expect that this can also be done for strong maximality (and indeed holds for all strongly maximal universes), and therefore the answer to your question is “yes”.
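In display form, the assertion for a given M is that

\{ (\varphi, a) : a \in M \text{ and } \varphi(a) \text{ holds in some outer model of } M \} \ \text{ is definable over } M.

(For set-generic outer models the corresponding set is definable via the forcing relation, which is why the parenthetical remark is immediate.)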

Here is the first point. If there is a proper class of X-cardinals (and accepting also that an X-cardinal is preserved under forcing by partial orders of size less than the X-cardinal), then in every set-generic extension there is a proper class of X-cardinals and so in every set-generic extension, the sentence \phi holds where \phi = “Every set A belongs to a set model with an X-cardinal above A.” \phi is a \Pi_2-sentence and therefore by the \Omega Conjecture this \Pi_2-sentence is \Omega-provable.

So we know something about what happens in set-generic extensions. But this tells us nothing about what happens in more general extensions, even those which satisfy \phi.

Further these are arguably exactly the \Pi_2 sentences which generate the \Pi_2 consequences of strong rank maximality.

Why? You want to claim that if a \Pi_2 sentence \psi holds in all strongly maximal universes then it follows from some \phi as above. Equivalently, if a \Sigma_2 sentence is compatible with all large cardinal axioms then it holds in some strongly maximal universe. But again we have a problem with the concept of “all large cardinal axioms”. If this really could be formulated then we have the problem sentence “For some large cardinal axiom \phi there is no transitive model of \phi”. This is \Sigma_2, compatible with all large cardinal axioms but false in all strongly maximal universes. So you are forced to fix a bound on the large cardinal axioms you consider, but then you lose consequences of strong maximality.
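Schematically, writing LC for a (hypothetical) definable class of sentences formalizing “all large cardinal axioms” — and it is exactly the existence of such an LC that is at issue — the problem sentence is:

\psi \ \equiv \ \exists \varphi \in \text{LC} \ \neg \exists M \, (M \text{ is transitive and } M \models \varphi)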

It seems better to drop set-genericity and consider the IMH for strongly maximal universes. Honzik and I verified the consistency of the IMH for ordinal maximal (= \#-generated) universes using a variant of Jensen coding, and an interesting challenge is to do this for strongly maximal universes (i.e. add the large cardinals). As you know, the mathematics is now quite hard, as so far we lack the inner model theory and Jensen coding theorems that are available for the smaller large cardinals.

Here is the second point. If M_1 is a rank initial segment of M_2 then every sentence which is \Omega-provable in M_2 is \Omega-provable in M_1. \Omega-proofs have a notion of (ordinal) length, and in the ordering of the \Omega-provable sentences by proofs of shortest length, the sentences which are \Omega-provable in M_2 are an initial segment of the sentences which are \Omega-provable in M_1 (and they could be the same of course).

Putting everything together, the \Pi_2-consequences of the strong rank maximality of a given model M make perfect sense (no pairwise incompatibility issues) and this set of \Pi_2-sentences is actually definable in M.

This connection with \Omega-logic naturally allows one to adapt strong rank maximality into the HP framework: one restricts to extensions in which the \Omega-proofs of the initial model are not de-certified in the extension (for example, if a \Pi_2 sentence is \Omega-provable in the initial model M, it is \Omega-provable in the extension).

This includes set-forcing extensions but also many other extensions. So in this sense \Omega-logic is not just about set-forcing. \Omega-logic is about trying to clarify (or even make sense of) the \Pi_2 consequences of large cardinals (and how is this possibly not relevant to a discussion of truth in set theory?).

As I said above, I fail to see how you reach the conclusion that the \Pi_2 consequences of large cardinals are those generated by your sentences \phi. In any case your conclusion is that the \Pi_2 consequences of large cardinals are just the local versions of the very same large cardinal axioms. How does this constitute a clarification?

My concern with HP is this. I do not see a scenario in which HP even with strong rank maximality can lead anywhere on the fundamental questions involving the large cardinal hierarchy.  The reason is that strong rank maximality considerations will force one to restrict to the case that PD holds in V at which point strong rank maximality notions require consulting at the very least the \Omega-logic of V and this is not definable within the hyper-universe of V.

I don’t know what fundamental questions involving the large cardinal hierarchy you refer to. Consider the IMH for strong maximality. A consistency proof for this would be very revealing. Of course any model of this will satisfy PD, but this does not mean that PD holds in V. As I said, it is likely that the \Gamma-logic of any such universe will be definable in that universe, where \Gamma-logic is the (in my view more natural and better motivated) version of \Omega-logic which refers to arbitrary outer models and not just to set-generic ones.

Granting this, genuine progress on CH is even less plausible since how could that solution ever be certified in the context of strong rank maximality? A solution to CH which is not compatible with strong rank maximality is not a solution to CH since it is refuted by large cardinals.

No. Consider the \textsf{SIMH}^\# (the IMH with absolute parameters for ordinal maximal = \#-generated universes). I conjecture that this is consistent. Just as the \textsf{IMH}^\# is consistent with strong maximality, I expect the same for the \textsf{SIMH}^\#. And the \textsf{SIMH}^\# implies that the continuum is very large.

You will disagree and perhaps that is the conclusion of this discussion: we simply disagree.

But here is a challenge for HP, and this does not presuppose any conception or application of a notion of strong rank maximality.

Identify a new family of axioms of strong infinity beyond those which have been identified to date (a next generation of large cardinal axioms) or failing this, generate some new insight into the hierarchy of large cardinal axioms we already have. For example, HP does not discriminate against the consistency of a Reinhardt cardinal. Can HP make a prediction here? If so what is that prediction?

This does sound interesting, but I confess that I don’t quite understand it. The HP focuses on truth, not on consistency. It seems that the next generation of axioms will not be of the large cardinal variety (consistency strength appears to be already exhausted; I think it likely that Reinhardt cardinals are inconsistent even without AC) but will concern new and subtle forms of absoluteness / powerset maximality.

Perhaps the most pressing challenge is to justify large cardinal existence as a consequence of well-justified criteria for the selection of preferred universes. This requires a new idea. Some have suggested structural reflection, but I don’t find this convincing due to the arbitrariness in the relationship between V and its reflected versions.

Many thanks, Hugh, for your stimulating comments,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Sy,

What form of ordinal maximality are you using? In my paper with Arrigoni I had a weaker form, with Honzik a stronger one. In the latter version, a countable ordinal maximal universe remains ordinal maximal in any outer model of V.

The notion of ordinal maximality to which I was referring was that in the bulletin paper and that which is used to formulate IMH* there.

Indeed, I do strongly endorse the consistency of AD and much more. I do not subscribe to the view that we need the inner model programme to justify the consistency of large cardinals. I think that there is enough evidence from the role of large cardinals in establishing consistency results to justify their own consistency, and indeed I would go further and assert their existence in inner models.

I really do not understand the basis for your conviction in the consistency of PD (or AD or ZFC + \omega many Woodin cardinals).

Consider the Extended Riemann Hypothesis (ERH). ERH passes all the usual tests cited for PD (or AD or ZFC + \omega many Woodin cardinals) as the basis of its consistency: tremendous structure theory, implications of theorems which are later proved by other means, etc.

Yet there does not seem to be any conviction in the Number Theory community that even the Riemann Hypothesis is true (and of course RH follows from the consistency of ERH). Look at the statement on the rules of the Millennium Prizes. A counterexample to RH is not unconditionally accepted as a solution. If there were any consensus that RH is true, this escape clause would not be in the stated rules.

Further, the structure theory you cite as evidence for Con PD is in the context of PD etc. If one rejects that context then how can one maintain the conviction that Con PD is true?

Rephrased: The fact that the models of T (if any exist) have a rich internal theory is not evidence that there are any models of T. Something else is needed.

I think this is another example of the fundamental difference in our points of view. Yes, “iterable and correct” inner models are important for the relationship between large cardinals and descriptive set theory. But the fundamental concept of inner model is simply a transitive class containing all the ordinals and modeling ZFC; there is no technical requirement of “iterability” involved. Thus again we have the difference between the interpretation of a basic notion in (a particular area of) set-theoretic practice and its natural interpretation in discussions of set-theoretic truth. And there is no hope of producing useful inner models which are correct for second-order arithmetic without special assumptions on V, such as the existence of large cardinals. And even if one puts large cardinal axioms into the base theory, one still has no guarantee of even \Sigma^1_3-correctness for outer models which are not set-generic. So to say that large cardinals “freeze projective truth” is not accurate, unless one adopts a set-generic interpretation of “freezing”.

I completely agree this is the basic issue over which we disagree.

The position that all extensions, class or set generic, are on an equal footing is at the outset already a bias against large cardinals. The canonical objects identified by large cardinals, such as the generalizations of 0^\#, can disappear (i.e. cease to be recognized as such) if one passes to a class forcing extension.

Rephrased: The claim that an inner model is just a proper class is a bias against large cardinals. Once one passes the level of one Woodin cardinal, the existence of proper class inner models becomes analogous to the existence of transitive set models in the context of just ZFC. It has no real structural implications for V, particularly in the context of, for example, the IMH (none that are not already implied by the existence of an inner model of just one Woodin cardinal). This fact is not irrelevant to HP since it lies at the core of the consistency proof of the IMH.

Let me explain further and also clarify the relationship between \Omega-logic and set forcing. For this discussion and to simplify things grant that the \Omega Conjecture is provable and that the base theory is now ZFC + a proper class of Woodin cardinals.

To a set theorist, a natural variation of ordinal maximality, let’s call this strong rank maximality, is that there are rank preserving extensions of M in which large cardinals exist above the ordinals of M (and here one wants to include all “possible large cardinals” whatever that means).

Question 1:  How can we even be sure that there is no pairwise incompatibility here which argues against the very concept of the \Pi_2 consequences of strong rank maximality?

Question 2:  If one can make sense of the \Pi_2 consequences of strong rank maximality and given that M is strongly rank maximal, can the \Pi_2 consequences of this for M be defined in M?

Here is the first point. If there is a proper class of X-cardinals (and accepting also that an X-cardinal is preserved under forcing by partial orders of size less than the X-cardinal), then in every set-generic extension there is a proper class of X-cardinals and so in every set-generic extension, the sentence \phi holds where

\phi = “Every set A belongs to a set model with an X-cardinal above A.”

\phi is a \Pi_2-sentence and therefore by the \Omega Conjecture this \Pi_2-sentence is \Omega-provable. Further these are arguably exactly the \Pi_2 sentences which generate the \Pi_2 consequences of strong rank maximality.
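To see the complexity: written out, \phi has the form

\phi \ \equiv \ \forall A \, \exists N \, (A \in N \text{ and } N \text{ is a set model with an X-cardinal above } A)

a universal quantifier over sets followed by an existential one, hence \Pi_2.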

Here is the second point. If M_1 is a rank initial segment of M_2 then every sentence which is \Omega-provable in M_2 is \Omega-provable in M_1. \Omega-proofs have a notion of (ordinal) length, and in the ordering of the \Omega-provable sentences by proofs of shortest length, the sentences which are \Omega-provable in M_2 are an initial segment of the sentences which are \Omega-provable in M_1 (and they could be the same of course).

Putting everything together, the \Pi_2-consequences of the strong rank maximality of a given model M make perfect sense (no pairwise incompatibility issues) and this set of \Pi_2-sentences is actually definable in M.

This connection with \Omega-logic naturally allows one to adapt strong rank maximality into the HP framework: one restricts to extensions in which the \Omega-proofs of the initial model are not de-certified in the extension (for example, if a \Pi_2 sentence is \Omega-provable in the initial model M, it is \Omega-provable in the extension).

This includes set-forcing extensions but also many other extensions. So in this sense \Omega-logic is not just about set-forcing. \Omega-logic is about trying to clarify (or even make sense of) the \Pi_2 consequences of large cardinals (and how is this possibly not relevant to a discussion of truth in set theory?).

My concern with HP is this. I do not see a scenario in which HP even with strong rank maximality can lead anywhere on the fundamental questions involving the large cardinal hierarchy.  The reason is that strong rank maximality considerations will force one to restrict to the case that PD holds in V at which point strong rank maximality notions require consulting at the very least the \Omega-logic of V and this is not definable within the hyper-universe of V.

Granting this, genuine progress on CH is even less plausible since how could that solution ever be certified in the context of strong rank maximality? A solution to CH which is not compatible with strong rank maximality is not a solution to CH since it is refuted by large cardinals.

You will disagree and perhaps that is the conclusion of this discussion: we simply disagree.

But here is a challenge for HP, and this does not presuppose any conception or application of a notion of strong rank maximality.

Identify a new family of axioms of strong infinity beyond those which have been identified to date (a next generation of large cardinal axioms) or failing this, generate some new insight into the hierarchy of large cardinal axioms we already have. For example, HP does not discriminate against the consistency of a Reinhardt cardinal. Can HP make a prediction here? If so what is that prediction?

Regards,
Hugh

Re: Paper and slides on indefiniteness of CH

Dear Sy,

You have asked for a response so I suppose I should respond. I will be brief and take just a few of your points in order.

Indeed regarding \Omega-logic I will go further and claim that it has little relevance for discussions of set-theoretic truth anyway.

If there is a proper class of Woodin cardinals and the \Omega Conjecture holds then the set of \Sigma_2 sentences which can be forced to hold by set forcing is \Delta_2-definable. If the Strong \Omega Conjecture holds then this set is definable in H(\mathfrak c^+).

On the other hand if V = L then this set is \Sigma_2-complete.
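In symbols, the set in question is

\Phi = \{ \sigma : \sigma \text{ is } \Sigma_2 \text{ and } V^{\mathbb P} \models \sigma \text{ for some partial order } \mathbb P \}

\Delta_2-definable under the \Omega Conjecture (given a proper class of Woodin cardinals), yet \Sigma_2-complete if V = L.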

How is this not relevant to a discussion of truth?

In your Section 6 you discuss two programmes, \Omega-logic and the Inner Model Programme. In my view, the latter is not worthy of much discussion, as it is still just a set of unverified conjectures, despite having been launched by Dodd and Jensen about 40(?) years ago.

You seem to be intentionally provocative here.

If the existence of a measurable limit of Woodin cardinals is inconsistent then these conjectures along with the Strong \Omega Conjecture are all true.

Are you saying that the potential inconsistency of a measurable limit of Woodin cardinals is not “worthy of much discussion”? This is a surprisingly strong endorsement of the consistency of AD (and much more).

Let me now briefly explain what the HP is about… the idea behind the programme is to make no biased assumptions based on mathematical concepts like genericity, but rather to select preferred pictures of V based on intrinsic philosophical principles such as maximality (another is ‘omniscience’). The challenge in the programme is to arrive at a stable set of criteria for preferred universes based on such principles. This will take time (the programme is still quite new). Also the mathematics is quite hard (for example, sophisticated variants of Jensen coding are required). The current status is as follows: The programme suggests that small large cardinals exist, large large cardinals exist in inner models and CH is very false (the continuum is very large). But there are many loose ends at the moment, both philosophical and mathematical. It is too early to predict what the long-term conclusions will be. But it is clear to me that a solution to the continuum problem is quite possible via this programme; indeed there is a proposed criterion, the Strong Inner Model Hypothesis, which would lead to this outcome. A serious mathematical obstacle is the difficulty in showing that the SIMH is consistent.

I see absolutely no basis for the claim that HP suggests that the existence of inner models for (large) large cardinals holds (within the preferred universes). At best, as you implement it, HP just seems to be able to suggest that if inner models of large large cardinals can exist then these inner models do exist. There is no insight here as to whether the inner models actually exist. The reason of course is that there is no difference in your treatment between inner models of large large cardinals and inner models which satisfy some arbitrary sentence \phi.

The inner models relevant to current practice in Set Theory are correct inner models, and their existence (at the level of infinitely many Woodin cardinals) implies that PD holds in V. Rephrased: the core model technology for building inner models can really only build correct (iterable) inner models once one passes even the level of 1 Woodin cardinal. This is why, in the restricted setting of for example V = L[x] for some real x, core model methods cannot go past 1 Woodin cardinal.

Example: The proof of the theorem of Steel that PFA implies the existence of an inner model with infinitely many Woodin cardinals shows that PFA implies PD (in fact that AD holds in L(\mathbb R)). There is no other proof known. This phenomenon is ubiquitous in Set Theory. Combinatorial statements are shown to imply say PD as a by-product of establishing lower bounds for their consistency strength.

There is a serious issue in HP with regard to the structure of the hyper-universe (which you define as the set of all countable transitive models of ZFC). The formulation of ordinal maximality requires a fairly rich structure, since the property that a countable transitive model M satisfies ordinal maximality is not absolute to transitive models of ZFC in which M is countable.

Consider the following principle where H denotes the hyper-universe and (H)^M denotes the hyper-universe as defined in M.

(Hyper-reflection) There exist universes M within H such that (H)^M is an elementary substructure of H.

Does one reject hyper-reflection?  Why?

If one allows hyper-reflection then it seems quite reasonable to take the position that the preferred universes satisfy hyper-reflection. But no witness of hyper-reflection can satisfy IMH or any of the stronger versions such as IMH*.

One could take the position that H should satisfy PD (by asserting that for each n, H verifies the existence of M_n^\#, where M_n is the minimal iterable inner model with n Woodin cardinals), in which case, taking the witnesses of hyper-reflection as preferred universes, one concludes that PD is true in the preferred universes.

In summary the entire approach of HP seems to start from a basic premise (a failure of richness of hyper-universe) that is biased against ever concluding PD is true in the preferred universes. If the hyper-universe is taken to be limited then it is not surprising that one comes to a definition of preferred universes which is similarly limited since one is using properties of the universes within the hyper-universe in defining those preferred universes.

More generally, the failure of PD is a higher notion of the inconsistency of PD. Rejecting PD has enormous structural consequences for V just as rejecting 0^\# does.  It seems to me that your entire implementation of HP is just another version of this.

But this is not an argument against PD.

Regards,
Hugh