Dear Sy,

You have asked for a response so I suppose I should respond. I will be brief and take just a few of your points in order.

Indeed, regarding Ω-logic I will go further and claim that it has little relevance for discussions of set-theoretic truth anyway.

If there is a proper class of Woodin cardinals and the Ω Conjecture holds, then the set of Σ₂ sentences which can be forced to hold by set forcing is Δ₂-definable. If the Strong Ω Conjecture holds, then this set is definable in H(c⁺).

On the other hand, if V = L then this set is Σ₂-complete.

How is this not relevant to a discussion of truth?

In your Section 6 you discuss two programmes, Omega-logic and the Inner Model Programme. In my view, the latter is not worthy of much discussion, as it is still just a set of unverified conjectures, despite it having been launched by Dodd and Jensen about 40(?) years ago.

You seem to be intentionally provocative here.

If the existence of a measurable limit of Woodin cardinals is inconsistent, then these conjectures, along with the Strong Ω Conjecture, are all true.

Are you saying that the potential inconsistency of a measurable limit of Woodin cardinals is not “worthy of much discussion”? This is a surprisingly strong endorsement of the consistency of AD (and much more).

Let me now briefly explain what the HP is about….the idea behind the programme is to make no biased assumptions based on mathematical concepts like genericity, but rather to select preferred pictures of V based on intrinsic philosophical principles such as maximality (another is ‘omniscience’). The challenge in the programme is to arrive at a stable set of criteria for preferred universes based on such principles. This will take time (the programme is still quite new). Also the mathematics is quite hard (for example sophisticated variants of Jensen coding are required). The current status is as follows: The programme suggests that small large cardinals exist, large large cardinals exist in inner models and CH is very false (the continuum is very large). But there are many loose ends at the moment, both philosophical and mathematical. It is too early to predict what the long-term conclusions will be. But it is clear to me that a solution to the continuum problem is quite possible via this programme; indeed there is a proposed criterion, the Strong Inner Model Hypothesis which will lead to this outcome. A serious mathematical obstacle is the difficulty in showing that the SIMH is consistent.

I see absolutely no basis for the claim that HP suggests the existence of inner models for (large) large cardinals (within the preferred universes). At best, as you implement it, HP just seems able to suggest that if inner models of large large cardinals can exist, then these inner models do exist. There is no insight here as to whether the inner models actually exist. The reason, of course, is that there is no difference in your treatment between inner models of large large cardinals and inner models which satisfy some arbitrary sentence φ.

The inner models relevant to current practice in Set Theory are *correct* inner models, and their existence (at the level of infinitely many Woodin cardinals) implies that PD holds in V. Rephrased: the core model technology for building inner models can really only build correct (iterable) inner models once one passes even the level of 1 Woodin cardinal. This is why, in the restricted setting of, for example, V = L[x] for some real x, core model methods cannot go past 1 Woodin cardinal.

**Example:** The proof of the theorem of Steel that PFA implies the existence of an inner model with infinitely many Woodin cardinals shows that PFA implies PD (in fact that AD holds in L(ℝ)). There is no other proof known. This phenomenon is ubiquitous in Set Theory. Combinatorial statements are shown to imply, say, PD as a by-product of establishing lower bounds for their consistency strength.

There is a serious issue in HP with regard to the structure of the hyper-universe (which you define as the set of all countable transitive models of ZFC). The formulation of ordinal maximality requires a fairly rich structure, since the property that a countable transitive model M satisfies ordinal maximality is not absolute to transitive models of ZFC in which M is countable.

Consider the following principle, where H denotes the hyper-universe and H^M denotes the hyper-universe as defined in M.

**(Hyper-reflection)** There exist universes M within H such that H^M is an elementary substructure of H.

Does one reject hyper-reflection? Why?

If one allows hyper-reflection then it seems quite reasonable to take the position that the preferred universes satisfy hyper-reflection. But no witness of hyper-reflection can satisfy IMH or any of the stronger versions such as IMH*.

One could take the position that H should satisfy PD (by asserting that for each n, H verifies the existence of M_n# where M_n is the inner model of n Woodin cardinals), in which case, taking the witnesses of hyper-reflection as preferred universes, one concludes that PD is true in the preferred universes.

In summary, the entire approach of HP seems to start from a basic premise (a failure of richness of the hyper-universe) that is biased against ever concluding PD is true in the preferred universes. If the hyper-universe is taken to be limited, then it is not surprising that one comes to a definition of preferred universes which is similarly limited, since one is using properties of the universes within the hyper-universe in defining those preferred universes.

More generally, the failure of PD is a higher notion of the inconsistency of PD. Rejecting PD has enormous structural consequences for V, just as rejecting 0# does. It seems to me that your entire implementation of HP is just another version of this.

But this is not an argument against PD.

Regards,

Hugh