Dear Hugh (likely my last mail for a while, due to my California trip),

On Sun, 19 Oct 2014, W Hugh Woodin wrote:

More details: Take the IMH as an example. It is expressible in V-logic. And V-logic is first-order over the least admissible (Goedel-) lengthening of V (i.e. we go far enough in the L-hierarchy built over V until we get a model of KP). We apply LS to this admissible lengthening, that's all.

This is of course fine for IMH. But this does not work at all for SIMH#. One really seems to need the hyperuniverse for that.
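For reference, the IMH (Inner Model Hypothesis) in its standard formulation, included here only as a reminder of what is being expressed in V-logic:

```latex
% The Inner Model Hypothesis (IMH):
% If a first-order sentence holds in an inner model of some outer
% model of V, then it already holds in an inner model of V itself.
\varphi \text{ holds in an inner model of an outer model of } V
\;\Longrightarrow\;
\varphi \text{ holds in an inner model of } V.
```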

Details: SIMH# is _not_ in general a first order property of M in L(M) or even in L(N,U) where (N,U) witnesses that M is #-generated.

You are of course right and I have been suppressing the technical details required to deal with this to avoid complicating the discussion. The point is that with any property that refers to "thickenings" one must make use of V-logic (let's call it M-logic, to match your notation). Then what I have been suppressing is that #-generation is to be taken as the consistency in M-logic (extended with new axioms making the ordinals standard) of the obvious theory expressing the iterability of a presharp that generates M. LS can be applied because any presharp that embeds into an iterable presharp is iterable. Handling variants of the SIMH# takes more work, but can be done. Of course the difficulty is that we are not dealing with actual objects but with the consistency of theories. I'll write more about this when I get a chance.
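Roughly, and still suppressing the same technical details, #-generation is usually formulated as follows (the presharp notation here follows the usual HP presentation and is included only as a sketch):

```latex
% A presharp is a pair (N,U): N a model of ZFC^- with a largest
% cardinal \kappa, inaccessible in N, and U an N-ultrafilter on \kappa.
% (N,U) is iterable if all of its iterated ultrapowers are well-founded.
% M is #-generated by an iterable presharp (N,U) with iteration
% (N_i, U_i) and critical points \kappa_i, i \in \mathrm{Ord}, when
M \;=\; \bigcup_{i \in \mathrm{Ord}} \, (V_{\kappa_i})^{N_i}.
```

The consistency statement in M-logic referred to above then asserts, as a theory rather than via an actual object, the existence of such an iterable (N,U).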

Of course if one is happy to adopt the Hyperuniverse from the start (without the “reduction”) then these technical issues disappear.

(Fine, let's call it HP and not CTMP.) HP seems now to be a one-principle program (SIMH#).

This is nonsense. One may bring in cardinal maximality, inner model reflection or various forms of unreachability as well. You have a SIMH# fixation.

Further progress seems to require, at the very least, understanding the implications of SIMH#.

There could be progress with other criteria in the meantime.

As I said in my last email on this, it is impossible to have a mathematical discussion (now) of SIMH#, since it has been formulated so that one cannot do anything with it without first solving a number of problems which look extremely difficult. And I am not even talking about the consistency problem of SIMH#.

Just to be clear: this is not a criticism of SIMH#; it is just saying that it is premature to have a mathematically oriented discussion of it, and therefore of HP.

I partly agree: The important issues now concern the formulation of the HP, not the details of the mathematical analysis of the maximality criteria.

So (and this is why I have repeated myself above), I do not yet draw the distinction you do between HP and the study of ctm's, because there is not yet enough mathematical data for me to do this.

This is quite ridiculous. The study of ctm’s is obviously much, much broader than the specific types of questions relevant to the HP (maximality criteria).

Hopefully this situation will clarify as more data becomes available.

That is what I have been trying, so far without success, to say to you for several weeks. The important task for now is simply to recognise the legitimacy of the approach, not to evaluate the results of the programme, which will take time.

Have a productive week in California!

Thanks!

Concerning the future of HP, I will make a prediction: HP ends up with PD.

This has already almost happened a number of times (strong unreachability and the unreachability of V by HOD).

I think this is very likely, plus a lot more!

Best,

Sy