As I said, if you synthesise the IMH with a weak form of reflection then you will contradict large cardinals. For example, if you relativise the IMH to models which are generated only by a presharp that is iterable to the height of the model, then you will contradict the existence of #'s for reals. The only synthesis that is friendly to large cardinals is the one with the strongest possible form of reflection, given by #-generation. More on this below:
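To fix notation for what follows, here is a rough statement of the two ingredients of the synthesis. This is only my informal paraphrase, not the official formulation; the precise definitions are in the literature:

```latex
% IMH (Inner Model Hypothesis), roughly: if a first-order sentence
% \varphi holds in an inner model of some outer model ("thickening")
% W of V with the same ordinals, then \varphi already holds in an
% inner model of V.
\mathrm{IMH}:\quad
  \forall\varphi\;\bigl[\;\exists W \supseteq V\ \exists M \subseteq W\,
  (M \models \varphi)\ \longrightarrow\ \exists M' \subseteq V\,
  (M' \models \varphi)\;\bigr]

% #-generation, roughly: there is an iterable presharp (N, U) whose
% iterates N = N_0 \to N_1 \to \cdots have critical points
% \kappa_0 < \kappa_1 < \cdots, and V is the union of the "lower
% parts" of the iterates:
V \;=\; \bigcup_{i \,\in\, \mathrm{Ord}} \bigl(V_{\kappa_i}\bigr)^{N_i}
```

The weak form under discussion demands iterability only up to the height of the model; the full form demands iterability through all the ordinals of the "lengthenings" of V.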
Consider the following extreme version of IMH#:
Suppose M is a ctm and M \models ZFC. Then M witnesses extreme-IMH# if:
- There is a thickening of M, satisfying ZFC, in which M is a #-generated inner model.
??? This just says that M is #-generated to its own height! It is a weakened form of #-generation.
- M witnesses IMH in all thickenings of M, satisfying ZFC, in which M is a #-generated inner model.
This makes no sense to me. The point of the synthesis is to say that IMH holds for models that satisfy reflection. You are only looking at models which satisfy weak reflection, i.e. which are presharp-generated up to their height! How do you motivate this? Even in the basic V-logic you get iterability up to the least admissible past the height. Of course we want our presharps to stay iterable past the height of the model; this is necessary to capture reflection to its fullest.
One advantage of extreme-IMH# is that the formulation does not need to refer to sharps in the hyperuniverse (and so there is a natural variation which can be formulated just using the V-logic of M). This also implies that the property that M witnesses extreme-IMH# is of lower complexity than the corresponding property for IMH#.
Yes, your weak version of reflection can be captured in V-logic. But this is not much of an advantage, as it is heavily outweighed by the disadvantages: we don't want weak reflection (weak #-generation), we want reflection (#-generation), and this is captured by the natural infinitary logics fixing V that are defined in arbitrary "lengthenings" of V obtained by adding new L-levels (like the "lengthenings" that Pen, Geoffrey and I have discussed, except that instead of iterating powerset to get new von Neumann ranks, one iterates *definable* powerset to generate new Gödel ranks). So once again, full #-generation is a property captured by logics associated to "lengthenings" of V, just like the IMH. It simply makes no sense to stop with weak #-generation, as there is no advantage in doing so.
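Spelled out, the "Gödel-lengthenings" I mean here are the levels of the constructible hierarchy built over V taken as a set, in contrast to the von Neumann lengthenings obtained by iterating full powerset. A sketch of the intended hierarchy (again my paraphrase, with Def denoting first-order definable powerset):

```latex
% Goedel-lengthenings: iterate *definable* powerset past Ord,
% i.e. build new L-levels on top of V:
L_0(V) = V, \qquad
L_{\alpha+1}(V) = \mathrm{Def}\bigl(L_\alpha(V)\bigr), \qquad
L_\lambda(V) = \bigcup_{\alpha<\lambda} L_\alpha(V)\ \ \text{for limit } \lambda.

% Contrast: von Neumann lengthenings iterate the full powerset,
% adding new ranks V_{\mathrm{Ord}+\alpha} above V.
```

Full #-generation is then expressible in the infinitary logics associated to arbitrary such lengthenings, not just in the V-logic of M.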
Given the motivations you have cited for IMH# etc., it seems clear that extreme-IMH# is the correct result of synthesizing IMH with reflection, unless it is inconsistent.
No! The motivation I cited was to assert the IMH for models that obey reflection and to do this you need to use the correct form of reflection, not what you are suggesting.
Thm: Assume that every real has a sharp and that some countable ordinal is a Woodin cardinal in a definable inner model. Then there is a ctm which witnesses that extreme-IMH# holds.
However, unlike IMH#, extreme-IMH# is not consistent with all large cardinals.
Thm: If M satisfies extreme-IMH# then there is a real x in M such that, in M, x# does not exist.
Originally, when I formulated #-generation, I had the weaker form that you are suggesting in mind, knowing this result quite well. But later I came to the full form of #-generation and this problem of large cardinal nonexistence disappeared. I wasn't actually looking for a way to rescue large cardinals, but that was a nice consequence of the correct point of view.
Once again: Any form of reflection weaker than (full) #-generation will kill large cardinals when synthesised with the IMH. And even full #-generation is perfectly compatible with width actualism; it just requires consideration of logics in arbitrary “Gödel-lengthenings” of V.
This seems to be a bit of an issue for the motivation of IMH#. How will you deal with this?