Dear Hugh and Pen,
1. You proposed:
Coding Assumption: if M is weakly #-generated then M can be coded by a real in an outer model which is weakly #-generated
I can’t see why this would be true. One needs alpha-iterable presharps for each alpha to witness the weak #-generation of M, and although each of these presharps can be preserved by some real coding M, there is no single real that does this for all alpha simultaneously.
Instead, I realise that the theory-version of (*) results in a statement for countable models which is a bit weaker than what I said. So I have to change the formulation of (*) again! (Peter, before you go crazy, let me again emphasize that this is how the HP works: We make an investigation of maximality criteria and only through a lot of math and contemplation do we start to understand what is really going on. It requires time and patience.)
OK, the theory version would say: #-generation for V is consistent in V-logic (formulated in any lengthening of V) and for every phi, the theory in V-logic which says that V is #-generated and phi holds in an outer model M of V which is #-generated proves that phi holds in an inner model of V.
What this translates to for a countable model V is then this:
(*) V is weakly #-generated and for all phi: Suppose that whenever g is a generator for V (iterable at least to the height of V), phi holds in an outer model M of V with a generator which is at least as iterable as g. Then phi holds in an inner model of V.
For each phi the above hypothesis implies that for each countable alpha, phi holds in an outer model of V with an alpha-iterable generator. But if V is in fact fully #-generated then the hypothesis implies that phi holds in an outer model of V which is also fully #-generated. So now we get consistency just like we did for the original oversimplified form of (*) for countable models.
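To make the quantifier structure of (*) easier to scan, here is an informal schematic rendering in LaTeX. This is only a sketch: "gen(g, V)" and "it(g) <= it(h)" are ad hoc abbreviations (for "g is a presharp witnessing weak #-generation of V, iterable at least to the height of V" and "the generator h is at least as iterable as g"), not standard notation.

```latex
% Informal schematic of (*). Assumes throughout that V is weakly #-generated.
% gen(g,V) and it(g) <= it(h) are ad hoc abbreviations, not standard notation.
\[
(*)\colon\quad \forall \varphi\;\Bigl[\;
  \forall g\,\bigl(\mathrm{gen}(g,V) \rightarrow
    \exists M \supseteq V \text{ outer, with generator } h,\;
    \mathrm{it}(g) \le \mathrm{it}(h),\; M \models \varphi\bigr)
  \;\longrightarrow\;
  \exists N \subseteq V \text{ inner},\; N \models \varphi
\;\Bigr]
\]
```

The two consequences drawn above then read off directly: instantiating g at each countable alpha gives phi in outer models with alpha-iterable generators, while full #-generation of V lets the hypothesis be applied to a fully iterable generator.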
2. You said:
I think if our evolving understanding of the large cardinal hierarchy rests primarily on the context of V = Ultimate L then very likely the rich generic extensions are not playing much of a role in understanding the large cardinal hierarchy … This for me would build the case for V = Ultimate L and against these rich extensions. It would then take something quite significant in the theory of the rich extensions to undermine that.
Sorry, I still don’t get it. Forcing extensions of L don’t play much of a role in understanding small large cardinals, do they? Yet if 0# provably does not exist I don’t see the argument for V = L; in fact I don’t even see the argument for CH. Now why wouldn’t you favour something like “V is a forcing extension of Ultimate L which satisfies MM” or something like that?
3. The problem
(*) Suppose kappa^+ is not correctly computed by HOD for any infinite cardinal kappa. Must weak square hold at some singular strong limit cardinal?
actually grew out of my recent AIM visit with Cummings, Magidor, Rinot and Sinapova. We showed that the successor of a singular strong limit of cofinality omega can be large in HOD, and I started asking about Weak Square. It holds at that singular cardinal in our model.
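For reference, here is the standard (textbook) formulation of Jensen's weak square principle at kappa, written in LaTeX; this is provided only as background and makes no claim about the specific model from the AIM work.

```latex
% Jensen's weak square principle \square^*_\kappa (standard definition).
% Requires amsmath and amssymb.
$\square^*_\kappa$ holds iff there is a sequence
$\langle \mathcal{C}_\alpha : \alpha < \kappa^+,\ \alpha \text{ limit} \rangle$
such that for every limit $\alpha < \kappa^+$:
\begin{itemize}
  \item $1 \le |\mathcal{C}_\alpha| \le \kappa$;
  \item every $C \in \mathcal{C}_\alpha$ is club in $\alpha$ with
        $\operatorname{ot}(C) \le \kappa$;
  \item (coherence) if $C \in \mathcal{C}_\alpha$ and $\beta$ is a limit
        point of $C$, then $C \cap \beta \in \mathcal{C}_\beta$.
\end{itemize}
```

Allowing up to kappa-many clubs at each level is what distinguishes weak square from Jensen's full square principle, where each level carries a single club.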
You have caved in to Peter’s P’s and V’s (Predictions and Verifications)!
The notion of “good set theory” is too vague to do much work here. Different people have different views of what “good set theory” amounts to. There’s little intersubjective agreement. In my view, such a vague notion has no place in a foundational enterprise. The key notion is evidence, evidence of a form that people can agree on.
Then you said:
I probably should have stepped in at the time to remark that I’ve been using the term ‘good set theory’ for the set theory that enjoys the sort of evidence Peter is referring to here …
Now I want to defend “Defending”! There you describe “the objective reality that set-theoretic methods track” in terms of “depth, fruitfulness, effectiveness, importance and so on” (each with the prefix “mathematical”); there is no mention of P’s and V’s. And I think you got it dead right there, so why back off now? I want my former Pen back!
As I said, I do agree that P’s and V’s are of value: they make a “good set theory” better, but they are not the be-all and end-all of “good set theory”! For a set theory to be good there is no need for it to make “verified predictions”; look at Forcing Axioms. And do you really think that they solve the “consensus” problem? Isn’t there also a lack of consensus about what predictions a theory should make and how devastating it is to the theory if they are not verified? Do you really think that we working set-theorists are busily making “predictions”, hopefully before Inspector Koellner of the Set Theory Council makes his annual visit to see how we are doing with their “verification”?
Pen, I really think you’ve made a wrong turn here. You were basing your Thin Realism very sensibly on what set-theorists actually do in their practice, what they think is important, what will lead to exciting new developments. P’s and V’s are a side issue, sometimes of value but surely not central to the practice of “good set theory”.
There is another point. Wouldn’t you want a discussion of truth in set theory to be receptive to what is going on in the rest of mathematics? Everyone keeps ignoring this point in this thread, despite my repeated attempts to bring it forward. Does a functional analyst or algebraist care about Ultimate L or the HP? Of course not! They might laugh if they were to hear about the arguments that we have been having, which for them are just esoteric and quite irrelevant to mathematics as a whole. Forcing Axioms can at least lay claim to being really useful both for set theory and for other areas of mathematics; surely they have to be part of any theory of truth. Anyone who makes claims about set-theoretic truth, be it via Ultimate L or the HP or anything else, and ignores them is missing something important. And won’t it be embarrassing if, 100 years from now, set-theorists announce that they have finally figured out what the “correct axioms for set theory” are, and mathematicians from other fields don’t care, as the “new and true axioms” are either quite useless for what they are doing or even conflict with the axioms that they would like to have for their own “good mathematics”?
Hence my conclusion is that the only sensible move for us to make is to gather evidence from all three sources: Set theory as an exciting and rapidly-developing branch of math and as a useful foundation for math, together with evidence we can draw from the concept of set itself via the maximality of V in height and width. None of these three types (which I have labelled as Types 1, 2 and 3, respectively) should be ignored. And we must also recognise that the procedure for uncovering evidence of these three types depends heavily on the type in question. “Defending” (even without P’s and V’s) teaches us how the process works in Type 1. For Type 2 we have to get into the trenches and see what the weapons being used in core mathematics are, and how we can help when independence infiltrates. For Type 3 it has to be what I am doing: an open-minded, sometimes sloppy and constantly changing (at least at the start) “shotgun approach” to investigating maximality criteria with the optimistic and determined aim of seeing a clear picture after a lot of very hard work is accomplished. The math is very challenging and as you have seen it is even hard to get things formulated properly. But I have lost patience with, and will now ignore, all complaints that “it cannot be done”, complaints based on nothing more than unjustified pessimism.
Yes, there is a lack of consensus regarding “good set theory”. But Peter is plain wrong to say that it has “no place in a foundational enterprise”. It has a very important place, but to reach a consensus about what the “correct” axioms of set theory should be, the evidence from “good set theory” must be augmented, not just by P’s and V’s but also by other forms of evidence coming from math outside of set theory and from the study of the maximality of V in height and width.
Pen, I would value your open-minded views on this. I hope that you are not also going to reduce “good set theory” to P’s and V’s and complain that the HP “cannot be done”.