
Re: Paper and slides on indefiniteness of CH: My final mail to the Thread

Dear Sy,

Before we close this thread, it would be nice if you could state what the current version of \textsf{IMH}^\# is. This would at least leave me with something specific to think about.

Is it:

1) (SDF: Nov 5) M is weakly #-generated and for each \phi, if for each countable \alpha, \phi holds in an outer model of M which is generated by an \alpha-iterable presharp then \phi holds in an inner model of M.

2) (SDF: Nov 8) M is weakly #-generated and for all \phi: Suppose that whenever g is a generator for M (iterable at least to the height of M), \phi holds in an outer model of M with a generator which is at least as iterable as g. Then \phi holds in an inner model of M.

or something else? Or perhaps it is now a work in progress?

Regards,
Hugh

Re: Paper and slides on indefiniteness of CH

Dear Sy,

In an attempt to move things along, I would like to both summarize where we are
and sharpen what I was saying in my (first) message of Nov 8. My points were
possibly swamped by the technical questions I raised.

1) We began with Original-\textsf{IMH}^\#

This is the #-generated version. In an attempt to provide a V-logic formulation
you proposed a principle which I called (in my message of Nov 5):

2) New-\textsf{IMH}^\#

I raised the issue of consistency and you then came back on Nov 8 with the principle (*):

What this translates to for a countable model V is then this:

(*) V is weakly #-generated and for all \phi: Suppose that whenever g is a generator for V (iterable at least to the height of V), \phi holds in an outer model M of V with a generator which is at least as iterable as g. Then \phi holds in an inner model of V.

Let’s call this:

3) Revised-New-\textsf{IMH}^\#

(There are too many (*) principles)

But: Revised-New-\textsf{IMH}^\# is just the disjunction of Original-\textsf{IMH}^\# and New-\textsf{IMH}^\#.

So Revised-New-\textsf{IMH}^\# is consistent. But is Revised-New-\textsf{IMH}^\# really what you had in mind?

(The move from New-\textsf{IMH}^\# to the disjunction of Original-\textsf{IMH}^\# and New-\textsf{IMH}^\# seems a bit problematic to me.)

Assuming Revised-New-\textsf{IMH}^\# is what you have in mind, I will continue.

Thus, if New-\textsf{IMH}^\# is inconsistent then Revised-New-\textsf{IMH}^\# is just Original-\textsf{IMH}^\#.

So we are back to the consistency of New-\textsf{IMH}^\#.

The theorem (of my message of Nov 8 but slightly reformulated here)

Theorem. Assume PD. Then there is a countable ordinal \eta and a real x such that if M is a ctm such that
1) x is in M and M \vDash ``V = L[t]\text{ for a real }t''
2) M satisfies Revised-New-\textsf{IMH}^\# with parameter \eta
then M is #-generated (and so M satisfies Original-\textsf{IMH}^\#).

strongly suggests (but does not prove) that New-\textsf{IMH}^\# is inconsistent if one also requires that M be a model of “V = L[Y] for some set Y”.

Thus if New-\textsf{IMH}^\# is consistent it likely must involve weakly #-generated models M which cannot be coded by a real in an outer model which is #-generated.

So just as happened with \textsf{SIMH}, one again comes to an interesting CTM question whose resolution seems essential for further progress.

Here is an extreme version of the question for New-\textsf{IMH}^\#:

Question: Suppose M is weakly #-generated. Must there exist a weakly #-generated outer model of M which contains a set which is not set-generic over M?

[This question seems to have a positive solution. But building weakly #-generated models which cannot be coded by a real in an outer model which is weakly #-generated still seems quite difficult to me. Perhaps Sy has some insight here.]

Regards,
Hugh

Re: Paper and slides on indefiniteness of CH

Dear Sy,

Theorem. Assume PD. Then there is a countable ordinal \eta and a real x such that if M is a ctm such that

(1) x is in M and M \vDash ``V = L[t] \text{ for a real }t''

(2) M satisfies (*)(\eta) (this (*) but allowing \eta as a parameter),

then M is #-generated.

So, you still have not really addressed the ctm issue at all.

Here is the critical question:

Key Question: Can there exist a ctm M such that M satisfies (*) in the hyper-universe of L(M)[G], where G is L(M)-generic for collapsing all sets to be countable?

Or even:

Lesser Key Question: Suppose that M is a ctm which satisfies (*). Must M be #-generated?

Until one can show the answer is “yes” for the Key Question, there has been no genuine reduction of this version of \textsf{IMH}^\# to V-logic.

If the answer to the Lesser Key Question is “yes” then there is no possible reduction to V-logic.

The theorem stated above strongly suggests the answer to the Lesser Key Question is actually “yes” if one restricts to models satisfying “V = L[Y]\text{ for some set }Y”.

The point of course is that if M is a ctm which satisfies “V = L[Y]\text{ for some set }Y” and M witnesses (*) then M[g] witnesses (*), where g is an M-generic collapse of Y to \omega.

The simple consistency proofs of Original-\textsf{IMH}^\# all easily give models which satisfy “V = L[Y]\text{ for some set }Y”.

The problem

(*) Suppose \gamma^+ is not correctly computed by HOD for any infinite cardinal \gamma. Must weak square hold at some singular strong limit cardinal?

actually grew out of my recent AIM visit with Cummings, Magidor, Rinot and Sinapova. We showed that the successor of a singular strong limit \kappa of cofinality \omega can be large in HOD, and I started asking about Weak Square. It holds at \kappa in our model.

Assuming the Axiom \textsf{I}0^\# is consistent one gets a model of ZFC in which for some singular strong limit \kappa of uncountable cofinality, weak square fails at \kappa and \kappa^+ is not correctly computed by HOD.

So one cannot focus on cofinality \omega (unless Axiom \textsf{I}0^\# is inconsistent).

So born of this thread is the correct version of the problem:

Problem: Suppose \gamma is a singular strong limit cardinal of uncountable cofinality such that \gamma^+ is not correctly computed by HOD. Must weak square hold at \gamma?
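In symbols (a rough restatement, writing \square^*_\gamma for weak square at \gamma and (\gamma^+)^{\text{HOD}} for the successor of \gamma as computed in HOD): if \gamma is a singular strong limit cardinal with \text{cf}(\gamma) > \omega and (\gamma^+)^{\text{HOD}} < \gamma^+, must \square^*_\gamma hold?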

Aside: \textsf{I}0^\# asserts there is an elementary embedding j:L(V_{\lambda+1}^\#) \to L(V_{\lambda+1}^\#) with critical point below \lambda.

Regards, Hugh

Re: Paper and slides on indefiniteness of CH

Dear Hugh and Pen,

Hugh:

1. You proposed:

Coding Assumption: If M is weakly #-generated then M can be coded by a real in an outer model which is weakly #-generated.

I can’t see why this would be true. One needs \alpha-iterable presharps for each \alpha to witness the weak #-generation of M, and although each of these presharps can be preserved by some real coding M, there is no single real that does this for all \alpha simultaneously.

Instead, I realise that the theory-version of \textsf{IMH}^\# results in a statement for countable models which is a bit weaker than what I said. So I have to change the formulation of \textsf{IMH}^\# again! (Peter, before you go crazy, let me again emphasize that this is how the HP works: We make an investigation of maximality criteria and only through a lot of math and contemplation do we start to understand what is really going on. It requires time and patience.)

OK, the theory version would say: #-generation for V is consistent in V-logic (formulated in any lengthening of V) and for every \phi, the theory in V-logic which says that V is #-generated and \phi holds in an outer model M of V which is #-generated proves that \phi holds in an inner model of V.

What this translates to for a countable model V is then this:

(*) V is weakly #-generated and for all \phi: Suppose that whenever g is a generator for V (iterable at least to the height of V), \phi holds in an outer model M of V with a generator which is at least as iterable as g. Then \phi holds in an inner model of V.

For each \phi the above hypothesis implies that for each countable \alpha, \phi holds in an outer model of V with an \alpha-iterable generator. But if V is in fact fully #-generated then the hypothesis implies that \phi holds in an outer model of V which is also fully #-generated. So now we get consistency just like we did for the original oversimplified form of the \textsf{IMH}^\# for countable models.

2. You said:

I think if our evolving understanding of the large cardinal hierarchy rests primarily on the context of V = Ultimate L then very likely the rich generic extensions are not playing much of a role in understanding the large cardinal hierarchy … This for me would build the case for V = Ultimate L and against these rich extensions. It would then take something quite significant in the theory of the rich extensions to undermine that.

Sorry, I still don’t get it. Forcing extensions of L don’t play much of a role in understanding small large cardinals, do they? Yet if 0^\# provably does not exist I don’t see the argument for V = L; in fact I don’t even see the argument for CH. Now why wouldn’t you favour something like “V is a forcing extension of Ultimate L which satisfies MM” or something like that?

3. The problem

(*) Suppose \gamma^+ is not correctly computed by HOD for any infinite cardinal \gamma. Must weak square hold at some singular strong limit cardinal?

actually grew out of my recent AIM visit with Cummings, Magidor, Rinot and Sinapova. We showed that the successor of a singular strong limit \kappa of cofinality \omega can be large in HOD, and I started asking about Weak Square. It holds at \kappa in our model.

Pen:

You have caved in to Peter’s P’s and V’s (Predictions and Verifications)!

Peter wrote:

The notion of “good set theory” is too vague to do much work here. Different people have different views of what “good set theory” amounts to. There’s little intersubjective agreement. In my view, such a vague notion has no place in a foundational enterprise. The key notion is evidence, evidence of a form that people can agree on.

Then you said:

I probably should have stepped in at the time to remark that I’ve been using the term ‘good set theory’ for the set theory that enjoys the sort of evidence Peter is referring to here …

Now I want to defend “Defending”! There you describe “the objective reality that set-theoretic methods track” in terms of “depth, fruitfulness, effectiveness, importance and so on” (each with the prefix “mathematical”); there is no mention of P’s and V’s. And I think you got it dead right there, so why back off now? I want my former Pen back!

As I said, I do agree that P’s and V’s are of value; they make a “good set theory” better, but they are not the be-all and end-all of “good set theory”! For a set theory to be good there is no need for it to make “verified predictions”; look at Forcing Axioms. And do you really think that they solve the “consensus” problem? Isn’t there also a lack of consensus about what predictions a theory should make and how devastating it is to the theory if they are not verified? Do you really think that we working set-theorists are busily making “predictions”, hopefully before Inspector Koellner of the Set Theory Council makes his annual visit to see how we are doing with their “verification”?

Pen, I really think you’ve made a wrong turn here. You were basing your Thin Realism very sensibly on what set-theorists actually do in their practice, what they think is important, what will lead to exciting new developments. P’s and V’s are a side issue, sometimes of value but surely not central to the practice of “good set theory”.

There is another point. Wouldn’t you want a discussion of truth in set theory to be receptive to what is going on in the rest of mathematics? Everyone keeps ignoring this point in this thread, despite my repeated attempts to bring it forward. Does a functional analyst or algebraist care about Ultimate L or the HP? Of course not! They might laugh if they were to hear about the arguments that we have been having, which for them are just esoteric and quite irrelevant to mathematics as a whole. Forcing Axioms can at least lay a claim to being really useful both for set theory and other areas of mathematics; surely they have to be part of a theory of truth. Anyone who makes claims about set-theoretic truth, be it Ultimate L or HP or anything else, which ignores them is missing something important. And won’t it be embarrassing if, 100 years from now, set-theorists announce that they have finally figured out what the “correct axioms for set theory” are and mathematicians from other fields don’t care, as the “new and true axioms” are either quite useless for what they are doing or even conflict with the axioms that they would like to have for their own “good mathematics”?

Hence my conclusion is that the only sensible move for us to make is to gather evidence from all three sources: Set theory as an exciting and rapidly-developing branch of math and as a useful foundation for math, together with evidence we can draw from the concept of set itself via the maximality of V in height and width. None of these three types (which I have labelled as Types 1, 2 and 3, respectively) should be ignored. And we must also recognise that the procedure for uncovering evidence of these three types depends heavily on the type in question. “Defending” (even without P’s and V’s) teaches us how the process works in Type 1. For Type 2 we have to get into the trenches and see what the weapons being used in core mathematics are, and how we can help when independence infiltrates. For Type 3 it has to be what I am doing: an open-minded, sometimes sloppy and constantly changing (at least at the start) “shotgun approach” to investigating maximality criteria with the optimistic and determined aim of seeing a clear picture after a lot of very hard work is accomplished. The math is very challenging and as you have seen it is even hard to get things formulated properly. But I have lost patience with and will now ignore all complaints that “it cannot be done”, complaints based on nothing more than unjustified pessimism.

Yes, there is a lack of consensus regarding “good set theory”. But Peter is plain wrong to say that it has “no place in a foundational enterprise”. It has a very important place, but to reach a consensus about what the “correct” axioms of set theory should be, the evidence from “good set theory” must be augmented, not just by P’s and V’s but also by other forms of evidence coming from math outside of set theory and from the study of the maximality of V in height and width.

Pen, I would value your open-minded views on this. I hope that you are not also going to reduce “good set theory” to P’s and V’s and complain that the HP “cannot be done”.

Thanks, Sy

Re: Paper and slides on indefiniteness of CH

Dear Sy,

A. The principles in the hierarchy IMH(Inaccessible), IMH(Mahlo), IMH(the Erdos cardinal \kappa_\omega exists), etc. up to \textsf{IMH}^\# must be regarded as ad hoc unless one can explain the restriction to models that satisfy Inaccessibles, Mahlos, \kappa_\omega, etc. and #-generation, respectively.

One of my points was that #-generation is ad hoc (for many reasons,
one being that you use the # to get everything below it and then you
ignore the #). There has not been a discussion of the case for
#-generation in this thread. It would be good if you could give an
account of it and make a case for it on the basis of “length
maximality”. In particular, it would be good if you could explain how
it is a form of “reflection” that reaches the Erdos cardinal
\kappa_\omega.

B. It is true that we now know (after Hugh’s consistency proof of
\textsf{IMH}^\#) that \textsf{IMH}^\#(\omega_1) is stronger than \textsf{IMH}^\# in the sense that the large cardinals required to obtain its consistency are stronger. But in contrast to \textsf{IMH}^\# it has the drawback that it is not consistent with all large cardinals. Indeed it implies that there is a real x such that \omega_1=\omega_1^{L[x]} and (in your letter about Max) you have already rejected any principle with that implication. So I am not sure why you are bringing it up.

(The case of \textsf{IMH}^\#\text{(card-arith)} is more interesting. It has a standing chance, by your lights. But it is reasonable to conjecture (as Hugh did) that it implies GCH and if that conjecture is true then there is a real x such that \omega_1=\omega_1^{L[x]}, and, should that turn out to be true, you would reject it.)

2. What I called “Magidor’s embedding reflection” in fact appears in a paper by Victoria Marshall (JSL 54, No. 2). As it violates V = L it is not a form of height maximality (the problem is with the internal embeddings involved; if the embeddings are external then one gets a weak form of #-generation). Indeed Marshall Reflection does not appear to be a form of maximality in height or width at all.

No, Magidor Embedding Reflection appears in Magidor 1971, well before Marshall 1989. [Marshall discusses Kanamori and Magidor 1977, which contains an account of Magidor 1971.]

You say: “I don’t think that any arguments based on the vague notion of “maximality” provide us with much in the way of justification”. Wow! So you don’t think that inaccessibles are justified on the basis of reflection! Sounds like you’ve been talking to the radical Pen Maddy, who doesn’t believe in any form of intrinsic justification.

My comment was about the loose notion of “maximality” as you use it, not about “reflection”. You already know what I think about “reflection”.

3. Here’s the most remarkable part of your message. You say:

Different people have different views of what “good set theory” amounts to. There’s little intersubjective agreement. In my view, such a vague notion has no place in a foundational enterprise.

In this thread I have repeatedly and without objection taken Pen’s Thin Realism to be grounded on “good set theory” (or if looking beyond set theory, on “good mathematics”). So you have now rejected not only the HP, but also Thin Realism. My view is that Pen got it exactly right when it comes to evidence from the practice of set theory; one must only acknowledge that such evidence is limited by the lack of consensus on what “good set theory” means.

You are right to say that there is value to “predictions” and “verifications”. But these only serve to make a “good set theory” better. They don’t really change much, as even if a brand of “good set theory” fails to fulfill one of its “predictions”, it can still maintain its high status. Take the example of Forcing Axioms: They have been and always will be regarded as “good set theory”, even if the “prediction” that you attribute to them fails to materialise.

Peter, your unhesitating rejection of approaches to set-theoretic truth is not helpful. You faulted the HP for not being “well-grounded” as its grounding leans on a consensus regarding the “maximality of V in height and width”. Now you fault Thin Realism (TR) for not being “well-grounded” as its grounding leans on “good set theory”. There is an analogy between TR and the HP: Like Pen’s second philosopher, Max (the Maximality Man) is fascinated by the idea of maximality of V in height and width and he “sets out to discover what the world of maximality is like, the range of what there is to the notion and its various properties and behaviours”. In light of this analogy, it is reasonable that someone who likes Thin Realism would also like the HP and vice-versa. It seems that you reject both, yet fail to provide a good substitute. How can we possibly make progress in our understanding of set-theoretic truth with such skepticism? What I hear from Pen and Hugh is a “wait and see” attitude: they want to know what criteria and consensus come out of the HP. Yet you want to reject the approach out of hand. I don’t get it. Are you a pessimist at heart?

No, I am an unrepentant optimist. (More below.)

It seems to me that in light of your rejection of both TR and HP, the natural way for you to go is “radical skepticism”, which denies this whole discussion of set-theoretic truth in the first place. (Pen claimed to be a radical skeptic, but I don’t entirely believe it, as she does offer us Thin Realism.) Maybe Pen’s Arealism is your real cup of tea?

A. I don’t see how you got from my statement

Different people have different views of what “good set theory” amounts to. There’s little intersubjective agreement. In my view, such a vague notion has no place in a foundational enterprise.

to conclusions about my views on realism and truth (e.g. my “[rejection] … of Thin Realism” and my “unhesitating rejection of approaches to set-theoretic truth”)!

Let’s look at the rest of the passage:

“The key notion is evidence, evidence of a form that people can agree on. That is the virtue of actually making a prediction for which there is agreement (not necessarily universal — there are few things beyond the law of identity that everyone agrees on — but which is widespread) that if it is proved it will strengthen the case and if it is refuted it will weaken the case.”

I said nothing about realism or about truth. I said something only about the epistemic notion that is at play in a case (of the kind you call Type-1) for new axioms, namely, that it is not the notion of “good set theory” (a highly subjective, personal notion, where there is little agreement) but rather the notion of evidence (of a sort where there is agreement).

B. I was criticizing the employment of the notion of “good set theory” as you use it, not as Pen uses it.

As you use it, Jensen’s work on V = L is “good set theory” and the work on ZF+AD is “good set theory” (in fact, both are cases of “great set theory”). On that we can all agree. (One can multiply examples: Barwise on KP, etc.) But that has nothing to do with whether we should accept V = L or ZF+AD.

As Pen uses it, it involves evidence in the straightforward sense that I have been talking about. (Actually, as far as I can tell she doesn’t use the phrase in her work. E.g. it does not occur in her most detailed book on the subject, “Second Philosophy”. She has simply used it in this thread as a catch phrase for something she describes in more detail, something involving evidence.) Moreover, as paradigm examples of evidence she cites precisely the examples that John, Tony, Hugh, and I have given.

In summary, I was saying nothing about realism or truth; I was saying something about epistemology. I was saying: The notion of “good set theory” (as you use it) has no epistemic role to play in a case for new axioms. But the notion of evidence does.

So when you talk about Type 1 evidence, you shouldn’t be talking about “practice” and “good set theory”. The key epistemic notion is rather evidence of the kind that has been put forward, e.g. the kind that involves sustained prediction and confirmation.

[I don't pretend that the notion of evidence in mathematics (and especially in this region, where independence reigns) is a straightforward matter. The explication of this notion is one of the main things I have worked on. I talked about it in my tutorial when we were at Chiemsee. You already have the slides but I am attaching them here in case anyone else is interested. It contains both an outline of the epistemic framework and the case for \text{AD}^{L(\mathbb R)} in the context of this framework. A more detailed account is in the book I have been writing (for several years now...)]

[C. Aside: Although I said nothing about realism, since you attributed views on the matter to me, let me say something briefly: It is probably the most delicate notion in philosophy. I do not have a settled view. But I am certainly not a Robust Realist (as characterized by Pen) or a Super-Realist (as characterized by Tait), since each leads to what Tait calls "an alienation of truth from proof." The view I have defended (in "Truth in Mathematics: The Question of Pluralism") has much more in common with Thin Realism.]

So I was too honest; I should not have admitted to a radical form of multiversism (radical potentialism), as it is then easy to misunderstand the HP as you have. As far as the choice of maximality criteria goes, I can only repeat myself: Please be open-minded and do not prejudge the programme before seeing the criteria that it generates. You will see that our intuitions about maximality criteria are more robust than our intuitions about “good set theory”.

I have been focusing on CTM-Space because (a) you said quite clearly that you were a radical potentialist and (b) the principles you have put forward are formulated in that context. But now you have changed your program yet again. There have been a lot of changes.
(1) The original claim

I conjecture that CH is false as a consequence of my Strong Inner Model Hypothesis (i.e. Levy absoluteness with “cardinal-absolute parameters” for cardinal-preserving extensions) or one of its variants which is compatible with large cardinals existence. (Aug. 12)

has been updated to

With apologies to all, I want to say that I find this focus on CH to be exaggerated. I think it is hopeless to come to any kind of resolution of this problem, whereas I think there may be a much better chance with other axioms of set theory such as PD and large cardinals. (Oct. 25)

(2) The (strong) notion of “intrinsic justification” has been replaced by the (weak) notion of “intrinsic heuristic”.

(3) Now, the background picture of “radical potentialism” has been
replaced by “width-actualism + height potentialism”.

(4) Now, as a consequence of (3), the old principles \textsf{IMH}^\#, \textsf{IMH}^\#(\omega_1), \textsf{IMH}^\#\text{(card-arith)}, \textsf{SIMH}, \textsf{SIMH}^\#, etc. have been replaced by New-\textsf{IMH}^\#, New-\textsf{IMH}^\#(\omega_1), etc.

Am I optimistic about this program, program X? Well, it depends on
what X is. I have just been trying to understand X.

Now, in light of the changes (3) and (4), X has changed and we have to start over. We have a new philosophical picture and a whole new collection of mathematical principles. The first question is obviously: Are these new principles even consistent?

I am certainly optimistic about this: If under the scrutiny of people like Hugh and Pen you keep updating X, then X will get clearer and more tenable.

That, I think, is one of the great advantages of this forum. I doubt that a program has ever received such rapid philosophical and mathematical scrutiny. It would be good to do this for other programs, like the Ultimate-L program. (We have given that program some scrutiny. So far, there has been no need for mathematical-updating — there has been no need to modify the Ultimate-L Conjecture or the HOD-Conjecture.)

Best,
Peter

[Attachments: Chiemsee_1, Chiemsee_2]

Re: Paper and slides on indefiniteness of CH

Dear Pen and Hugh,

Pen:

Well I said that we covered everything, but I guess I was wrong! A new question for you popped into my head. You said:

The HP works quite differently. There the picture leads the way — the only legitimate evidence is Type 3. As we’ve determined over the months, in this case the picture involved
has to be shared, so that it won’t degenerate into ‘Sy’s truth’.

I just realised that I may have misunderstood this.

When it comes to Type 1 evidence (from the practice of set theory as mathematics) we don’t require that opinions about what is “good set theory” be shared (and “the picture” is indeed determined by “good set theory”). As Peter put it:

Different people have different views of what “good set theory” amounts to. There’s little intersubjective agreement. In my view, such a vague notion has no place in a foundational enterprise.

I disagree with the last sentence of this quote (I expect that you do too), but the fact remains that if we don’t require a consensus about “good set theory” then truth does break into (“degenerate into” is inappropriate) “Hugh’s truth”, “Saharon’s truth”, “Stevo’s truth”, “Ronald’s truth” and so on. (Note: I don’t mean to imply that Saharon or Stevo really have opinions about truth, here I only refer to what one reads off from their forms of “good set theory”.) I don’t think that’s bad and see no need for one form of “truth” that “swamps all the others”.

Now when it comes to the HP you insist that there is just one “shared picture”. What do you mean now by “picture”? Is it just the vague idea of a single V which is maximal in terms of its lengthenings and “thickenings”? If so, then I agree that this is the starting point of the HP and should be shared, independently of how the HP develops.

In my mail to you of 31 October I may have misinterpreted you by assuming that by “picture” you meant something sensitive to new developments in the programme. For example, when I moved from a short fat “picture” based on the IMH to a taller one based on the \textsf{IMH}^\#, I thought you were regarding that as a change in “picture”. Let me now assume that I made a mistake, i.e., that the “shared picture” to which you refer is just the vague idea of a single V which is maximal in terms of its lengthenings and “thickenings”.

Now I ask you this: Are you going further and insisting that there must be a consensus about what mathematical consequences this “shared picture” has? That will of course be necessary if the HP is to claim “derivable consequences” of the maximality of V in height and width, and that is indeed my aim with the HP. But what if my aim were more modest, simply to generate “evidence” for axioms based on maximality just as TR generates “evidence” for axioms based on “good set theory”; would you then agree that there is no need for a consensus, just as there is in fact no consensus regarding evidence based on “good set theory”?

In this way one could develop a good analogy between Thin Realism and a gentler form of the HP. In TR one investigates different forms of “good set theory” and as a consequence generates evidence for what is true in the resulting “pictures of V”. In the gentler form of the HP one investigates different forms of “maximality in height and width” to generate evidence for what is true in a “shared picture of V”. In neither case is there the presumption of a consensus concerning the evidence generated (in the original HP there is). This gentler HP would still be valuable, just as generating different forms of evidence in TR is valuable. What it generates will not be “intrinsic to the concept of set” as in the original ambitious form of the HP, but only “intrinsically-based evidence”, a form of evidence generated through an examination of the maximality of V in height and width, rather than by “good set theory”.

Hugh:

1. Your formulation of \textsf{IMH}^\# is almost correct:

M witnesses \textsf{IMH}^\# if

1) M is weakly #-generated.

2) If \phi holds in an outer model of M which is weakly
#-generated then \phi holds in an inner model of M.

But as we have to work with theories, 2) has to be: If for each countable \alpha, \phi holds in an outer model of M which is generated by an \alpha-iterable presharp then \phi holds in an inner model of M.
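In symbols, the resulting criterion for a countable M reads roughly as follows (just a sketch; “\alpha-generated” abbreviates “generated by an \alpha-iterable presharp”): M is weakly #-generated and for all \phi, (\forall \alpha < \omega_1)(\exists\text{ outer model } N \text{ of } M,\ \alpha\text{-generated})\, N \vDash \phi \Rightarrow (\exists\text{ inner model } W \text{ of } M)\, W \vDash \phi.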

2. Could you explain a bit more why V = Ultimate L is attractive? You said: “For me, the “validation” of V = Ultimate L will have to come from the insights V = Ultimate L gives for the hierarchy of large cardinals beyond supercompact.” But why would those insights disappear if V is, for example, some rich generic extension of Ultimate L? If Jack had proved that 0^\# does not exist I would not favour V = L but rather V = some rich outer model of L.

3. I told Pen that finding a GCH inner model over which V is generic is a leading open question in set theory. But you gave an argument suggesting that this has to be strengthened. Recently I gave a talk about HOD where I discussed the following four properties of an inner model M:

Genericity: V is a generic extension of M.

Weak Covering: For a proper class of cardinals \alpha, \alpha^+ = (\alpha^+)^M.

Rigidity: There is no nontrivial elementary embedding from M to M.

Large Cardinal Witnessing: Any large cardinal property witnessed in V is witnessed in M.

(When 0^\# does not exist, all of these hold for M = L except for Genericity: V need not be class-generic over L. As you know, there has been a lot of work on the case M = \text{HOD}.)

Now I’d like to offer Pen a new “leading open question”. (Of course I could offer the PCF Conjecture, but I would prefer to offer something closer to the discussion we have been having.) It would be great if you and I could agree on one. How about this: Is there an inner model M satisfying GCH together with the above four properties?
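In symbols (a rough summary; the genericity clause allows class forcing, as in the parenthetical above): the question asks for an inner model M \vDash \text{GCH} such that V = M[G] for some G generic over M, \{\alpha : (\alpha^+)^M = \alpha^+\} is a proper class, there is no nontrivial elementary j: M \to M, and every large cardinal property witnessed in V is witnessed in M.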

Thanks,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Sy,

On Nov 3, 2014, at 3:38 AM, Sy David Friedman wrote:

Hugh:

1. The only method I know to obtain the consistency of the maximality criterion I stated involves Prikry-like forcings, which add Weak Squares. Weak Squares contradict supercompactness.

So you think that if the Maximality Criterion holds then weak square holds at some singular strong limit?

3. I was postponing the discussion of the reduction of #-generation to ctm’s (countable transitive models) as long as possible as it is quite technical, but as you raised it again I’ll deal with it now. Recall that in the HP “thickenings” are dealt with via theories. So #-generation really means that for each Gödel lengthening L_\alpha(V) of V, the theory in L_\alpha(V) which expresses that V is generated by a presharp which is \alpha-iterable is consistent. Another way to say this is that for each \alpha, there is an \alpha-iterable presharp which generates V in a forcing extension of L(V) in which \alpha is made countable. For ctm’s this translates to: A ctm M is (weakly) #-generated if for each countable \alpha, M is generated by an \alpha-iterable presharp. This is weaker than the cleaner, original form of #-generation. With this change, one can run the LS argument and regard \textsf{IMH}^\# as a statement about ctm’s. In conclusion: You are right, we can’t apply LS to the raw version of \textsf{IMH}^\#, essentially because #-generation for a (real coding a) countable V is a \Sigma^1_3 property; but weak #-generation is \Pi^1_2 and this is the only change required.

Just to be clear, you are now proposing that \textsf{IMH}^\# is:

M witnesses \textsf{IMH}^\# if

1) M is weakly #-generated.

2) If \phi holds in an outer model of M which is weakly #-generated then \phi holds in an inner model of M.

Here: a ctm K is weakly #-generated if for each countable ordinal \alpha, there is an \alpha-iterable (N,U) whose \text{Ord}^K-iterate gives K.

Is this correct?

Regards, Hugh

Re: Paper and slides on indefiniteness of CH

Dear Peter and Sy,

I would like to add a short comment about the move to \textsf{IMH}^\#. This concerns to what extent it can be formulated without consulting the hyper-universe in an essential way (which is the case for \textsf{IMH}, since \textsf{IMH} can be so formulated). This issue has been raised several times in this thread.

Here is the relevant theorem which I think sharpens the issues.

Theorem. Suppose \textsf{PD} holds. Let X be the set of all ctm M such that M satisfies \textsf{IMH}^\#. Then X is not \Sigma_2-definable over the hyperuniverse (lightface).

Aside: X is always \Pi_2-definable modulo being #-generated, and being #-generated is \Sigma_2-definable. So X is always \Sigma_2\wedge \Pi_2-definable. If one restricts to M of the form L_{\alpha}[t] for some real t, then X is \Pi_2-definable but still not \Sigma_2-definable.

So it would seem that internalizing \textsf{IMH}^\# to M via some kind of vertical extension etc. might be problematic, or might lead to a refined version of \textsf{IMH}^\# which, like \textsf{IMH}, has strong anti-large cardinal consequences.

I am not sure what if anything to make of this, but I thought I should point it out.

Regards,
Hugh

Re: Paper and slides on indefiniteness of CH

Dear Sy,

I owe you a response to your other letters (things have been busy) but your letter below presents an opportunity to make some points now.

On Oct 31, 2014, at 12:20 PM, Sy David Friedman wrote:

Why? I think I made my position clear enough: I stated a consistent Maximality Criterion and based on my proof (with co-authors) of its consistency I have the impression that this Criterion contradicts supercompacts (not just extendibles). So that leads to a tentative rejection of supercompacts until the situation changes through further understanding of further Maximality Criteria. It’s analogous to what happened with the IMH: It led to a tentative rejection of inaccessibles, but then when Vertical Maximality was taken into account, it became obvious that the \textsf{IMH}^\# was a better criterion than the IMH and the \textsf{IMH}^\# is compatible with inaccessibles and more.

I don’t buy this. Let’s go back to IMH. It violates inaccessibles (in a dramatic fashion). One way to repair it would have been to simply restrict to models that have inaccessibles. That would have been pretty ad hoc. It is not what you did. What you did is even more ad hoc. You restricted to models that are #-generated. So let’s look at that.

We take the presentation of #’s in terms of \omega_1-iterable countable models of the form (M,U). We iterate the measure out to the height of the universe. Then we throw away the # (“kicking away the ladder once we have climbed it”) and imagine we are locked in the universe it generated. We restrict IMH to such universes. This gives \textsf{IMH}^\#.
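Spelled out a bit (a sketch in my notation, following the description just given): if (M,U) is \omega_1-iterable with iterates (M_\alpha, U_\alpha) and critical points \kappa_\alpha, the generated universe is, roughly, \bigcup_{\alpha \in \text{Ord}} (V_{\kappa_\alpha})^{M_\alpha}, and \textsf{IMH}^\# restricts attention to universes arising in this way.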

It is hardly surprising that the universes contain everything below the # (e.g. below 0^\# in the case of a countable transitive model of V=L) used to generate it and, given the trivial consistency proof of \textsf{IMH}^\#, it is hardly surprising that it is compatible with all large cardinal axioms (even choiceless large cardinal axioms). My point is that the maneuver is even more ad hoc than the maneuver of simply restricting to models with inaccessibles. [I realized that you try to give an "internal" account of all of this, motivating what one gets from the # without grabbing on to it. We could get into it. I will say now: I don't buy it.]

I also think that the Maximality Criterion I stated could be made much stronger, which I think is only possible if one denies the existence of supercompacts. (Just a conjecture, no theorem yet.)

First you erroneously thought that I wanted to reject PD and now you think I want to reject large cardinals! Hugh, please give me a chance here and don’t jump to quick conclusions; it will take time to understand Maximality well enough to see what large cardinal axioms it implies or tolerates. There is something robust going on; please give the HP time to do its work. I simply want to take an unbiased look at Maximality Criteria, that’s all. Indeed I would be quite happy to see a convincing Maximality Criterion that implies the existence of supercompacts (or better, extendibles), but I don’t know of one.

We do have “maximality” arguments that give supercompacts and extendibles, namely, the arguments put forth by Magidor and Bagaria. To be clear: I don’t think that such arguments provide us with much in the way of justification. On that we agree. But in my case the reason is that I don’t think that any arguments based on the vague notion of “maximality” provide us with much in the way of justification. With such a vague notion “anything goes”. The point here, however, is that you would have to argue that the “maximality” arguments you give concerning HOD (or whatever) and which may violate large cardinal axioms are more compelling than these other “maximality” arguments for large cardinals. I am dubious of the whole enterprise — either for or against — of basing a case on “maximality”. It is a pitting of one set of vague intuitions against another. The real case, in my view, comes from another direction entirely.

An entirely different issue is why supercompacts are necessary for “good set theory”. I think you addressed that in the second of your recent e-mails, but I haven’t had time to study that yet.

The notion of “good set theory” is too vague to do much work here. Different people have different views of what “good set theory” amounts to. There’s little intersubjective agreement. In my view, such a vague notion has no place in a foundational enterprise. The key notion is evidence, evidence of a form that people can agree on. That is the virtue of actually making a prediction for which there is agreement (not necessarily universal — there are few things beyond the law of identity that everyone agrees on — but which is widespread) that if it is proved it will strengthen the case and if it is refuted it will weaken the case.

Best,
Peter