
Re: Paper and slides on indefiniteness of CH

Dear Sy,

A. The principles in the hierarchy IMH(Inaccessible), IMH(Mahlo), IMH(the Erdos cardinal \kappa_\omega exists), etc. up to \textsf{IMH}^\# must be regarded as ad hoc unless one can explain the restriction to models that satisfy Inaccessibles, Mahlos, \kappa_\omega, etc. and #-generation, respectively.

One of my points was that #-generation is ad hoc (for many reasons,
one being that you use the # to get everything below it and then you
ignore the #). There has not been a discussion of the case for
#-generation in this thread. It would be good if you could give an
account of it and make a case for it on the basis of “length
maximality”. In particular, it would be good if you could explain how
it is a form of “reflection” that reaches the Erdos cardinal \kappa_\omega.
B. It is true that we now know (after Hugh’s consistency proof of
\textsf{IMH}^\#) that \textsf{IMH}^\#(\omega_1) is stronger than \textsf{IMH}^\# in the sense that the large cardinals required to obtain its consistency are stronger. But in contrast to \textsf{IMH}^\# it has the drawback that it is not consistent with all large cardinals. Indeed it implies that there is a real x such that \omega_1=\omega_1^{L[x]} and (in your letter about Max) you have already rejected any principle with that implication. So I am not sure why you are bringing it up.

(The case of \textsf{IMH}^\#\text{(card-arith)} is more interesting. It has a standing chance, by your lights. But it is reasonable to conjecture (as Hugh did) that it implies GCH and if that conjecture is true then there is a real x such that \omega_1=\omega_1^{L[x]}, and, should that turn out to be true, you would reject it.)

2. What I called “Magidor’s embedding reflection” in fact appears in a paper by Victoria Marshall (JSL 54, No. 2). As it violates V = L it is not a form of height maximality (the problem is with the internal embeddings involved; if the embeddings are external then one gets a weak form of #-generation). Indeed Marshall Reflection does not appear to be a form of maximality in height or width at all.

No, Magidor Embedding Reflection appears in Magidor 1971, well before Marshall 1989. [Marshall discusses Kanamori and Magidor 1977, which contains an account of Magidor 1971.]

You say: “I don’t think that any arguments based on the vague notion of “maximality” provide us with much in the way of justification”. Wow! So you don’t think that inaccessibles are justified on the basis of reflection! Sounds like you’ve been talking to the radical Pen Maddy, who doesn’t believe in any form of intrinsic justification.

My comment was about the loose notion of “maximality” as you use it, not about “reflection”. You already know what I think about “reflection”.

3. Here’s the most remarkable part of your message. You say:

Different people have different views of what “good set theory” amounts to. There’s little intersubjective agreement. In my view, such a vague notion has no place in a foundational enterprise.

In this thread I have repeatedly and without objection taken Pen’s Thin Realism to be grounded on “good set theory” (or if looking beyond set theory, on “good mathematics”). So you have now rejected not only the HP, but also Thin Realism. My view is that Pen got it exactly right when it comes to evidence from the practice of set theory; one must only acknowledge that such evidence is limited by the lack of consensus on what “good set theory” means.

You are right to say that there is value to “predictions” and “verifications”. But these only serve to make a “good set theory” better. They don’t really change much, as even if a brand of “good set theory” fails to fulfill one of its “predictions”, it can still maintain its high status. Take the example of Forcing Axioms: They have been and always will be regarded as “good set theory”, even if the “prediction” that you attribute to them fails to materialise.

Peter, your unhesitating rejection of approaches to set-theoretic truth is not helpful. You faulted the HP for not being “well-grounded” as its grounding leans on a consensus regarding the “maximality of V in height and width”. Now you fault Thin Realism (TR) for not being “well-grounded” as its grounding leans on “good set theory”. There is an analogy between TR and the HP: Like Pen’s second philosopher, Max (the Maximality Man) is fascinated by the idea of maximality of V in height and width and he “sets out to discover what the world of maximality is like, the range of what there is to the notion and its various properties and behaviours”. In light of this analogy, it is reasonable that someone who likes Thin Realism would also like the HP and vice versa. It seems that you reject both, yet fail to provide a good substitute. How can we possibly make progress in our understanding of set-theoretic truth with such skepticism? What I hear from Pen and Hugh is a “wait and see” attitude: they want to know what criteria and consensus come out of the HP. Yet you want to reject the approach out of hand. I don’t get it. Are you a pessimist at heart?

No, I am an unrepentant optimist. (More below.)

It seems to me that in light of your rejection of both TR and HP, the natural way for you to go is “radical skepticism”, which denies this whole discussion of set-theoretic truth in the first place. (Pen claimed to be a radical skeptic, but I don’t entirely believe it, as she does offer us Thin Realism.) Maybe Pen’s Arealism is your real cup of tea?

A. I don’t see how you got from my statement

Different people have different views of what “good set theory” amounts to. There’s little intersubjective agreement. In my view, such a vague notion has no place in a foundational enterprise.

to conclusions about my views on realism and truth (e.g. my “[rejection]
… of Thin Realism” and my “unhesitating rejection of approaches to
set-theoretic truth”)!

Let’s look at the rest of the passage:

“The key notion is evidence, evidence of a form that people can agree on. That is the virtue of actually making a prediction for which there is agreement (not necessarily universal — there are few things beyond the law of identity that everyone agrees on — but which is widespread) that if it is proved it will strengthen the case and if it is refuted it will weaken the case.”

I said nothing about realism or about truth. I said something only about the epistemic notion that is at play in a case (of the kind you call Type-1) for new axioms, namely, that it is not the notion of “good set theory” (a highly subjective, personal notion, where there is little agreement) but rather the notion of evidence (of a sort where there is agreement).

B. I was criticizing the employment of the notion of “good set theory” as you use it, not as Pen uses it.

As you use it Jensen’s work on V = L is “good set theory” and the work on ZF+AD is “good set theory” (in fact, both are cases of “great set theory”). On that we can all agree. (One can multiply examples:
Barwise on KP, etc.) But that has nothing to do with whether we should accept V = L or ZF+AD.

As Pen uses it, it involves evidence in the straightforward sense that I have been talking about. (Actually, as far as I can tell she doesn’t use the phrase in her work. E.g. it does not occur in her most detailed book on the subject, “Second Philosophy”. She has simply used it in this thread as a catch phrase for something she describes in more detail, something involving evidence.) Moreover, as paradigm examples of evidence she cites precisely the examples that John, Tony, Hugh, and I have given.

In summary, I was saying nothing about realism or truth; I was saying something about epistemology. I was saying: The notion of “good set theory” (as you use it) has no epistemic role to play in a case for new axioms. But the notion of evidence does.

So when you talk about Type 1 evidence, you shouldn’t be talking about “practice” and “good set theory”. The key epistemic notion is rather evidence of the kind that has been put forward, e.g. the kind that involves sustained prediction and confirmation.

[I don’t pretend that the notion of evidence in mathematics (and especially in this region, where independence reigns) is a straightforward matter. The explication of this notion is one of the main things I have worked on. I talked about it in my tutorial when we were at Chiemsee. You already have the slides but I am attaching them here in case anyone else is interested. They contain both an outline of the epistemic framework and the case for \text{AD}^{L(\mathbb R)} in the context of this framework. A more detailed account is in the book I have been writing (for several years now...)]

[C. Aside: Although I said nothing about realism, since you attributed views on the matter to me, let me say something briefly: It is probably the most delicate notion in philosophy. I do not have a settled view. But I am certainly not a Robust Realist (as characterized by Pen) or a Super-Realist (as characterized by Tait), since each leads to what Tait calls "an alienation of truth from proof." The view I have defended (in "Truth in Mathematics: The Question of Pluralism") has much more in common with Thin Realism.]

So I was too honest; I should not have admitted to a radical form of multiversism (radical potentialism), as it is then easy to misunderstand the HP as you have. As for the choice of maximality criteria, I can only repeat myself: Please be open-minded and do not prejudge the programme before seeing the criteria that it generates. You will see that our intuitions about maximality criteria are more robust than our intuitions about “good set theory”.

I have been focusing on CTM-Space because (a) you said quite clearly that you were a radical potentialist and (b) the principles you have put forward are formulated in that context. But now you have changed your program yet again. There have been a lot of changes.
(1) The original claim

I conjecture that CH is false as a consequence of my Strong Inner Model Hypothesis (i.e. Levy absoluteness with “cardinal-absolute parameters” for cardinal-preserving extensions) or one of its variants which is compatible with large cardinals existence. (Aug. 12)

has been updated to

With apologies to all, I want to say that I find this focus on CH to be exaggerated. I think it is hopeless to come to any kind of resolution of this problem, whereas I think there may be a much better chance with other axioms of set theory such as PD and large cardinals. (Oct. 25)

(2) The (strong) notion of “intrinsic justification” has been replaced by the (weak) notion of “intrinsic heuristic”.

(3) Now, the background picture of “radical potentialism” has been
replaced by “width-actualism + height potentialism”.

(4) Now, as a consequence of (3), the old principles \textsf{IMH}^\#, \textsf{IMH}^\#(\omega_1), \textsf{IMH}^\#\text{(card-arith)}, \textsf{SIMH}, \textsf{SIMH}^\#, etc. have been replaced by New-\textsf{IMH}^\#, New-\textsf{IMH}^\#(\omega_1), etc.

Am I optimistic about this program, program X? Well, it depends on
what X is. I have just been trying to understand X.

Now, in light of the changes (3) and (4), X has changed and we have to start over. We have a new philosophical picture and a whole new collection of mathematical principles. The first question is obviously: Are these new principles even consistent?

I am certainly optimistic about this: If under the scrutiny of people like Hugh and Pen you keep updating X, then X will get clearer and more tenable.

That, I think, is one of the great advantages of this forum. I doubt that a program has ever received such rapid philosophical and mathematical scrutiny. It would be good to do this for other programs, like the Ultimate-L program. (We have given that program some scrutiny. So far, there has been no need for mathematical-updating — there has been no need to modify the Ultimate-L Conjecture or the HOD-Conjecture.)


Chiemsee_1 Chiemsee_2

Re: Paper and slides on indefiniteness of CH

That doesn’t answer the question: If you assert that we will know the truth value of CH, how do you account for the fact that we have many different forms of set-theoretic practice? Do you really think that one form (Ultimate L perhaps) will “have virtues that will swamp all the others”, as Pen suggested?

Look, as I have stated repeatedly I see the subject of the model theory of ctm’s as separate from the study of V (but this is not to say that theorems in the mathematical study of ctm’s cannot have significant consequences for the study of V). I see nothing wrong with this view or the view that the practice you cite is really in the subject of ctm’s, however it is presented.

??? My question has nothing to do with ctm’s! It has nothing to do with the HP either (which I repeat can proceed perfectly well without discussing ctm’s anyway). I was referring to the many different forms of set-theoretic practice which disagree with each other on basic questions like CH. How do you assign a truth value to CH in light of this fact?

As for your second question: if the tests are passed, then yes, I do think that V = Ultimate-L will “swamp all the others”, but only in regard to a conception of V, not with regard to the mathematics of ctm’s. There are a number of conjectures already which I think would argue for this. But we shall see (hopefully sooner rather than later).

Here come the irrelevant ctm’s again. But you do say that V = Ultimate L will “swamp all the others”, so perhaps that is your answer to my question. Now do you really believe that? You suggested that Forcing Axioms can somehow be “part of the picture” even under V = Ultimate L, but that surely doesn’t mean that Forcing Axioms are false and Ultimate L is true.

Pen and Peter, can you please help here? Pen hit me very hard for developing what could be regarded as “Sy’s personal theory of truth” and it seems to me that we now have “Hugh’s personal theory of truth”, i.e., when Hugh develops a powerful piece of set theory he wants to declare it as “true” and wants us all to believe that. This goes far beyond Thin Realism, it goes to what Hugh calls a “conception of V” which far exceeds what you can read off from set-theoretic practice in its many different forms. Another example of this is Hugh’s claim that large cardinal existence is justified by large cardinal consistency; what notion of “truth” is this, if not “Hugh’s personal theory of truth”?

Pen’s Thin Realism provides a grounding for Type 1 truth. Mathematical practice outside of set theory provides a grounding for Type 2 truth. Our intuitions about the maximality of V in height and width provide a grounding for Type 3 truth. How is Hugh’s personal theory of truth grounded?

Look: There is a rich theory about the projective sets in the context of not-PD (you yourself have proved difficult theorems in this area). There are a number of questions which remain open about the projective sets in the context of not-PD which seem very interesting and extremely difficult. But this does not argue against PD. PD is true.

I want to know what you mean when you say “PD is true”. Is it true because you want it to be true? Is it true because ALL forms of good set theory imply PD? I have already challenged, in my view successfully, the claim that all sufficiently strong natural theories imply it; so what is the basis for saying that PD is true?

If the Ultimate-L Conjecture is false then for me it is “back to square one” and I have no idea about a resolution to CH.

I see no virtue in “back to square one” conjectures. In the HP the whole point is to put out maximality criteria and test them; it is foolish to make conjectures without doing the mathematics. Why should your programme be required to make “make or break” conjectures, and what is so attractive about that? As I understand the way Pen would put it, it all comes down to “good set theory” for your programme, and for that we need only see what comes out of your programme and not subject it to “death-defying” tests.

One more question at this point: Suppose that Jack had succeeded in proving in ZFC that 0^\# does not exist. Would you infer from this that V = L is true? On what grounds? Your V = Ultimate L programme (apologies if I misunderstand it) sounds very much like saying that Ultimate L is provably close to V so we might as well just take V = Ultimate L to be true. If I haven’t misunderstood then I find this very dubious indeed. As Pen would say, axioms which restrict set-existence are never a good idea.


Re: Paper and slides on indefiniteness of CH

Dear Pen,

On Fri, 24 Oct 2014, Penelope Maddy wrote:

Dear Sy,

We already know why CH doesn’t have a determinate truth value: it is because there are and always will be axioms which generate good set theory which imply CH and others which imply not-CH. Isn’t this clear when one looks at what’s been going on in set theory?

Well, I’m not sure it is clear that there will never be a theory whose virtues swamp the rest.

What evidence do you see for the existence of such a theory? All the evidence points to the contrary: The number of different valuable directions in set theory just keeps multiplying.

What I could imagine is that a particular truth value of CH will be required for an optimal foundation for mathematics (Type 2 evidence), but that is just wild speculation at this point. If that occurred, then maybe it would tip the balance between axioms which are valuable for the mathematical development of set theory (Type 1 evidence) yet give different verdicts on CH.

I am however inclined to think that Type 3 evidence (the HP) will not have as much influence as Type 2 evidence, simply because people regard the foundations of mathematics as more important than what can be derived from the maximality of the set concept.


Is CH one of the leading open questions of set theory?

No! The main reason is that, as Sol has pointed out, it is not a mathematical problem but a logical one. The leading open questions of set theory are mathematical.

I didn’t realize that you’d been convinced by Sol’s arguments here. My impression was that you thought it might be possible to resolve CH mathematically:

I started by telling Sol that the HP might give a definitive refutation of
CH! You told me that it’s OK to change my mind as long as I admit it, and I admit it now!

That’s why I posed the question to you as I did.

You misunderstood me. I didn’t need Sol to convince me that CH is a logical but not mathematical problem. The HP is a programme based on logic, so any conclusion about CH via the HP would be a logical solution, not a mathematical one.

With apologies to all, I want to say that I find this focus on CH to be exaggerated. I think it is hopeless to come to any kind of resolution of this problem, whereas I think there may be a much better chance with other axioms of set theory such as PD and large cardinals.

PD has a decent chance of winning the blessing of all 3 forms of evidence: It may be that you need it for the best set theory (as mathematics), if mathematicians ever start worrying about the higher projective levels they may appreciate having the Lebesgue measurability of the projective sets, and as Hugh and I mentioned, PD may be a consequence of maximality according to the HP. (Hugh, if you thought that I was claiming otherwise then you got confused; where did I say that?) Of course it is too soon to come to any definitive conclusion about PD, but there is a fighting chance for its truth.

I am less optimistic about large cardinals. This past week I was at AIM (American Institute of Mathematics) and based on the work we did I am willing to conjecture that the following is consistent:

(*) Every uncountable cardinal is inaccessible in HOD (the hereditarily ordinal definable sets).

(Cummings, Golshani and I already got the consistency of a weaker version of this: (\alpha^+)^{\text{HOD}} is less than \alpha^+ for all infinite cardinals \alpha.)
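In display form (my rendering; the notation is not in the original message), the principle and the weaker result already obtained are:

```latex
% (*): every uncountable cardinal is inaccessible in HOD
(*)\qquad \forall\kappa > \aleph_0\;\; \kappa \text{ is inaccessible in } \mathrm{HOD}.

% The weaker version whose consistency Cummings, Golshani and Friedman obtained:
\forall\alpha \geq \aleph_0\;\; (\alpha^+)^{\mathrm{HOD}} < \alpha^+.
```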

Now (*) is clearly a maximality principle, but the mathematical evidence is that it contradicts the existence of large cardinals. Indeed, the consistency proofs of these “V is fatter than HOD” principles break when you try to accommodate large cardinals, and indeed Hugh has plausible conjectures which imply that such obstacles are insurmountable.

So where this is pointing is that maximality denies large cardinal existence. This happened before with the IMH but that got fixed by marrying the IMH to a vertical maximality principle; I don’t see how large cardinals are going to escape from this latest dilemma.

All the best,

Re: Paper and slides on indefiniteness of CH

Dear Sy,

On Oct 15, 2014, at 3:34 AM, Sy David Friedman wrote:

Getting as far as the \textsf{SIMH}^\# is genuine progress. I provided a direction for further progress in my Maximality Protocol. Be patient, Hugh! Working out the mathematical features of Maximality will take time, and the programme has only just begun.

If things are this tentative then the conjecture “CH is false based on \textsf{SIMH}^\# or some variant thereof” seems a bit curious at best.

Look, we have a competitor principle: \textsf{IMH}^\#(\text{card-arith}), which we know is consistent. There is the possibility that \textsf{IMH}^\#(\text{card-arith}) implies the GCH. If \textsf{SIMH}^\# is inconsistent then this possibility certainly looks more likely.

As Pen has implied, it is good to have different programmes in set theory, whether they be motivated by sophisticated issues emanating from large cardinal theory and descriptive set theory, like your Ultimate-L programme, or by an “intrinsic heuristic” like the Maximality of V. Your programme is also extremely hard, but I would not fault it for that reason and I hope that it works out as hoped.

It is not whether the questions are hard which is the issue, it is whether at this stage the principles can even be discussed.

The mathematical implications of the Ultimate L Conjecture are clear and there are many. It is just the conjecture which is hard.

The mathematical implications of \textsf{SIMH}^\# are not clear at all beyond failures of the GCH which are trivial. So it is somewhat difficult to have a mathematical discussion about it.

Please re-read the Maximality Protocol: Height Maximality, Cardinal Maximality, Width Maximality, in that order. I gave precise suggestions for Height and Cardinal Maximality; Width Maximality is obviously trickier but at least I made a tentative proposal with the \textsf{SIMH}^\#. The problem with Strong-\textsf{SIMH}^\# was given toward the end of my Max story (Max isn’t happy with \omega_1 being captured by a single real).

So I assume you are referring to this:

The set-theorists tell him that maybe his mistake is to start talking about preserving cardinals before maximising the notion of cardinal itself. In other words, maybe he should require that \aleph_1 is not equal to the \aleph_1 of L[x] for any real x and more generally that for no cardinal \kappa is \kappa^+ equal to the \kappa^+ of L[A] when A is a subset of \kappa. In fact maybe he should go even further and require this with L[A] replaced by the much bigger model HOD_A of sets hereditarily-ordinal definable with the parameter A! [Sy's Maximality Protocol, Part 2]

Interesting. If for each uncountable cardinal \kappa and for each A \subset \kappa, (\kappa^+)^{\text{HOD}_A} is strictly less than \kappa^+ then PD holds.
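For readability, here is the quoted protocol and the observation in display form (my rendering; the original is purely in prose):

```latex
% Cardinal Maximality (Sy's Maximality Protocol, Part 2), in increasing strength:
\aleph_1 \neq \aleph_1^{L[x]} \text{ for every real } x;
% more generally:
(\kappa^+)^{L[A]} \neq \kappa^+ \text{ for every cardinal } \kappa \text{ and every } A \subseteq \kappa;
% and further, with L[A] replaced by HOD_A:
(\kappa^+)^{\mathrm{HOD}_A} \neq \kappa^+ \text{ for every cardinal } \kappa \text{ and every } A \subseteq \kappa.

% Hugh's observation (note that since HOD_A is an inner model,
% inequality is equivalent to strict inequality):
\Big(\forall\kappa > \aleph_0\ \forall A \subseteq \kappa\;\, (\kappa^+)^{\mathrm{HOD}_A} < \kappa^+\Big) \;\Longrightarrow\; \mathrm{PD}.
```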

About the \textsf{SIMH}^\# issue. Sy, you wrote on Sept 27 in your message to Pen:

The \textsf{SIMH}^\# is a “unification” of the \textsf{SIMH} and the \textsf{IMH}^\#. The SIMH is not too hard to explain, but the \textsf{IMH}^\# is much tougher. (I don’t imagine that you found my e-mail to Bob very enlightening!). Let me do the \textsf{SIMH} now, and if you haven’t heard enough I’ll give the \textsf{IMH}^\# a go in my next e-mail.


The acronym denotes the Strong Inner Model Hypothesis. For the sake of clarity, however, I’ll give you a simplified version that doesn’t quite imply the original IMH; please forgive that.

A cardinal is “absolute” if it is not only definable but is definable by the same formula in all cardinal-preserving extensions (“thickenings”) of V. For example, \aleph_1 is absolute because it is obviously “the least uncountable cardinal” in all cardinal-preserving extensions. The same applies to \aleph_2, \aleph_3, \cdots, \aleph_\omega, \dots for a long way. But notice that the cardinality of the continuum could fail to be absolute, as the size of the continuum could grow in a cardinal-preserving extension (this is what Cohen did when he used forcing to make CH false; Bob Solovay got the ultimate result).

Now recall that the IMH says that if a first-order sentence without parameters holds in an outer model (“thickening”) of V then it holds in an inner model (“thinning”) of V. The SIMH says that if a first-order sentence with absolute parameters holds in a cardinal-preserving outer model of V then it holds in an inner model of V (of course with the same parameters). The SIMH implies that CH is false: By Cohen’s result there is a cardinal-preserving outer model of V in which the continuum has size at least \aleph_2 of V, and therefore using the SIMH we conclude that there is an inner model of V in which the continuum has size at least \aleph_2 of V; it follows that also in V the continuum has size at least \aleph_2, i.e. CH is false. In fact by the same argument, the SIMH implies that the continuum is very, very large, bigger than \aleph_\alpha for any ordinal \alpha which is countable in Gödel’s universe L of constructible sets!
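The anti-CH argument just sketched can be laid out step by step (a sketch using the simplified SIMH stated above, with the absolute parameter \aleph_2; the official formulation may differ):

```latex
% Step 1 (Cohen, Solovay): there is a cardinal-preserving outer model V* with
V^* \models 2^{\aleph_0} \geq \aleph_2^V.
% Step 2 (SIMH, applied to this sentence with the absolute parameter \aleph_2):
% some inner model W of V satisfies the same sentence,
W \models 2^{\aleph_0} \geq \aleph_2^V.
% Step 3 (upward persistence): an injection of \aleph_2^V into the reals
% lying in W also lies in V, hence
V \models 2^{\aleph_0} \geq \aleph_2, \quad\text{i.e.}\quad \neg\mathrm{CH}.
```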

The SIMH# is the same as the SIMH except we require that V is #-generated (maximal in height) and instead of considering all cardinal-preserving outer models of V we only consider outer models of V which are #-generated (maximal in height). It is a “unification” of height maximality with a strong form of width maximality.

The attraction of the \textsf{SIMH}^\# is that it is a natural criterion that mirrors both height and width maximality and solves the continuum problem (negatively).

This to me seems to clearly indicate that at that time \textsf{SIMH}^\# was Strong-\textsf{SIMH}^\#. Sy, you wrote that “\textsf{SIMH}^\# is the same as \textsf{SIMH} except we require that V is #-generated…” I assumed the restriction to cardinal preserving #-generated outer models was just to simplify the discussion since otherwise \textsf{SIMH}^\# would not obviously imply \textsf{IMH}^\#.

So fine, my assumption was not correct or you have changed. Nothing wrong with changing things, it just complicates the discussion.

In any case, perhaps it would be more efficient to postpone our discussion of HP until HP has passed the embryonic stage and things are a bit more settled.


Re: Paper and slides on indefiniteness of CH

Dear Sy,

You proposed “strong unreachability” as intrinsically justified on the basis of the maximal iterative conception of set, writing: “It is compelling that unreachability (and strong unreachability) with reflection is faithful to maximality but these criteria have not yet been systematically investigated”. Now, we know a bit more, in light of Hugh’s result: If you accept strong unreachability then you have to accept either V=HOD or PD.

But you have rejected V = HOD on grounds of maximality, writing (in your 21.8.14 to Hugh):

[It] cannot be “true” because it violates the maximality of the universe of sets. Recall Sol’s comment about “sharpenings” of the set concept that violate what the set concept is supposed to be about. Maximality implies that there are sets (even reals) which are not ordinal-definable.

So what now? Do you accept PD? Do you claim that we now know that PD is intrinsically justified on the basis of the maximal iterative conception of set?

Or do you retract one of the above claims about what is intrinsically justified on the basis of the maximal iterative conception of set? And if “maximality” keeps suggesting principles that conflict and must be either revised or rejected, does that not indicate that we are not here dealing with a robust notion? Or do you see enough convergence and underlying unity to allay this worry? And, if so, can you, in hindsight, explain what went wrong in this case?


Re: Paper and slides on indefiniteness of CH

Dear Sy,

This concerns the (clarified) notion of being strongly unreachable from the outline on HP you circulated yesterday. (M is strongly unreachable if for all proper inner models N of M, for all sufficiently large M-cardinals \kappa, \kappa^+ as computed in N is strictly less than \kappa^+ as computed in M).
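The parenthetical definition, written out symbolically (my notation, not in the original message):

```latex
% M is strongly unreachable iff every proper inner model N of M computes
% successor cardinals incorrectly from some point on:
\forall N \text{ (proper inner model of } M)\ \exists\lambda\ \forall\kappa \geq \lambda\;\; (\kappa^+)^N < (\kappa^+)^M.
```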

Suppose V is strongly unreachable (and just relative to \Sigma_2-definable classes from parameters to make this explicitly first order). Then there are no measurable cardinals and either

  1. V = HOD and in fact V = K, (so GCH holds, and much more); or
  2. global-PD holds.

(K refers to a natural generalization of the usual core model—the union of “lower-parts of structures”—and this could be L of course. This K must be very L-like because of having no measurable cardinals. Global-PD is the assertion that PD holds in all set-generic extensions).

These are not mutually exclusive possibilities. But I actually do not know if (2) is possible. This leads to some rather subtle questions about correctness, for example suppose that M is countable and M is the minimum correct model of ZFC+global-PD. Must M be strongly unreachable? It seems likely that the answer should be yes, but this looks quite difficult (to me anyway).

(“Correct” here means that the set-generic extensions of M are projectively correct)


Re: Paper and slides on indefiniteness of CH

My current comments [on Woodin's August 25 email]:

I appreciate that you probably do not have the time to continue this — but perhaps some others will.

This thought-provoking reply is rather thin on an explanation of this “conception of V in which PD holds”. One thing I am not sure of is whether this involves a direct consideration and acceptance of PD on its own, or whether it instead involves a direct consideration and acceptance of the relevant large cardinals on their own that imply PD. Of course, some sort of “interactive mixture of the two” might make some sense, but obviously a clearer story would be just a direct consideration and acceptance of the relevant large cardinals, since they outright imply PD. Also, the phrase “conception of V in which PD holds” does suggest the “intrinsic” rather than the “extrinsic”. E.g., you didn’t say “PD holds in V because it is set theoretically useful”.


But as your work suggests this could well change. But even so, somehow a structural divergence alone does not seem enough (to declare that Con ZFC+PD is an indispensable part of that conception). Who knows, maybe there is an arithmetically based strongly motivated hierarchy of “large cardinals” and Con PD matches something there.

This refers to #82 on my website. For the implicitly \Pi^0_1 equivalent to Con(SRP), you can really “see” the large cardinal indiscernibility in action. For the explicitly \Pi^0_1 equivalent to Con(SRP), you can also see it, perhaps not as clearly, and also somewhat see it for the explicitly and implicitly finite \Pi^0_1 equivalent to Con(HUGE). Of course, my main goal was to just get something that I can get a large number of mathematicians to feel, and be shocked that they can’t get around it by cutting down generality. That serious philosophy has reentered mathematics in a way that simply cannot be (comfortably) removed by their usual intellectual removal methods. But I obviously will be able to kill several birds with one stone if I can show that you can merely strip down large cardinal hypotheses to the bare bones in the integers (rationals) by simply writing down a purely arithmetical picture that all mathematicians find “perfectly natural”. Of course, I am not there yet, but have a real good start.


Re: Paper and slides on indefiniteness of CH

I see the importance you are attaching to your Ultimate L Conjecture – particularly getting a proof “by the current scenarios of course”. Care to make rough probabilistic predictions on when you will prove it “by the current scenarios of course”? Until then, your point of view seems to be that statements like Con(HUGE) are wide open, and you currently are not willing to declare any confidence in them.

Fascinating as this is, I think people here might be even more interested in the implications Con(EFA) → Con(PA) → Con(Z) → Con(ZFC) → Con(ZFC + measurable) → Con(ZFC + PD). Maybe you can comment on at least one of these arrows? — or maybe Peter Koellner?

Based on all my experience to date I have a conception of V in which PD holds. Based on that conception it is impossible for PD to be inconsistent. But that conception may be a false conception.

If it is a false conception then I do not have a conception of V to fall back on except for the naive conception I had when I first was exposed to set theory. This is why for me, if ZFC+PD is inconsistent I think that ZFC is suspect. This is not to say that I cannot or will not rebuild my conception to that of V which satisfies ZFC etc. But I would need to understand how all the intuitions etc. that led me to a conception V with PD went so wrong.

My question to those who feel V is just the integers (and maybe just a bit more) is: How do they assess that Con ZFC+PD is relevant to their conception? The conceptions of the set-theoretic V with and without PD are very different conceptions with deep structural differences. I just do not see that happening yet to anywhere near the same degree in the case where the conception of V is just the integers. But as your work suggests this could well change.

But even so, somehow a structural divergence alone does not seem enough (to declare that Con ZFC+PD is an indispensable part of that conception).  Who knows, maybe there is an arithmetically based strongly motivated hierarchy of “large cardinals” and Con PD matches something there.

If one’s conception of V is the integers and one is never compelled to declare Con ZFC+PD as true based on whatever methodology one is using to refine that conception, then it seems to me that the only plausible conjecture one can make is that ZFC+PD is inconsistent. This was the cryptic point behind item (2) in my message which inspired your press releases.

To those still reading and to those who have participated, I would like to express my appreciation. But I really feel it is time to conclude my participation in this email thread. Classes are about to begin and I shall have to return to Cambridge shortly.


Re: Paper and slides on indefiniteness of CH

Dear Hugh,

I personally don’t feel the implication

If ZFC + PD is inconsistent then ZFC is inconsistent.

I have a couple of questions for you.

  1. I am under the impression that you are not committed to the consistency of ZFC + HUGE. Are you committed to the consistency of ZFC + LC roughly if and only if there is some good inner model theory for it? Are you also advocating a more general principle of this kind?
  2. Consider the statement: If ZFC is inconsistent then T is inconsistent. For how weak a T do you feel this? As an extreme, are you willing to take T down to EFA = exponential function arithmetic?
  3. As you can see from what I wrote about blurring pictures, my own view is one of relative clarity and therefore relative confidence. But you seem to have quite a different view, and I am wondering what you can say about your view (feelings, intuition)?