Tag Archives: Maximality

Re: Paper and slides on indefiniteness of CH

Looks like I have three roles here.

1. Very recently, some real new content that actually investigates some generally understandable aspects of “intrinsic maximality”. This has led rather nicely to legitimate foundational programs of a generally understandable nature, involving new kinds of investigations into decision procedures in set theory.

2. Attempts to direct the discussion into more productive topics. Recall the persistent subject line of this thread! The last time I tried this, I got a detailed response from Peter which I intended to answer, but gave item 1 above higher priority.

3. And finally, some generally understandable commentary on what is neither generally understandable nor productive of any tangible outcome.

This is a brief dose of 3.

QUOTE FROM BSL PAPER BY MR. ENERGY (jointly authored):

The approach that we present here shares many features, though not all, of Goedel’s program for new axioms. Let us briefly illustrate it. The Hyperuniverse Program is an attempt to clarify which first-order set-theoretic statements (beyond ZFC and its implications) are to be regarded as true in V, by creating a context in which different pictures of the set-theoretic universe can be compared. This context is the hyperuniverse, defined as the collection of all countable transitive models of ZFC.

DIGRESSION: The above seems to accept ZFC as “true in V”, but later discussions raise issues with this, especially with AxC.

So here we have the idiosyncratic propagandistic slogan “HP” for

*Hyperuniverse Program*

And we have the DEFINITION of the hyperuniverse as

**the collection of all countable transitive models of ZFC**

QUOTE FROM THIS MORNING BY MR. ENERGY:

That is why it is quite inappropriate, as you have done on numerous occasions, to refer to the HP as the study of ctm’s, as there is no need to consider ctm’s at all, and even if one does (by applying LS), the properties of ctm’s that result are very special indeed, far more special than what a full-blown theory of ctm’s would entail.

If it is supposed to be “inappropriate to refer to the HP as the study of ctm’s”, and “no need to consider ctm’s at all”, then why coin the term Hyperuniverse Program and then DEFINE the Hyperuniverse as the collection of all countable transitive models of ZFC???

THE SOLUTION (as I suggested many times)

Stop using HP and instead use CTMP = countable transitive model program. Only AFTER something foundationally convincing arises, AFTER working through all kinds of pitfalls carefully and objectively, consider trying to put forth and defend a foundational program.

In the meantime, go for a “full-blown theory of ctm’s” (language from Mr. Energy) so that you at least have something tangible to show for the effort if and when people reject your foundational program(s).

GENERALLY UNDERSTANDABLE AND VERY DIRECT PITFALLS IN USING INTRINSIC MAXIMALITY

It is “obvious” from intrinsic maximality that the GCH fails at all infinite cardinals because of “width considerations”.

This “refutes” the continuum hypothesis, since the failure of GCH at \aleph_0 means 2^{\aleph_0} > \aleph_1. This also “refutes” the existence of (\omega+2)-extendible cardinals, since they imply that the GCH holds at some infinite cardinals (Solovay).

QED

LESSONS TO BE LEARNED

You have to creatively analyze what is wrong with the above use of “intrinsic maximality”, and how it is fundamentally to be distinguished from other uses of “intrinsic maximality” that one is putting forward as legitimate. If this can be done in a suitably creative and convincing way, THEN you have at least the beginnings of a legitimate foundational program. WARNING: if the distinction is drawn too artificially, then you are not creating a legitimate foundational program.

Harvey

Re: Paper and slides on indefiniteness of CH

This is a continuation of my earlier message. Recall that this note has two titles. You get to pick the one you want.

REFUTATION OF THE CONTINUUM HYPOTHESIS AND EXTENDIBLE CARDINALS

THE PITFALLS OF CITING “INTRINSIC MAXIMALITY”

1. GENERAL STRATEGY.
2. THE LANGUAGE L_0.
3. STRONGER LANGUAGES.

1. GENERAL STRATEGY

Here we present a way of using the informal idea of “intrinsic maximality of the set theoretic universe” to do two things:

1. Refute the continuum hypothesis (using PD and less).
2. Refute the existence of extendible cardinals (in ZFC).

Quite a tall order!

Since I am not that comfortable with “intrinsic maximality”, I am happy to view this for the time being as an additional reason to be even less comfortable.

At least I will resist announcing that I have refuted both the continuum hypothesis and the existence of certain extensively studied large cardinals!

INFORMAL HYPOTHESIS. Let \phi(x,y,z) be a simple property of sets x,y,z. Suppose ZFC + “for all infinite x, there exist infinitely many distinct sets which are pairwise incomparable under \phi(x,y,z)” is consistent. Then for all infinite x, there exist infinitely many distinct sets which are pairwise incomparable under \phi(x,y,z).

Since we are going to be considering only very simple properties, we allow for more flexibility.

INFORMAL HYPOTHESIS. Let 0 \leq n,m \leq \omega. Let \phi(x,y,z) be a simple property of sets x,y,z. Suppose ZFC + “for all x with at least n elements, there exist m distinct sets which are pairwise incomparable under \phi(x,y,z)” is consistent. Then for all x with at least n elements, there exist at least m distinct sets which are pairwise incomparable under \phi(x,y,z).

We can view the above as reflecting the “intrinsic maximality of the set theoretic universe”.

We will see that this Informal Hypothesis leads to “refutations” of both the continuum hypothesis and the existence of certain large cardinals, even using very primitive \phi in very primitive set theoretic languages.

2. THE LANGUAGE L_0

L_0 has variables over sets, =, <, \leq^*, \cup. Here =, <, \leq^* are binary relation symbols, and \cup is a unary function symbol. x \leq^* y is interpreted as “there exists a function from x onto y”. \cup is the usual union operator, \cup x being the set of all elements of elements of x.

\text{MAX}(L_0,n,m). Let 0 \leq n,m \leq \omega. Let \phi(x,y,z) be the conjunction of finitely many formulas of L_0 in variables x,y,z. Suppose ZFC + “for all x with at least n elements, there exist m distinct sets which are pairwise incomparable under \phi(x,y,z)” is consistent. Then for all x with at least n elements, there exist at least m distinct sets which are pairwise incomparable under \phi(x,y,z).
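To be explicit, here is one natural reading of “pairwise incomparable”, offered only as a gloss since the phrase is not spelled out above: sets y_1,\dots,y_m are pairwise incomparable under \phi(x,\cdot,\cdot) when \neg\phi(x,y_i,y_j) holds for all i \neq j. With that reading and finite n,m, each instance of \text{MAX}(L_0,n,m) takes the form:

If ZFC + “(\forall x)(|x| \geq n \rightarrow (\exists \text{ distinct } y_1,\dots,y_m)\ \bigwedge_{i \neq j} \neg\phi(x,y_i,y_j))” is consistent, then (\forall x)(|x| \geq n \rightarrow (\exists \text{ distinct } y_1,\dots,y_m)\ \bigwedge_{i \neq j} \neg\phi(x,y_i,y_j)).

The cases n = \omega and m = \omega are read as “x is infinite” and “there exist infinitely many such sets”, respectively.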

THEOREM 2.1. ZFC + \text{MAX}(L_0,\omega,\omega) proves that there is no (\omega+2)-extendible cardinal.

More generally, we have

THEOREM 2.2. Let 2 < \log(m)+1 < n \leq \omega.

i. ZFC + \text{MAX}(L_0,n,m) proves that there is no (\omega+2)-extendible cardinal. Here \log(\omega) = \omega.
ii. ZFC + PD + \text{MAX}(L_0,n,m) proves that the GCH fails at all infinite cardinals. In particular, it refutes the continuum hypothesis.
iii. As in ii, with PD replaced by higher order measurable cardinals in the sense of Mitchell.

We are morally certain that we can easily get a complete understanding of the meaning of the sentences in quotes that arise in \text{MAX}(L_0,n,m).

Write \text{MAX}(L_0) for

“For all 0 \leq n,m \leq \omega, \text{MAX}(L_0,n,m)”.

Using such a complete understanding we should be able to establish that ZFC + \text{MAX}(L_0) is a “good theory”. E.g., such things as

  1. ZFC + PD + \text{MAX}(L_0) is equiconsistent with ZFC + PD.
  2. ZFC + PD + \text{MAX}(L_0) is conservative over ZFC + PD for sentences of second order arithmetic.
  3. ZFC + PD + \text{MAX}(L_0) + “there is a proper class of measurable cardinals” is also conservative over ZFC + PD for sentences of second order arithmetic.

We will revisit this development after we have gained that complete understanding. Then we will go beyond finite conjunctions of atomic formulas in L_0.

The key technical ingredients in this development are the facts that

1. “GCH fails at all infinite cardinals” is incompatible with (\omega+2)-extendible cardinals (Solovay).
2. “GCH fails at all infinite cardinals” is demonstrably consistent using much weaker large cardinals, or using just PD (Foreman/Woodin).

Harvey

Re: Paper and slides on indefiniteness of CH

From Mr. Energy:

Why? I think I made my position clear enough: I stated a consistent Maximality Criterion and based on my proof (with co-authors) of its consistency I have the impression that this Criterion contradicts supercompacts (not just extendibles). So that leads to a tentative rejection of supercompacts until the situation changes through further understanding of further Maximality Criteria. It’s analogous to what happened with the IMH: It led to a tentative rejection of inaccessibles, but then when Vertical Maximality was taken into account, it became obvious that the IMH# was a better criterion than the IMH and the IMH# is compatible with inaccessibles and more.

I also think that the Maximality Criterion I stated could be made much stronger, which I think is only possible if one denies the existence of supercompacts. (Just a conjecture, no theorem yet.)

Looks like I have been nominated long ago (smile) to try to turn this controversy into something readily digestible – and interesting – for everybody.

A main motivator for me in this arguably unproductive traffic is to underscore the great value of real time interaction. Bad ideas can be outed in real time! Bad ideas can be reformulated as reasonable ideas in real time!! Good new ideas can emerge in real time!!! What more can you want? Back to this situation.

This thread is now showing even more clearly the pitfalls of using unanalyzed flowery language like “Maximality Criterion” to try to draw striking conclusions (technical advances not yet achieved, but perhaps expected). Nobody would bother to complain if the striking conclusions were compatible with existing well accepted orthodoxy.

So what is really being said here is something like this:

“My (Mr. Energy) fundamental thinking about the set theoretic universe is so wise that under anticipated technical advances, it is sufficient to overthrow long established and generally accepted orthodoxy”.

What is so unusual here is that this unwarranted arrogance is so prominently displayed in a highly public environment with several of the most well known scholars in relevant areas actively engaged!

What was life like before email? We see highly problematic ideas being unravelled in real time.

What would a rational person be putting forward? Instead of the arrogant

*Maximality Criteria tells us that HOD is much smaller than V and this (is probably going to be shown in the realistic future to) refutes certain large cardinal hypotheses*

the entirely reasonable

**Certain large cardinal hypotheses (are probably going to be shown in the realistic future to) imply that HOD has similarities to V. Such similarities cannot be proved or refuted in ZFC. This refutes certain kinds of formulations of “Maximality” in higher set theory, under relevant large cardinal hypotheses.**

and then remark something like this:

***The notion “intrinsic maximality of the set theoretic universe” is in great need of clear elucidation. Many formulations lead to inconsistencies or refutations of certain large cardinal hypotheses. We hope to find a philosophically coherent analysis of it from first principles that may serve as a guide to the appropriateness of many set theoretic hypotheses. In particular, the use of HOD in formulations can be criticized, and raises a number of unresolved issues.***

Again, what was life like before email? We might have been seeing students and postdocs running around Europe openly claiming to refute various large cardinal hypotheses!

Harvey

Re: Paper and slides on indefiniteness of CH

Dear Sy,

I owe you a response to your other letters (things have been busy) but your letter below presents an opportunity to make some points now.

On Oct 31, 2014, at 12:20 PM, Sy David Friedman wrote:

Why? I think I made my position clear enough: I stated a consistent Maximality Criterion and based on my proof (with co-authors) of its consistency I have the impression that this Criterion contradicts supercompacts (not just extendibles). So that leads to a tentative rejection of supercompacts until the situation changes through further understanding of further Maximality Criteria. It’s analogous to what happened with the IMH: It led to a tentative rejection of inaccessibles, but then when Vertical Maximality was taken into account, it became obvious that the IMH# was a better criterion than the IMH and the \textsf{IMH}^\# is compatible with inaccessibles and more.

I don’t buy this. Let’s go back to IMH. It violates inaccessibles (in a dramatic fashion). One way to repair it would have been to simply restrict to models that have inaccessibles. That would have been pretty ad hoc. It is not what you did. What you did is even more ad hoc. You restricted to models that are #-generated. So let’s look at that.

We take the presentation of #’s in terms of \omega_1-iterable countable models of the form (M,U). We iterate the measure out to the height of the universe. Then we throw away the # (“kicking away the ladder once we have climbed it”) and imagine we are locked in the universe it generated. We restrict IMH to such universes. This gives \textsf{IMH}^\#.
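To spell the maneuver out a bit (a rough sketch of #-generation as it appears in the HP literature; details of the presentation vary): a # is an \omega_1-iterable pair (M,U), with U an M-measure on the largest cardinal of M. Iterating U gives models (M_0,U_0) \to (M_1,U_1) \to \cdots with critical points \kappa_0 < \kappa_1 < \cdots, and a universe V^* is #-generated by (M,U) when

V^* = \bigcup_{\alpha} (V_{\kappa_\alpha})^{M_\alpha},

the union of the “lower parts” of the iterates, taken out to the height of V^*. \textsf{IMH}^\# is then the IMH with both the universe and the outer models it quantifies over required to be #-generated in this sense.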

It is hardly surprising that the universes contain everything below the # (e.g. below 0^\# in the case of a countable transitive model of V=L) used to generate it and, given the trivial consistency proof of \textsf{IMH}^\# it is hardly surprising that it is compatible with all large cardinal axioms (even choicless large cardinal axioms). My point is that the maneuver is even more ad hoc than the maneuver of simply restricting to models with inaccessibles. [I realized that you try to give an "internal" account of all of this, motivating what one gets from the # without grabbing on to it. We could get into it. I will say now: I don't buy it.]

I also think that the Maximality Criterion I stated could be made much stronger, which I think is only possible if one denies the existence of supercompacts. (Just a conjecture, no theorem yet.)

First you erroneously thought that I wanted to reject PD and now you think I want to reject large cardinals! Hugh, please give me a chance here and don’t jump to quick conclusions; it will take time to understand Maximality well enough to see what large cardinal axioms it implies or tolerates. There is something robust going on, please give the HP time to do its work. I simply want to take an unbiased look at Maximality Criteria, that’s all. Indeed I would be quite happy to see a convincing Maximality Criterion that implies the existence of supercompacts (or better, extendibles), but I don’t know of one.

We do have “maximality” arguments that give supercompacts and extendibles, namely, the arguments put forth by Magidor and Bagaria. To be clear: I don’t think that such arguments provide us with much in the way of justification. On that we agree. But in my case the reason is that I don’t think that any arguments based on the vague notion of “maximality” provide us with much in the way of justification. With such a vague notion “anything goes”. The point here, however, is that you would have to argue that the “maximality” arguments you give concerning HOD (or whatever) and which may violate large cardinal axioms are more compelling than these other “maximality” arguments for large cardinals. I am dubious of the whole enterprise — either for or against — of basing a case on “maximality”. It is a pitting of one set of vague intuitions against another. The real case, in my view, comes from another direction entirely.

An entirely different issue is why supercompacts are necessary for “good set theory”. I think you addressed that in the second of your recent e-mails, but I haven’t had time to study that yet.

The notion of “good set theory” is too vague to do much work here. Different people have different views of what “good set theory” amounts to. There’s little intersubjective agreement. In my view, such a vague notion has no place in a foundational enterprise. The key notion is evidence, evidence of a form that people can agree on. That is the virtue of actually making a prediction for which there is agreement (not necessarily universal — there are few things beyond the law of identity that everyone agrees on — but which is widespread) that if it is proved it will strengthen the case and if it is refuted it will weaken the case.

Best,
Peter

Re: Paper and slides on indefiniteness of CH

Dear Hugh,

On Fri, 31 Oct 2014, W Hugh Woodin wrote:

Ok we keep going.

Why? I think I made my position clear enough: I stated a consistent Maximality Criterion and based on my proof (with co-authors) of its consistency I have the impression that this Criterion contradicts supercompacts (not just extendibles). So that leads to a tentative rejection of supercompacts until the situation changes through further understanding of further Maximality Criteria. It’s analogous to what happened with the IMH: It led to a tentative rejection of inaccessibles, but then when Vertical Maximality was taken into account, it became obvious that the \textsf{IMH}^\# was a better criterion than the IMH and the \textsf{IMH}^\# is compatible with inaccessibles and more.

I also think that the Maximality Criterion I stated could be made much stronger, which I think is only possible if one denies the existence of supercompacts. (Just a conjecture, no theorem yet.)

First you erroneously thought that I wanted to reject PD and now you think I want to reject large cardinals! Hugh, please give me a chance here and don’t jump to quick conclusions; it will take time to understand Maximality well enough to see what large cardinal axioms it implies or tolerates. There is something robust going on, please give the HP time to do its work. I simply want to take an unbiased look at Maximality Criteria, that’s all. Indeed I would be quite happy to see a convincing Maximality Criterion that implies the existence of supercompacts (or better, extendibles), but I don’t know of one.

An entirely different issue is why supercompacts are necessary for “good set theory”. I think you addressed that in the second of your recent e-mails, but I haven’t had time to study that yet.

To repeat: I am not out to kill any particular axiom of set theory! I just want to take an unbiased look at what comes out of Maximality Criteria. It is far too early to conclude from the HP that extendibles don’t exist.

Thanks,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Peter,

I think we should all be grateful to you for this eloquent description of how we gather evidence for new axioms based on the development of set theory. The first two examples (and possibly the third) that you present are beautiful cases of how a body of ideas converges on the formulation of a principle or principles with great explanatory power for topics which lie at the heart of the subject. Surely we have to congratulate those who have facilitated the results on determinacy and forcing axioms (and perhaps in time Hugh for his work on Ultimate L) for making this possible. Further, the examples mentioned meet your high standard for any such programme, which is that it “makes predictions which are later verified”.

I cannot imagine a more powerful statement of how Type 1 evidence for the truth of new axioms works, where again by “Type 1” I refer to set theory’s role as a field of mathematics and therefore by “Type 1 evidence” I mean evidence for the truth of a new axiom based on its importance for generating “good set theory”, in the sense that Pen has repeatedly emphasized.

But I do think that what you present is only part of the picture. Set theory is surely a field of mathematics that has its own key questions and as it evolves new ideas are introduced which clarify those questions. But surely other areas of mathematics share that feature, even if they are free of questions of independence; they can have analogous debates about which developments are most important for the field, just as in set theory. So what you describe could be analogously described in other areas of mathematics, where “predictions” are made about how certain approaches will lead to the solution of central open problems. Briefly put: In your description of programmes for set theory, you treat set theory in the same way as one would treat any field of mathematics.

But set theory is much more than that. Before I discuss this key point, let me interrupt myself with a brief reference to where this whole e-mail thread began, Sol’s comments about the indefiniteness of CH. As I have emphasized, there is no evidence that the pursuit of programmes like the ones you describe will agree on CH. Look at your 3 examples: The first has no opinion on CH, the second denies it and the third confirms it! I see set theory as a rich and developing subject, constantly transforming itself with new ideas, and as a result of that I think it unreasonable based on past and current evidence to think that CH will be decided by the Type 1 evidence that you describe. Pen’s suggestion that perhaps there will be a theory “whose virtues swamp the rest” is wishful thinking. Thus if we take only Type 1 evidence for the truth of new axioms into account (Sol rightly pointed out the misuse of the term “axiom” and Shelah rightly suggested the better term “semi-axiom”), we will not resolve CH and I expect that we won’t resolve much at all. Something more is needed if your goal is to say something about truth in set theory. (Of course it is fine to not have that goal, and only a handful of set-theorists have that goal.)

OK, back to the point that set theory is more than just a branch of mathematics. Set theory also has a role as a foundation for mathematics (Type 2). Can we really assume that Type 1 axioms like the ones you suggest in your three examples are the optimal ones for the role of set theory as a foundation? Do we really have a clear understanding of what axioms are optimal in this sense? I think it is clear that we do not.

The preliminary evidence would suggest that of the three examples you mention, the first and third are quite irrelevant to mathematics outside of set theory and the second (Forcing Axioms) is of great value to mathematics outside of set theory. Should we really ignore this in a discussion of set-theoretic truth? I mean set theory is a great branch of mathematics, rife with ideas, but can we really assert the “truth” of an axiom which serves set theory’s needs when other axioms that contradict it do a better job in providing other areas of mathematics what they need?

There is even more to the picture, beyond set theory as a branch of or a foundation for math. I am referring to its Type 3 role, as a study of the concept of set. There is widespread agreement that this concept entails the maximality of V in height and width. The challenge is to explain this feature in mathematical terms, the goal of the HP. There is no a priori reason whatsoever to assume that the mathematical consequences of maximality in this sense will conform to axioms which best serve the Type 1 or Type 2 needs of set theory (as a branch of or foundation for mathematics). Moreover, to pursue this programme requires a very different approach than what is familiar to the Type 1 set-theorist, perfectly described in your previous e-mail. I am asking you to please be open-minded about this, because the standards you set and the assumptions that you make when pursuing new axioms for “good set theory” do not apply when pursuing consequences of maximality in the HP. The HP is a very different kind of programme.

To illustrate this, let me begin with two quotes which illustrate the difference and set the tone for the HP:

I said to Hugh:

The basic problem with what you are saying is that you are letting set-theoretic practice dictate the investigation of set-theoretic truth!

In other words, my starting point is not what facilitates the “best set theory”, but what one can understand about maximality of V in height and width.

On a recent occasion, Hugh said to me:

[Yet] you propose to deduce the non existence of large cardinals at some level based on maximality considerations. I would do the reverse, revise maximality.

This second quote precisely indicates the difference in our points of view. The HP is intended to be an unbiased analysis of the maximality of V in height and width, grounded in our intuitions about this feature and limited by what is possible mathematically. These intuitions are indeed fairly robust, surely more so than our judgments about what is “good set theory”. I know of no persuasive argument that large cardinal existence (beyond what is compatible with V = L) follows from the maximality of V in height and width. Indeed in the literature authors such as Gödel had doubts about this, whereas they have felt that inaccessible cardinals are derivable from maximality in height.

So the only reasonable interpretation of Hugh’s comment is that he feels that LC existence is necessary for “good set theory” and that such Type 1 evidence should override any investigation of the maximality of V in height and width. Pen and I discussed this (in what seems like) ages ago in the terminology of “veto power” and I came to the conclusion that it should not be the intention of the HP to have its choice of criteria dictated by what is good for the practice of set theory as mathematics.

To repeat, the HP works like this: We have an intuition about maximality (of V in height and width) which we can test out with various criteria. It is a lengthy process by which we formulate, investigate and compare different criteria. Sometimes we “unify” or “synthesise” two criteria into one, resulting in a new criterion that based on our intuitions about maximality does a better job of expressing this feature than did the individual criteria which were unified. And sometimes our criteria conflict with reality, namely they are shown to be inconsistent in ZFC. Here are some examples:

Synthesis: The IMH is the most obvious criterion for expressing the maximality of V in width. #-generation is the strongest criterion for expressing the maximality of V in height. If we unify these we get IMH#, which is consistent but behaves differently than either the IMH alone or #-generation alone. Our intuition says that the IMH# better expresses maximality than either the IMH alone or #-generation alone.

Inconsistency (examples with HOD): We can consistently assert the maximality principle V \neq \text{HOD}. A natural strengthening is that \alpha^+ of HOD is less than \alpha^+ for all infinite cardinals \alpha. Still consistent. But then we go to the further natural strengthening \alpha^+ of HOD_x is less than \alpha^+ for all subsets x of \alpha (for all infinite cardinals \alpha). This is inconsistent. So we back off to the latter statement, restricted to \alpha of cofinality \omega. Now it is consistent for many such \alpha, not yet known to be consistent for all such \alpha. We continue to explore the limits of maximality in this way, in light of what is consistent with ZFC. A similar issue arises with the statement that \alpha is inaccessible in HOD for all infinite regular \alpha, which is not yet known to be consistent (my belief is that it is).
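Displayed for clarity (the content is exactly as just stated; only the notation is made explicit):

V \neq \text{HOD} — consistent.
(\alpha^+)^{\text{HOD}} < \alpha^+ for all infinite cardinals \alpha — consistent.
(\alpha^+)^{\text{HOD}_x} < \alpha^+ for all infinite cardinals \alpha and all x \subseteq \alpha — inconsistent.
(\alpha^+)^{\text{HOD}_x} < \alpha^+ for all x \subseteq \alpha, for \alpha of cofinality \omega — consistent for many such \alpha, not yet known to be consistent for all.
\alpha is inaccessible in \text{HOD} for all infinite regular \alpha — not yet known to be consistent.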

The process continues. There is a constant interplay between criteria suggested by our maximality intuitions and the mathematics behind these criteria. Obviously we have to modify what we are doing as we learn more of the mathematics. Indeed, as you pointed out in your more recent e-mail, there are maximality criteria which contradict ZFC; this has been obvious for a long time, in light of Vopenka’s theorem.

It may be too much to ask that your program at this stage make such predictions. But I hope that it aspires to that. For if it does not then, as I mentioned earlier, one has the suspicion that it is infinitely revisable and “not even wrong”.

Once again, the aim of the programme is to understand the consequences of the maximality of V in height and width. Your criterion of “making predictions” may be fine for your Type 1 programmes, which are grounded by nothing more than “good set theory”, but it is not appropriate for the HP. That is because the HP is grounded by an intrinsic feature of the set-concept, maximality, which will take a long time to understand. I see no basis for your suggestion that the programme is “infinitely revisable”, it simply requires a huge amount of mathematics to carry out. Already the synthesis of the IMH with #-generation is considerable progress, although to get a deeper understanding we’ll definitely have to deal with the \textsf{SIMH}^\# and HOD-maximality.

If you insist on a “prediction” the best I can do is to say that the way things look now, at this very preliminary stage of the programme, I would guess that both not-CH and the nonexistence of supercompacts will come out. But that can’t be more than a guess at this point.

Now I ask you this: Suppose we have two Type 1 axioms, like the ones in your examples. Suppose that one is better than the other for Type 2 reasons, i.e., is more effective for mathematics outside of set theory. Does that tip the balance between those two Type 1 axioms in terms of which is closer to the truth? And I ask the same question for Type 3: Could you imagine joining forces and giving priority to axioms that both serve the needs of set theory as mathematics and are derivable from the maximality of V in height and width?

One additional worry is the vagueness of the idea of the “‘maximal’ iterative conception of set”. If there were a lot of convergence in what was being mined from this concept then one might think that it was clear after all. But I have not seen a lot of convergence. Moreover, while you first claimed to be getting “intrinsic justifications” (an epistemically secure sort of thing) now you are claiming to arrive only at “intrinsic heuristics” (a rather loose sort of thing). To be sure, a vague notion can lead to motivations that lead to a great deal of wonderful and intriguing mathematics. And this has clearly happened in your work. But to get more than interesting mathematical results — to make a case for new axioms — at some stage one will have to do more than generate suggestions — one will have to start producing propositions which if proved would support the program and if refuted would weaken the program.

I imagine you agree and that that is the position that you ultimately want to be in.

No, the demands you want to make of a programme are appropriate for finding the right axioms for “good set theory” but not for an analysis of the maximality of V in height and width. For the latter it is more than sufficient to analyse the natural candidates for maximality criteria provided by our intuitions and achieve a synthesis. I predict that this will happen with striking consequences, but those consequences cannot be predicted without a lot of hard work.

Thanks,
Sy

PS: The above also addresses your more recent mail: I don’t reject a form of maximality just because it contradicts supercompacts (because I don’t see how supercompact existence is derivable from any form of maximality) and I don’t see any problem with rejecting maximality principles that contradict ZFC, simply because by convention ZFC is taken in the HP as the standard theory.

PPS: A somewhat weird but possibly interesting investigation would indeed be to drop the ZFC convention and examine criteria for the maximality of V in height and width over a weaker theory.

Re: Paper and slides on indefiniteness of CH

Dear Sy,

It is a virtue of a program if it generates predictions which are subsequently verified. To the extent that these predictions are verified one obtains extrinsic evidence for the program. To the extent that these predictions are refuted one obtains extrinsic evidence for the problematic nature of the program. It need not be a prediction which would “seal the deal” in the one case and “set it back to square one” in the other (two rather extreme cases). But there should be predictions which would lend support in the one case and take away support in the other.

The programs for new axioms that I am familiar with have had this feature. Here are some examples:

(1) Definable Determinacy.

The descriptive set theorists made many predictions that were subsequently verified and taken as support for axioms of definable determinacy. To mention just a few. There was the prediction that \text{AD}^{L(\mathbb R)} would lift the structure theory of Borel sets of reals (provable in ZFC) to sets of reals in L(\mathbb R). This checked out. There was the prediction that \text{AD}^{L(\mathbb R)} followed from large cardinals. This checked out. The story here is long and impressive and I think that it provides us with a model of a strong case for new axioms. For the details of this story — which is, in my view, a case of prediction and verification and, more generally, a case that parallels what happens when one makes a case in physics — see the Stanford Encyclopedia of Philosophy entry “Large Cardinals and Determinacy”, Tony Martin’s paper “Evidence in Mathematics”, and Pen’s many writings on the topic.

(2) Forcing Axioms

These axioms are based on ideas of “maximality” in a rather special sense. The forcing axioms ranging from \textsf{MA} to \textsf{MM}^{++} are a generalization along one dimension (generalizations of the Baire Category Theorem, as nicely spelled out in Todorcevic’s recent book “Notes on Forcing Axioms”) and the axiom (*) is a generalization along a closely related dimension. As in the case of Definable Determinacy there has been a pretty clear program and a great deal of verification and convergence. And, at the current stage advocates of forcing axioms are able to point to a conjecture which if proved would support their view and if refuted would raise a serious problem (though not necessarily setting it back to square one), namely, the conjecture that \textsf{MM}^{++} and (*) are compatible. That I take to be a virtue of the program. There are test cases. (See Magidor’s contribution to the EFI Project for more on this aspect of the program.)

(3) Ultimate L

Here we have lots of predictions which if proved would support the program and there are propositions which if proved would raise problems for the program. The most notable one is the “Ultimate L Conjecture”. But there are many other things. E.g., that conjecture implies that V=HOD. So, if the ideas of your recent letter work out, and your conjecture (combined with results of “Suitable Extender Models, I”) proves the HOD Conjecture then this will lend some support to “V = Ultimate L” in that “V = Ultimate L” predicts a proposition that was subsequently verified in ZFC.

It may be too much to ask that your program at this stage make such predictions. But I hope that it aspires to that. For if it does not then, as I mentioned earlier, one has the suspicion that it is infinitely revisable and “not even wrong”.

One additional worry is the vagueness of the idea of the “‘maximal’ iterative conception of set”. If there were a lot of convergence in what was being mined from this concept then one might think that it was clear after all. But I have not seen a lot of convergence. Moreover, while you first claimed to be getting “intrinsic justifications” (an epistemically secure sort of thing) now you are claiming to arrive only at “intrinsic heuristics” (a rather loose sort of thing). To be sure, a vague notion can lead to motivations that lead to a great deal of wonderful and intriguing mathematics. And this has clearly happened in your work. But to get more than interesting mathematical results — to make a case for new axioms — at some stage one will have to do more than generate suggestions — one will have to start producing propositions which if proved would support the program and if refuted would weaken the program.

I imagine you agree and that that is the position that you ultimately want to be in.

Best,
Peter

Re: Paper and slides on indefiniteness of CH

Dear Hugh,

Hmm… \aleph_{\omega} is an infinite cardinal which is singular in HOD.

Something is missing in the formulation of (**).

You are absolutely right! My apologies for that. It should have been:

(*) \alpha^+ of \text{HOD} is less than \alpha^+ for all infinite cardinals \alpha, and

(**) There is a closed unbounded class of cardinals which are regular in HOD.

Or even more simply:

(***) There is a closed unbounded class of cardinals \alpha such that \alpha is regular in HOD and \alpha^+ of HOD is less than \alpha^+.

(***) is consistent with #-generation (ordinal maximality) and implies that there are no supercompacts.

Best,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Sy,

On Oct 26, 2014, at 7:39 PM, Sy David Friedman wrote:

Dear Peter,

But probably there’s a proof of no Reinhardt cardinals in ZF, even without Ultimate-L:

Conjecture: In ZF, the Stable Core is rigid.

Note that V is generic over the Stable Core.

I took a brief look at your paper on the stable core and did not immediately see anything that genuinely seemed to argue for the conjecture you make above. (Maybe I just did not look at the correct paper).

Are you just really conjecturing that there is no (nontrivial) j:\text{HOD} \to \text{HOD}, or more generally that if V is a generic extension of an inner model N (by a class forcing which is amenable to N) then there is no nontrivial j:N \to N? Or is there extra information about the Stable Core which motivates the conjecture?

I would think that based on HP etc., you would actually conjecture that there is a nontrivial j:\text{HOD} \to \text{HOD}. This would have the added advantage of explaining why V \neq \text{HOD} follows from maximality considerations etc. (This declaration you have made at several points in this thread and which I must confess I have never really understood the reasons for.)

This seems like a perfect opportunity for you to use your conception of HP and boldly make a conjecture. (i.e. that the existence of j:\text{HOD} \to \text{HOD} is consistent because by the HP protocols, the class free version, stated as (1) below, must be true in the preferred ct.’s and these are #-generated).

The axiom (that there is such a j) surely transcends the hierarchy we have now. So this HP insight if well grounded would be a remarkable success.

You could then very naturally go further and modify your unreachability property to:
if N is a proper inner model of V then there is a nontrivial j:N \to N.

In fact you could combine everything and go with the following perfect pairing:

1) For all sufficiently large \kappa, there is a nontrivial elementary embedding
j: \text{HOD}\cap V_{\kappa} \to \text{HOD} \cap V_{\kappa}.

2) If N is a proper inner model of V (definable from parameters) then N \subset \text{HOD}_A for some set A \subset \text{Ord}.

In the Ultimate-L approach one faces a similar choice but there one is compelled to reject such non-rigidity conjectures (since they must be false in that approach).

But why do you? You did after all write to Peter on Oct 19:

Well, since this thread is no stranger to huge extrapolations beyond current knowledge, I’ll throw out the following scenario: By the mid-22nd century we’ll have canonical inner models for all large cardinals right up to a Reinhardt cardinal. What will simply happen is that when the LCs start approaching Reinhardt the associated canonical inner model won’t satisfy AC. The natural chain of theories leading up the interpretability hierarchy will only include theories that have AC: they will assert the existence of a canonical inner model of some large cardinal. These theories are better than theories which assert LC existence, which give little information.

Regards,
Hugh