Re: Paper and slides on indefiniteness of CH

Looks like I have three roles here.

1. Very recently, some genuinely new content that actually investigates some generally understandable aspects of “intrinsic maximality”. This has led rather nicely to legitimate foundational programs of a generally understandable nature, involving new kinds of investigations into decision procedures in set theory.

2. Attempts to direct the discussion into more productive topics. Recall the persistent subject line of this thread! The last time I tried this, I got a detailed response from Peter which I intended to answer, but put 1 above at a higher priority.

3. And finally, some generally understandable commentary on what is neither generally understandable nor productive of any tangible outcome.

This is a brief dose of 3.

QUOTE FROM BSL PAPER BY MR. ENERGY (jointly authored):

The approach that we present here shares many features, though not all, of Goedel’s program for new axioms. Let us briefly illustrate it. The Hyperuniverse Program is an attempt to clarify which first-order set-theoretic statements (beyond ZFC and its implications) are to be regarded as true in V, by creating a context in which different pictures of the set-theoretic universe can be compared. This context is the hyperuniverse, defined as the collection of all countable transitive models of ZFC.

DIGRESSION: The above seems to accept ZFC as “true in V”, but later discussions raise issues with this, especially with AxC.

So here we have the idiosyncratic propagandistic slogan “HP” for

*Hyperuniverse Program*

And we have the DEFINITION of the hyperuniverse as

**the collection of all countable transitive models of ZFC**

QUOTE FROM THIS MORNING BY MR. ENERGY:

That is why it is quite inappropriate, as you have done on numerous occasions, to refer to the HP as the study of ctm’s, as there is no need to consider ctm’s at all, and even if one does (by applying LS), the properties of ctm’s that result are very special indeed, far more special than what a full-blown theory of ctm’s would entail.

If it is supposed to be “inappropriate to refer to the HP as the study of ctm’s”, and “no need to consider ctm’s at all”, then why coin the term Hyperuniverse Program and then DEFINE the Hyperuniverse as the collection of all countable transitive models of ZFC???

THE SOLUTION (as I suggested many times)

Stop using HP and instead use CTMP = countable transitive model program. Only AFTER something foundationally convincing arises, AFTER working through all kinds of pitfalls carefully and objectively, consider trying to put forth and defend a foundational program.

In the meantime, go for a “full-blown theory of ctm’s” (language from Mr. Energy) so that you at least have something tangible to show for the effort if and when people reject your foundational program(s).

GENERALLY UNDERSTANDABLE AND VERY DIRECT PITFALLS IN USING INTRINSIC MAXIMALITY

It is “obvious” from intrinsic maximality that the GCH fails at all infinite cardinals because of “width considerations”.

This “refutes” the continuum hypothesis. This also “refutes” the existence of (\omega+2)-extendible cardinals, since they imply that the GCH holds at some infinite cardinals (Solovay).

QED

LESSONS TO BE LEARNED

You have to creatively analyze what is wrong with the above use of “intrinsic maximality”, and how it is fundamentally to be distinguished from other uses of “intrinsic maximality” that one is putting forward as legitimate. If this can be done in a suitably creative and convincing way, THEN you have at least the beginnings of a legitimate foundational program. WARNING: if the distinction is drawn too artificially, then you are not creating a legitimate foundational program.

Harvey

Re: Paper and slides on indefiniteness of CH

This is a continuation of my earlier message. Recall that I have two titles to this note. You get to pick the title that you want.

REFUTATION OF THE CONTINUUM HYPOTHESIS AND EXTENDIBLE CARDINALS

THE PITFALLS OF CITING “INTRINSIC MAXIMALITY”

1. GENERAL STRATEGY.
2. THE LANGUAGE L_0.
3. STRONGER LANGUAGES.

1. GENERAL STRATEGY

Here we present a way of using the informal idea of “intrinsic maximality of the set theoretic universe” to do two things:

1. Refute the continuum hypothesis (using PD and less).
2. Refute the existence of extendible cardinals (in ZFC).

Quite a tall order!

Since I am not that comfortable with “intrinsic maximality”, I am happy to view this for the time being as an additional reason to be even less comfortable.

At least I will resist announcing that I have refuted both the continuum hypothesis and the existence of certain extensively studied large cardinals!

INFORMAL HYPOTHESIS. Let \phi(x,y,z) be a simple property of sets x,y,z. Suppose ZFC + “for all infinite x, there exist infinitely many distinct sets which are pairwise incomparable under \phi(x,y,z)” is consistent. Then for all infinite x, there exist infinitely many distinct sets which are pairwise incomparable under \phi(x,y,z).

Since we are going to be considering only very simple properties, we allow for more flexibility.

INFORMAL HYPOTHESIS. Let 0 \leq n,m \leq \omega. Let \phi(x,y,z) be a simple property of sets x,y,z. Suppose ZFC + “for all x with at least n elements, there exist m distinct sets which are pairwise incomparable under \phi(x,y,z)” is consistent. Then for all x with at least n elements, there exist at least m distinct sets which are pairwise incomparable under \phi(x,y,z).
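On one natural reading (my gloss – the text deliberately leaves “simple” and “incomparable” informal), \phi defines, with parameter x, a comparability relation in its last two arguments, and the hypothesis is the schema:

```latex
% One possible formalization (my reading; the original leaves "simple"
% and "incomparable" informal).
% Incomparability: y_i, y_j are incomparable under \phi(x,\cdot,\cdot) iff
%    \neg\phi(x,y_i,y_j) \wedge \neg\phi(x,y_j,y_i).
% Schema, for fixed 0 \leq n,m \leq \omega and simple \phi:
\mathrm{Con}\Bigl(\mathrm{ZFC} + \forall x\,\bigl(|x| \geq n \rightarrow
   \exists\,\text{distinct } y_1,\dots,y_m\ \text{pairwise incomparable under }
   \phi(x,\cdot,\cdot)\bigr)\Bigr)
\;\Longrightarrow\;
\forall x\,\bigl(|x| \geq n \rightarrow \exists\,\text{distinct } y_1,\dots,y_m\
   \text{pairwise incomparable under } \phi(x,\cdot,\cdot)\bigr).
```

Here \mathrm{Con}(\cdot) is consistency and |x| \geq n abbreviates “x has at least n elements”.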

We can view the above as reflecting the “intrinsic maximality of the set theoretic universe”.

We will see that this Informal Hypothesis leads to “refutations” of both the continuum hypothesis and the existence of certain large cardinals, even using very primitive phi in very primitive set theoretic languages.

2. THE LANGUAGE L_0

L_0 has variables over sets, and the symbols =, <, \leq^*, \cup. Here =, <, \leq^* are binary relation symbols, and \cup is a unary function symbol. x \leq^* y is interpreted as “there exists a function from x onto y”. \cup is the usual union operator, \cup x being the set of all elements of elements of x.

\text{MAX}(L_0,n,m). Let 0 \leq n,m \leq \omega. Let \phi(x,y,z) be the conjunction of finitely many formulas of L_0 in variables x,y,z. Suppose ZFC + “for all x with at least n elements, there exist m distinct sets which are pairwise incomparable under \phi(x,y,z)” is consistent. Then for all x with at least n elements, there exist at least m distinct sets which are pairwise incomparable under \phi(x,y,z).

THEOREM 2.1. ZFC + \text{MAX}(L_0,\omega,\omega) proves that there is no (\omega+2)-extendible cardinal.

More generally, we have

THEOREM 2.2. Let 2 < \log(m)+1 < n \leq \omega.

i. ZFC + \text{MAX}(L_0,n,m) proves that there is no (\omega+2)-extendible cardinal. Here \log(\omega) = \omega.
ii. ZFC + PD + \text{MAX}(L_0,n,m) proves that the GCH fails at all infinite cardinals. In particular, it refutes the continuum hypothesis.
iii. ii with PD replaced by higher order measurable cardinals in the sense of Mitchell.

We are morally certain that we can easily get a complete understanding of the meaning of the sentences in quotes that arise in the \text{MAX}(L_0,n,m).

Write \text{MAX}(L_0) for

“For all 0 \leq n,m \leq \omega, \text{MAX}(L_0,n,m)”. Using such a complete understanding we should be able to establish that ZFC + \text{MAX}(L_0) is a “good theory”. E.g., such things as

  1. ZFC + PD + \text{MAX}(L_0) is equiconsistent with ZFC + PD.
  2. ZFC + PD + \text{MAX}(L_0) is conservative over ZFC + PD for sentences of second order arithmetic.
  3. ZFC + PD + \text{MAX}(L_0) + “there is a proper class of measurable cardinals” is also conservative over ZFC + PD for sentences of second order arithmetic.

We will revisit this development after we have gained that complete understanding. Then we will go beyond finite conjunctions of atomic formulas in L_0.

The key technical ingredient in this development is the fact that

1. That the GCH fails at all infinite cardinals is incompatible with (\omega+2)-extendible cardinals (Solovay).
2. That the GCH fails at all infinite cardinals is demonstrably consistent using much weaker large cardinals, or using just PD (Foreman/Woodin).

Harvey

Re: Paper and slides on indefiniteness of CH

I have two titles to this note. You get to pick the title that you want.

REFUTATION OF THE CONTINUUM HYPOTHESIS

THE PITFALLS OF CITING “INTRINSIC MAXIMALITY”

Note that the most fundamental and simple nontrivial equivalence relation on the set theoretic universe is that of “being in one-one correspondence”.

Also very fundamental and simple is the equivalence relation EQ on infinite sets of reals “being in one-to-one correspondence”.

Note that it is consistent with ZFC that this fundamental simple EQ has

i. exactly two equivalence classes.
ii. infinitely many equivalence classes.
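To spell out the count behind i and ii (my gloss, not spelled out in the text): an EQ-class of an infinite set of reals is determined by its cardinality, so

```latex
% Gloss (mine): EQ-classes correspond to the infinite cardinals up to the continuum.
\#(\text{EQ-classes}) \;=\; \bigl|\{\kappa : \aleph_0 \leq \kappa \leq 2^{\aleph_0}\}\bigr|.
% Under CH (2^{\aleph_0} = \aleph_1): exactly two classes, \aleph_0 and \aleph_1 (case i).
% Under, e.g., 2^{\aleph_0} \geq \aleph_\omega: infinitely many classes (case ii).
% Both hypotheses are consistent with ZFC, which is all the argument needs.
```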

THEREFORE, by the “intrinsic maximality of the set theoretic universe”, ii holds. THEREFORE, we have refuted the continuum hypothesis (smile).

NOTE: A lesson that can be drawn here is just how important it is to avoid cavalier quoting of “intrinsic maximality of the set theoretic universe”.

In fact, if we factor, we are looking at a set for which it is consistent with ZFC that it has, on the one hand, exactly two elements, and on the other hand, is infinite. So by “intrinsic maximality of the set theoretic universe”, it must be infinite (smile).

GENERAL PRINCIPLE. Let EQ be a simple equivalence relation. Suppose ZFC + “EQ has infinitely many equivalence classes” is consistent. Then EQ actually has infinitely many equivalence classes.

Here is the legitimate foundational program.

  1. Set up an elementary language that is based on only some of the most set theoretically fundamental notions.
  2. Determine which “simple” definitions define equivalence relations. Show that this is robust, in that here truth is the same as provability in ZFC and in ZC.
  3. Determine what is consistent with ZFC about the number of equivalence classes of items in 2.
  4. Now apply the general principle, and show that the resulting statements are (even collectively?) consistent with ZFC. Perhaps the general principle will be seen to be equivalent over ZFC to “the continuum is greater than \aleph_\omega”, or perhaps to some version of the failure of GCH?
  5. Rework 1-4 with ever stronger elementary languages and ever less “simple” definitions, until one hits a brick wall.

The immediate problem is to get a good prototype for this elementary language. We want it to be not ad hoc, and so should be in tune with the most basic set theoretic material.

While doing this real time foundations, it now appears, provisionally, that we are best off using “there is a function from x onto y” and not just “there is a bijection from x onto y”. The former is more flexible than the latter, and still very very basic for elementary set theory.

ELST = elementary set theory. We have

  1. Equality, and union operator (set of all elements of elements).
  2. There is a function from x onto y. Written x\geq y.
  3. Convenient to have variables range only over infinite sets.

Something interesting has arisen. This language supports even more naturally the 3-ary relation

T(x,y,z) if and only if

i. The union of y and the union of z are both x.
ii. y \geq z \geq x and z \geq y \geq x.
iii. (Implicitly, x,y,z are infinite).

Two observations.

  1. We have defined T as a conjunction of a small number of atomic formulas in x,y,z, with no nesting of the union operator.
  2. For all x, T_x is an equivalence relation.

Thus there is great simplicity here. We can provisionally concentrate on just cases of 1.2, even perhaps with a limit on the number of atomic formulas. We can also relax the “no nesting”.
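For observation 2, here is a quick verification sketch (mine, not from the text), writing y \geq z for “some function maps y onto z”:

```latex
% Verification sketch (mine) that each T_x is an equivalence relation on its field.
% Symmetry:     T(x,y,z) \iff T(x,z,y), since clauses i--iii are symmetric in y,z.
% Transitivity: T(x,y,z) \wedge T(x,z,w) gives \bigcup y = \bigcup w = x, and
%    y \geq z \geq w \Rightarrow y \geq w, \qquad w \geq z \geq y \Rightarrow w \geq y,
%    with y \geq x, w \geq x already given; hence T(x,y,w).
% Reflexivity on the field: T(x,y,z) \Rightarrow T(x,y,y), via the identity surjection
%    for y \geq y, together with \bigcup y = x and y \geq x.
```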

So we have a parameterized equivalence relation. We should look at a modified General Principle.

GENERAL PRINCIPLE. Let T be a 3-ary parameterized equivalence relation. Suppose ZFC + “for all x, T_x has infinitely many equivalence classes” is consistent. Then for all x, T_x actually has infinitely many equivalence classes.

In this way, we should be getting the robustness referred to above, and also the failure of GCH at every infinite cardinal.

I’ll stop here with this provisional beginning…

Harvey

Re: Paper and slides on indefiniteness of CH

From Mr. Energy:

Why? I think I made my position clear enough: I stated a consistent Maximality Criterion and based on my proof (with co-authors) of its consistency I have the impression that this Criterion contradicts supercompacts (not just extendibles). So that leads to a tentative rejection of supercompacts until the situation changes through further understanding of further Maximality Criteria. It’s analogous to what happened with the IMH: It led to a tentative rejection of inaccessibles, but then when Vertical Maximality was taken into account, it became obvious that the IMH# was a better criterion than the IMH and the IMH# is compatible with inaccessibles and more.

I also think that the Maximality Criterion I stated could be made much stronger, which I think is only possible if one denies the existence of supercompacts. (Just a conjecture, no theorem yet.)

Looks like I have been nominated long ago (smile) to try to turn this controversy into something readily digestible – and interesting – for everybody.

A main motivator for me in this arguably unproductive traffic is to underscore the great value of real time interaction. Bad ideas can be outed in real time! Bad ideas can be reformulated as reasonable ideas in real time!! Good new ideas can emerge in real time!!! What more can you want? Back to this situation.

This thread is now showing even more clearly the pitfalls of using unanalyzed flowery language like “Maximality Criterion” to try to draw striking conclusions (technical advances not yet achieved, but perhaps expected). Nobody would bother to complain if the striking conclusions were compatible with existing well accepted orthodoxy.

So what is really being said here is something like this:

“My (Mr. Energy) fundamental thinking about the set theoretic universe is so wise that under anticipated technical advances, it is sufficient to overthrow long established and generally accepted orthodoxy”.

What is so unusual here is that this unwarranted arrogance is so prominently displayed in a highly public environment with several of the most well known scholars in relevant areas actively engaged!

What was life like before email? We see highly problematic ideas being unravelled in real time.

What would a rational person be putting forward? Instead of the arrogant

*Maximality Criteria tell us that HOD is much smaller than V and this (is probably going to be shown in the realistic future to) refutes certain large cardinal hypotheses*

the entirely reasonable

**Certain large cardinal hypotheses (are probably going to be shown in the realistic future to) imply that HOD has similarities to V. Such similarities cannot be proved or refuted in ZFC. This refutes certain kinds of formulations of “Maximality” in higher set theory, under relevant large cardinal hypotheses.**

and then remark something like this:

***The notion “intrinsic maximality of the set theoretic universe” is in great need of clear elucidation. Many formulations lead to inconsistencies or refutations of certain large cardinal hypotheses. We hope to find a philosophically coherent analysis of it from first principles that may serve as a guide to the appropriateness of many set theoretic hypotheses. In particular, the use of HOD in formulations can be criticized, and raises a number of unresolved issues.***

Again, what was life like before email? We might have been seeing students and postdocs running around Europe openly claiming to refute various large cardinal hypotheses!

Harvey

Re: Paper and slides on indefiniteness of CH

Mr. Energy writes (two excerpts):

With co-authors I established the consistency of the following

Maximality Criterion. For each infinite cardinal \alpha, \alpha^+ of \text{HOD} is less than \alpha^+.

Both Hugh and I feel that this Criterion violates the existence of certain large cardinals. If that is confirmed, then I will (tentatively) conclude that Maximality contradicts the existence of large cardinals. Hugh will conclude that there is something wrong with the above Maximality Criterion and it therefore should be rejected.

My point is that Hugh considers large cardinal existence to be part of set-theoretic truth. Why? I have yet to see an argument that large cardinal existence is needed for “good set theory”, so it does not follow from Type 1 evidence. That is why I think that large cardinal existence is part of Hugh’s personal theory of truth.

My guess is he’d also consider type 2 evidence (involving the relations of set theory to the rest of mathematics) if there were some ready to hand.

There is some ready to hand: At present, Type 2 evidence points towards Forcing Axioms, and these contradict CH and therefore contradict Ultimate L.

I have written dozens of e-mails to explain what I am doing and I take it as a good sign that I am still standing, having responded consistently to each point. If there is something genuinely new to be said, fine, I will respond to it, but as I see it now we have covered everything: The HP is simply a focused investigation of mathematical criteria for the maximality of V in height and width, with the aim of convergence towards an optimal such criterion. The success of the programme will be judged by the extent to which it achieves that goal. Interesting math has already come out of the programme and will continue to come out of it. I am glad that at least Hugh has offered a bit of encouragement to me to get to work on it.

This illustrates the pitfalls involved in trying to use an idiosyncratic propagandistic slogan like “HP” to refer to an unanalyzed philosophical conception with language like “intrinsic maximality of the set theoretic universe”. Just look at how treacherous this whole area of “philosophically motivated higher set theory” can be.

E.g., MA (Martin’s axiom), already under appropriate formulations, looks like some sort of “intrinsic maximality” – at least as clearly as many things purported on this thread to exhibit some sort of “intrinsic maximality” – and already implies that CH is false. So have we now completely solved the CH negatively? If so, why? If not, why not? See what happens with an unanalyzed notion of “intrinsic maximality of the set theoretic universe”. Also MM (Martin’s maximum) is even stronger, and implies that 2^\omega = \omega_2. It also looks like “intrinsic maximality of the set theoretic universe”, at least before any convincing analysis of it. So do we now know that 2^\omega = \omega_2 follows from the “intrinsic maximality of the set theoretic universe”?

I will now take an obvious step toward turning at least some of this very unsatisfying stuff into something completely unproblematic – without the idiosyncratic propagandistic slogans – AND something (hopefully) not needing countable transitive models for straightforward formulations.

Ready? Here is the narrative.

1. We want to explore the idea that

*L is a tiny part of V*
*L is very different from V*

We also want to explore the idea that

**HOD is a tiny part of V**
**HOD is very different from V**

Here HOD = hereditarily ordinal definable sets. Myhill/Scott proved that HOD satisfies ZFC, following semiformal remarks of Gödel.

2. There are some interesting arguments that one can give for L being a tiny part of V. These arguments themselves can be subjected to various kinds of scrutiny, and that is an interesting topic in and of its own. But we shall, for the time being, take it for granted that we are starting off with “L is a tiny part of V”.

3. On the other hand, the arguments that HOD is a tiny part of V are, at least at the moment, fewer and much weaker. This reflects some important technical differences between L and HOD. E.g., L is very stable in the sense that L within L is L. However, HOD within HOD may not be HOD (that’s independent of ZFC).

4. Another related big difference between L and HOD is the following. You can prove that any formal extension of the set theoretic universe compatible with the set theoretic universe in a nice sense, must violate V = L if the original set theoretic universe violates V = L. This is the kind of thing that adds to an arsenal of possible arguments that L is only a part or tiny part of V. However, the set theoretic universe demonstrably has a formal extension satisfying V = HOD even if the set theoretic universe does not satisfy V = HOD. This makes the idea that HOD is a tiny part of V a much more problematic “consequence” of “intrinsic maximality of the set theoretic universe”.

5. Yet another difference. Vopenka proved in ZFC that every set can be obtained by set forcing over HOD. That every set can be obtained by set forcing over L is known to be independent of ZFC, and in fact contradicts medium-sized large cardinals (such as measurable cardinals, and even 0^\#). The same is true with set forcing replaced by class forcing.

6. Incidentally, I think there is an open question that goes something like this. Let M be the minimal ctm of ZFC. There exists a ctm extension of M with the same ordinals that is not obtainable by class forcing over M – I think even under a very wide notion of class forcing. Still open?

7. Another way of talking about the problematic nature of “V not equal to HOD” as following from “intrinsic maximality” is that, well, maybe if there were more sets, we would be able to make more powerful definitions, putting certain sets into HOD that weren’t there “before”, and then close this off, making V = HOD. Thus this is an attempt to actually turn V = HOD itself into some sort of “intrinsic maximality”!!

8. So the proper move, until there is more creative analysis of “intrinsic maximality of the set theoretic universe” is to simply say, flat out:

*we are going to explore the idea that HOD is a tiny part of V*
*we are going to explore the idea that HOD is very different from V*

and avoid any idiosyncratic propagandistic slogans like “HP”.

9. So now let’s fast forward to the excerpt from Mr. Energy:

With co-authors I established the consistency of the following Maximality Criterion. For each infinite cardinal \alpha, \alpha^+ of HOD is less than \alpha^+. Both Hugh and I feel that this Criterion violates the existence of certain large cardinals. If that is confirmed, then I will (tentatively) conclude that Maximality contradicts the existence of large cardinals. Hugh will conclude that there is something wrong with the above Maximality Criterion and it therefore should be rejected.

Here is a reasonable restatement without the idiosyncratic propaganda – propaganda that papers over all of the issues about HOD raised above.

NEW STATEMENT. With co-authors I (Mr. Energy) established the consistency of the following relative to the consistency of ???

(HOD very different from V). Every infinite set in HOD is the domain of a bijection onto another set in HOD without there being a bijection in HOD.

Furthermore, Hugh and I (Mr. Energy) feel that the above statement refutes the existence of certain kinds of large cardinal hypotheses. If this is confirmed, then it follows that “HOD is very different from V” is incompatible with certain kinds of large cardinal hypotheses.

10. Who can complain about that? Perhaps somebody on the list can clarify just which large cardinal hypotheses might be incompatible with the above statement?

11. Let’s now step back and reflect on this a bit in general terms to make more of it. What can we say about “HOD very different from V” in general terms?

HOD is an elementary substructure of V

is of course very strong. This is equivalent to saying that V = HOD.

But the above statement is an extremely strong refutation of elementary substructurehood.

THEOREM (?). The most severe/simplest possible violation of L being an elementary substructure of V is that “every infinite set in L is the domain of a bijection onto another set in L without there being a bijection in L”.

THEOREM (?). The most severe/simplest possible violation of HOD being an elementary substructure of V is that “every infinite set in HOD is the domain of a bijection onto another set in HOD without there being a bijection in HOD”.

THEOREM (???). The most severe/simplest possible form of V not being equal to L is that “every infinite set in L is the domain of a bijection onto another set in L without there being a bijection in L”.

THEOREM (???). The most severe/simplest possible form of V not being equal to HOD is that “every infinite set in HOD is the domain of a bijection onto another set in HOD without there being a bijection in HOD”.

Since this morning I am doing some real time foundations (of higher set theory), I should be allowed to state Theorems without knowing how to state them.

I also reserve the right to stop here.

I have written dozens of e-mails to explain what I am doing and I take it as a good sign that I am still standing, having responded consistently to each point. If there is something genuinely new to be said, fine, I will respond to it, but as I see it now we have covered everything: The HP is simply a focused investigation of mathematical criteria for the maximality of V in height and width, with the aim of convergence towards an optimal such criterion. The success of the programme will be judged by the extent to which it achieves that goal. Interesting math has already come out of the programme and will continue to come out of it. I am glad that at least Hugh has offered a bit of encouragement to me to get to work on it.

Of course, you have chosen to respond to much but not all of what everybody has written here, except me, invoking the “brother privilege”. Actually, I wonder if the “brother privilege” – that you do not have to respond to your brother in an open intellectual forum – is a consequence of the “intrinsic maximality of the set theoretic universe”?

If you are looking for “something genuinely new to say”, then you can start with the dozens of emails I have put on this thread. Actually, you have covered very little by serious foundational standards.

On a mathematical note, you can start by talking about #-generation: what it means in generally understandable terms, why it is natural and/or important, and so forth – and why it is an appropriate vehicle for “fixing” IMH (if it is). It is absurd to think that a two line description weeks (or is it months) ago is even remotely appropriate for a list of about 75 readers. Also, continually referring to type 1, type 2, type 3 set theoretic themes without using real and short names is a totally unnecessary abuse of the readers of this list. People are generally not going to be keeping that in their heads – even if they have not been throwing your messages (and mine) into the trash. Are the numbers 1, 2, 3 canonically associated with those themes? Furthermore, your brief discussion of them was entirely superficial. There are crucial issues involved in just what the interaction of higher set theory with mathematics is, and these have hardly been discussed at all here, either by you or by others.

BOTTOM LINE ADVICE.

Change HP to CTMP = countable transitive model program. Cast headlines for statements in terms like “HOD is very different from V” or “HOD is a tiny part of V” or things like that. Avoid “intrinsic maximality of the set theoretic universe” unless you have something new to say that is philosophically compelling.

Harvey

Re: Paper and slides on indefiniteness of CH

Regarding a few recent quotes from Mr. Energy:

I want to know what you mean when you say “PD is true”. Is it true because you want it to be true? Is it true because ALL forms of good set theory imply PD? I have already challenged, in my view successfully, the claim that all sufficiently strong natural theories imply it; so what is the basis for saying that PD is true?

It has nothing to do with the HP either (which I repeat can proceed perfectly well without discussing ctm’s anyway).

Pen’s Thin Realism provides a grounding for Type 1 truth. Mathematical practice outside of set theory provides a grounding for Type 2 truth. Our intuitions about the maximality of V in height and width provide a grounding for Type 3 truth. How is Hugh’s personal theory of truth grounded?

I assume that Hugh wants to claim that all natural paths in higher set theory, where “all x^\#, x \subseteq \omega, exist” lies at an early stage of development, lead to PD. Although it would be very nice to have some formalized version of this, it does seem to make sense, and I don’t recall seeing any convincing counterexample to it.

For instance, one can set up a language for natural statements in the projective hierarchy, and try to prove rigorous theorems backing up this statement.

“It has nothing to do with the HP either (which I repeat can proceed perfectly well without discussing ctm’s anyway).”

Any version of HP that I have seen, even for IMH, including any of my own (the only one that is “new” involves Boolean algebras), is awkward compared to using countable transitive models. So “perfectly well” seems like an exaggeration at best. Also, I think (have I got this right?) Hugh pointed out a place in the proliferating “fixes” of IMH for which ctms are needed, or perhaps where the awkwardness of not using them becomes really severe. In addition, I think you never answered some of Hugh’s questions about formulating precise and interesting fixes of IMH.

ASIDE: It now seems that any settling of CH via “HP” or CTMP is extremely remote. You did not start the discussion here with this point of view. Recall the subject line of this email.

I think of IMH, with that triple paper, as something not uninteresting. My impression is that you don’t have a comparable second not uninteresting development. IMH has, under standard views at least, a prima facie fatal flaw that calls into doubt the coherence of the very notion you keep talking about – intrinsic maximality of the set theoretic universe. What seems most dubious about “HP” is that it is not robust, and doesn’t have a second not uninteresting success for a wide range of people to really ponder. My back channels indicate to me that the “fixes” artificially layer the idea of IMH on top of large cardinals, which is not a convincing way to proceed.

You should simply rename it CTMP (countable transitive model program), as Hugh and I have said, and then you have a license to pursue practically any grammatically coherent question whatsoever in the realm of ctms as a not uninteresting corner of higher set theory. If something foundational or philosophically coherent comes out of pursuing CTMP then you can try to make something of it foundationally or philosophically. You just don’t have enough success with “HP” to do this with it now. No, you can’t reasonably just invent a branch of set theory called “intrinsic maximality” without more not uninteresting successes. That’s way premature.

Since you spent the bulk of your career on not uninteresting technical work in set theory, it is heroic to try to “get religion” and do something “truly important”, as you are 61. I can see how you got excited with IMH, and got just the right help with the technical complications (Welch, Woodin). But you are trying to dress this up into a foundational/philosophical program under a hopelessly idiosyncratic propagandistic name (HP) way too early, and should instead have started CTMP and pondered the difficulties with “intrinsic maximality” in an objective and creative way. Incidentally, the way PD is used to prove the consistency of IMH in that triple paper does lend some credence to Hugh’s conjecture that “HP” may well simply be another path leading to PD.

OK, I am skeptical in many dimensions of higher set theory, and probably will be raising issues with Koellner/Woodin. You have done some of that already, sometimes with unexpectedly strong language. But I don’t think that the “HP” is strong enough at this point to be using it to set an example that would undermine Koellner/Woodin. You surely can raise some legitimate issues without holding up “HP” as superior.

Harvey

Re: Paper and slides on indefiniteness of CH

“It just occurred to me that we already have a consistent maximality
principle which provably contradicts large cardinal existence.

This is not enough evidence to infer the nonexistence of supercompacts
from maximality but this is definitely pointing in that direction!”

Well, how about the closely related

“It just occurred to me that we already have a maximality principle
which provably contradicts ZFC.

This is not enough evidence to infer that ZFC fails from maximality
but this is definitely pointing in that direction!”

Such is the folly of trying to apply an insufficiently analyzed
informal idea in a technical context. There is no substitute for at
least trying to come to grips with the plethora of inconsistent
formulations of so called “intrinsic maximality of the set theoretic
universe” in a philosophically honest way.

Harvey

Re: Paper and slides on indefiniteness of CH

We haven’t discussed Hugh’s Ultimate L program much. There are two big differences between this program and CTMP (aka HP). As I understand it,

1. It offers a proposed preferred set theoretic universe in which it is clear that CH holds – but the question of its existence relative to large cardinals (or relative consistency) is a (or the) major open question in the program.

2. In connection with 1, there are open conjectures (formulated in ZFC) which, if proved, would show how to refute Reinhardt’s axiom (the existence of j:V \to V) within ZF (and more).

So even if one rejects 1, this effort will leave us at least with 2, which is nearly universally regarded as important in the set theory community.

It would be nice for most people on this thread to have a generally understandable account of at least the structure of 1. I know that there has been some formulations already on the thread, but they are a while ago, and relatively technical. So let me ask some leading questions.

Can this Ultimate L proposal be presented in the following generally understandable shape?

Goedel’s constructible sets, going by the name of L, are built up along the ordinals in a very well defined way. This allows all of the usual set theoretic problems like CH to become nice mathematical problems, when formulated within the set theoretic universe of constructible sets, L. Thanks to Goedel, Jensen, and others, all of these problems have been settled as L problems. (L is the original so called inner model).

Dana Scott showed that L cannot accommodate measurable cardinals. There is an incompatibility.

Jack Silver showed that L can be extended to accommodate measurable cardinals. He worked out L[U], where U stands for a suitable measure on a measurable cardinal. The construction is somewhat analogous to the original Goedel’s L. Also all of the usual set theoretic problems like CH are settled in L[U].

This Gödel-Silver program (you don’t usually see that name though) has been lifted to considerably stronger large cardinals, with the same outcome. The name you usually see is “the inner model program”. The program slowed down to a trickle, and is stalled at some medium large cardinals considerably stronger than measurable cardinals, but very much weaker than – well it’s a bit technical and I’ll let others fill in the blanks here.

“Inner model theory for a large cardinal” became a reasonably understood notion at an informal or semiformal level. And some good test questions emerged that seem to be solvable only by finding an appropriate “inner model theory” for some large cardinals.

So I think this sets the stage for a generally understandable or almost generally understandable discussion of what Hugh is aiming to do.

Perhaps Hugh has picked out some essential features of the inner models constructed so far, adds to them some additional desirable features, and either conjectures or proves that there is a largest such inner model – if there is any such inner model at all. I am hoping that this is screwed up only a limited amount, and that the accurate story can be given in roughly these terms, black boxing the important details.

There are also a lot of important issues that we have only touched on in this thread that I think we should return to. Here is a partial list.

1. Sol maintains that there is a crucial difference between (\mathbb N,+,\times) and (P(\mathbb N),\mathbb N,\in,+,\times) that drives an enormous difference in the status of first order sentences. Whereas Peter for sure, and probably Pen, Hugh, Geoffrey strongly deny this. I think that Sol’s position is stronger on this, but I am interested in playing both sides of the fence on this. In particular, one enormous difference between the two structures that is mathematically striking is that the first is finitely generated (even 1-generated), whereas the second is not even countably generated. Of course, one can argue both that this indisputable fact does, and that it does not, inform us about the status of first order sentences. Peter has written on the thread that he has refuted Sol’s arguments in this connection, and Sol denies that Peter has refuted Sol’s arguments in this connection. Needs to be carefully and interactively discussed, even though there has been published stuff on this.
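To spell out the cardinality point in 1 (my gloss, offered for illustration; the exact choice of generators is an assumption on my part):

```latex
% (N,+,x): every nonzero n is a sum of 1's, so {0,1} generates the
% structure (essentially just 1, if 0 is counted as a constant):
\forall n \in \mathbb{N}\,\bigl(n = 0 \ \lor\ \exists k\ n = \underbrace{1 + \cdots + 1}_{k\ \text{times}}\bigr).
% (P(N),N,\in,+,x): the functions + and x act only on N, so the
% substructure generated by any countable set of elements is countable,
% while |P(N)| = 2^{\aleph_0} is uncountable; hence the second structure
% is not even countably generated.
```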

2. The idea of “good set theory” has been crucial to the entire thread here. Obviously, there is the question of what is good set theory. But even more basic is this: I don’t actually hear or see much done at all in higher set theory other than studies of models of higher set theory. By higher set theory I mean more or less set theory except for DST = descriptive set theory. See, DST operates just like any normal mathematical area. DST does not study models of DST, or models of any set theory. DST basically works with Borel and sometimes analytic sets and functions, and applies these notions to shed light on a variety of situations in more or less core mathematics. E.g., ergodic theory, group actions, and the like. Higher set theory operates quite differently. It’s almost entirely wrapped up in metamathematical considerations. Now maybe there is a point of view that says I am wrong, and that if you look at it right, higher set theorists are simply pursuing a normal mathematical agenda – the study of sets. I don’t see this, unless the normal mathematical area is supposed to be “the study of models of higher set theory”. Perhaps people might want to point to working out what can be proved from forcing axioms? Well, I’m not sure this is similar to the situation in a normal area of mathematics like DST. So my point is: judging new axioms for set theory on the basis of “good set theory” or “bad set theory” doesn’t quite match the situation on the ground, as I see it.

3. In fact, the whole enterprise of higher set theory has so many features that are so radically different from the rest of mathematics, that the whole enterprise, to my mind, should come into serious question. Now I want to warn you that I am both a) incredibly enthusiastic about the future of higher set theory, and b) incredibly dismissive about any future of higher set theory whatsoever — all at the same time. This is because a) is based on certain special aspects of higher set theory, whereas b) is based on the remaining aspects of higher set theory. So when you see me talking from both sides of my mouth, you won’t be shocked.

Harvey

Re: Paper and slides on indefiniteness of CH

There’s quite a bit of terminology being used here that most people, including me, are not familiar with.

Parasitic (simple generally understandable examples, please), extendible cardinal (as in Kanamori?), Reinhardt cardinal (I know what Reinhardt’s axiom is, j:V \to V), super Reinhardt cardinal, Berkeley cardinal.

Can someone step up here and explain these in generally understandable terms?

Peter’s message indicates how strong a grip Hugh has on \textsf{IMH}^\#, which, if I recall, was offered as a “fix” for \textsf{IMH} (inner model hypothesis) being incompatible with even an inaccessible cardinal.

Is \textsf{IMH}^\# merely a layering of the \textsf{IMH} idea on top of large cardinal infrastructure?

I believe there have been repeated requests for a more substantial “fix” for \textsf{IMH} (I think Hugh made such or similar requests). Which, if any, have been offered?

Are there any tangible prospects left for “fixing \textsf{IMH}” in order to shed light on CH? Are there any tangible prospects left for “fixing \textsf{IMH}” in order to shed light on any other mathematical open problem in set theory?

NOTE: I had concluded that, on the basis of this extensive traffic, this “HP” is not a legitimate foundational program, and should be renamed CTMP = countable transitive model program, a not uninteresting technical program that has been around for quite some time, with no artificial philosophical pretensions. Just the study of ctms. However, if Peter Koellner is going to put such an enormous amount of painstaking and skilled effort into continuing to correspond about it, I am more than happy to suspend disbelief and ask these questions.

NOTE: My own position is that “intrinsic maximality of the set theoretic universe” is prima facie a deeply flawed notion, fraught with nonrobustness (inconsistencies, particularly) that may or may not be coherently adjusted in order to lead to anything foundationally interesting. In my next message, I am hoping to play both sides of the fence. I will try my hand at coherently adjusting it. I will also try my hand at showing that the notion itself is inherently contradictory. I don’t know yet what I will find.

Harvey

Re: Paper and slides on indefiniteness of CH

This will be my last message about general maximality before I turn – in the next message after this – to “intrinsic maximality of the set theoretic universe”, starting with a discussion of “there exists a non constructible set of integers”. (Recall that I reported receiving enough encouragement for me to continue trying to turn this thread into something more productive).

Here we discuss some kinds of “logical maximality”.

Let T be a theory in the first order predicate calculus with equality, with finitely many constant, relation, and function symbols. We assume that T is given by individual axioms and axiom schemes. A particularly important case is where there are finitely many axioms and finitely many axiom schemes.

We define the associated theory T_\text{extend}, whose language is that of T extended with a new unary predicate symbol P. The axioms of T_\text{extend} are

  1. The extension of P contains the constants and is closed under the functions.
  2. The axioms, and the schemes of T, the latter being treated as schemes in the extended language.
  3. The axioms of T, with quantifiers relativized to P.
  4. The schemes of T, with quantifiers relativized to P. These schemes are treated as schemes in the extended language.

Note that the models of T_\text{extend} are the (M,P), where M is a model of T and P carves out a submodel of M satisfying T, where in both cases the schemes are taken over all formulas in the extended language.

We say that T is logically maximal if and only if T_\text{extend} proves \forall x\ P(x). I.e., every such (M,P) above has P = \text{dom}(M).

THEOREM 1. \textsf{PA} (Peano arithmetic) is logically maximal. \textsf{Z}_2 (formulated as a single sorted theory in the obvious way) is logically maximal. More generally, for n \geq 2, \textsf{Z}_n (formulated as a single sorted theory in the obvious way) is logically maximal. However, no consistent extension of \textsf{Z} by axioms is logically maximal.
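For illustration, here is a sketch (as I read the definitions) of the standard argument for the first claim, that \textsf{PA} is logically maximal:

```latex
% Work in PA_extend. By axiom 1, the extension of P contains the constant 0
% and is closed under the successor function S:
P(0) \ \land\ \forall x\,\bigl(P(x) \rightarrow P(Sx)\bigr).
% By axiom 2, the induction scheme holds for all formulas of the extended
% language, in particular for the formula P(x) itself:
\bigl[P(0) \ \land\ \forall x\,(P(x) \rightarrow P(Sx))\bigr] \rightarrow \forall x\,P(x).
% Combining the two displays, PA_extend proves \forall x\,P(x), i.e., every
% such (M,P) has P = dom(M). So PA is logically maximal.
```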

The last negative claim is rather cheap. Consideration of it suggests a stronger notion.

Let T be as above. We define the associated theory T_\text{elex}, named for elementary extension (of course the resulting notion is stronger than elementary extension alone, because of the schemes in the extended language). T_\text{elex} has language that of T extended with a new unary predicate symbol P as before. The axioms of T_\text{elex} are

  1. The extension of P carves out an elementary substructure with respect to the language of T.
  2. The axioms, and the schemes of T, the latter being treated as schemes in the extended language.
  3. The axioms of T, with quantifiers relativized to P.
  4. The schemes of T, with quantifiers relativized to P. These schemes are treated as schemes in the extended language.

Note that the models of T_\text{elex} are the (M,P), where M is a model of T and P carves out an elementary submodel of M for the language of T, where for both M and the submodel, the schemes hold when taken over formulas in the extended language.

We say that T is elementarily maximal if and only if T_\text{elex} proves \forall x\ P(x). I.e., every such (M,P) above has P = \text{dom}(M).

Note that elementarily maximal is a nice logical notion. Let’s see what happens when we apply it to ZFC and its extensions.

THEOREM 2. Logically maximal implies elementarily maximal (trivial). ZC + “every set has rank some \omega + n” is elementarily maximal. No consistent extension of ZF by axioms is elementarily maximal.

Another example is the real closed fields. There are two particularly well known axiomatizations. One is: ordered field, every positive element has a square root, every monic polynomial of odd degree has a root. The second is: ordered field, plus the scheme of least upper bound.

THEOREM 3. The first axiomatization of real closed fields is not elementarily maximal. The second axiomatization of real closed fields is logically maximal.
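A concrete witness for the first claim (my example; it is not part of the theorem’s statement). The first axiomatization has individual axioms only, no schemes, so the models of its T_\text{elex} are simply the pairs (M,P) where M is a real closed field and P carves out an elementary substructure for the language of ordered fields. One such pair with P proper:

```latex
% The real algebraic numbers form a real closed field, and by model
% completeness of RCF any inclusion of real closed fields is elementary.
M = (\mathbb{R},\, <,\, +,\, \times), \qquad
P = \mathbb{R} \cap \overline{\mathbb{Q}} \quad (\text{the real algebraic numbers}).
% Here P is a proper elementary substructure of M, so T_elex does not
% prove \forall x\, P(x): the first axiomatization is not elementarily
% maximal.
```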

Next time I want to get into real set theoretic issues. Particularly, the relationship between “intrinsic maximality of the set theoretic universe” and the existence(?) of a nonconstructible subset of omega.

Harvey