
Re: Paper and slides on indefiniteness of CH

Dear Pen and Peter,

Pen:

I am sorry to have annoyed you with the issue of TR and Type 2 evidence, indeed you have made it clear many times that the TR does take such evidence into account. I got that! But in your examples, you vigorously hail the virtues of \text{AD}^{L(\mathbb R)} and other developments that have virtually no relevance for math outside of set theory, rather than Forcing Axioms, which provide real Type 2 evidence! As I said, I think you got it very right with your excellent “Defending”, but in your 2nd edition you might want to hail the virtues of Forcing Axioms well above \text{AD}^{L(\mathbb R)}, Ultimate L (should it be “ripe” by the time of your 2nd edition) or other math-irrelevant topics, giving FA’s their richly-earned praise for winning evidence of Types both 1 and 2.

I was really hoping for your reaction to the following, but I guess I ain’t gonna get it:

Hence my conclusion is that the only sensible move for us to make is to gather evidence from all three sources: Set theory as an exciting and rapidly-developing branch of math and as a useful foundation for math, together with evidence we can draw from the concept of set itself via the maximality of V in height and width. None of these three types (which I have labelled as Types 1, 2 and 3, respectively) should be ignored.

Let me make this more specific. Look at the following axioms:

  • V = L
  • V is not L, but is a canonical model of ZFC, generic over L
  • Large Cardinal axioms like Supercompact
  • Forcing Axioms like PFA
  • AD in L(\mathbb R)
  • Cardinal Characteristics like \mathfrak b < \mathfrak a < \mathfrak d
  • (The famous) “Etcetera”

It seems that each of these has pretty good Type 1 evidence (useful for the development of set theory, with P’s and V’s).

But look! We can discriminate between these examples with evidence of Types 2 and 3! Type 2 comes down HARD for Forcing Axioms and V = L, as so far none of the others has done anything important for mathematics outside of set theory. And of course Type 3 kills V = L. So using all three Types of evidence, we have a clear winner, Forcing Axioms!

I expect that without heavy use of Type 2 and Type 3 evidence, i.e., relying on Type 1 evidence alone, we aren’t going to get any consensus about set-theoretic truth.

Peter:

Thanks again for your comments and the time you are putting in with the HP.

1. (Height Maximality, Transcending First-Order) #-generation provides a canonical principle that is compatible with V = L and yields all small large cardinals (those consistent with V = L). In the sense to which Hugh referred, its conjunction with V = L is a Shelahian “semi-complete” axiom encompassing all small large cardinals.

But of course #-generation is not first-order! That has been one of my predictions from the start of this thread: First-order axioms (FA’s, Ultimate L, \text{AD}^{L(\mathbb R)}, …) are inadequate for uncovering the maximal iterative conception. Height maximality demands lengthenings, width maximality demands “thickenings” in the width-actualist sense. We don’t need full second order logic (God forbid!) but only “Gödel” lengthenings of V (and except for the case of Height Maximality, very mild Gödel lengthenings indeed). We need the “external ladder” as you call it, as we can’t possibly capture all small large cardinals without it!

2. (Predictions and Verifications) The more I think about P’s and V’s (Predictions and Verifications), the less I understand them. Maybe you can explain to me why they really promote a better “consensus” than just the sloppy notion of “good set theory”, as I’m really not getting it. Here is an example:

When Ronald solved Suslin’s Hypothesis under V = L, one could have “predicted” that V = L would also provide a satisfying solution to the Generalised Suslin Hypothesis. There was little evidence at the time that this would be possible, as Ronald only had Diamond and not Square. In other words, this “prediction” could have ended in failure, as perhaps it would have been too hard to solve the problem under V = L or the answer from V = L would somehow be “unsatisfying”. Then in profound work, Ronald “verified” this “prediction” by inventing the “fine-structure theory” for L. In my view this is an example of evidence for V = L, based on P’s and V’s, perhaps even more impressive than the “prediction” that the properties of the Borel sets would extend all the way to L(\mathbb R) via large cardinals (Ronald didn’t even need an appeal to anything “additional” like large cardinals, he did it all with V = L). Now one might ask: Did someone really “predict” that the Generalised Suslin Hypothesis would be satisfactorily solved under V = L? I think the correct answer to this question is: Who cares? Any “evidence” for V = L comes from the “good set theory”, not from the “prediction”.
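
To recall the standard facts behind this example (nothing beyond what is alluded to above; \Diamond and \square_\kappa are Jensen's combinatorial principles):

\[
V = L \ \Rightarrow\ \Diamond \ \Rightarrow\ \text{there is a Suslin tree, so Suslin's Hypothesis fails;}
\]

and \square_\kappa together with suitable \Diamond-principles (all available in L via fine structure) yields a \kappa^+-Suslin tree for every infinite cardinal \kappa, so the Generalised Suslin Hypothesis fails under V = L as well.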

It’s hard for me to imagine a brand of “good set theory” that doesn’t have its own P’s and V’s. Another example: I developed a study of models between L and 0^\# based on Ronald’s ground-breaking work in class-forcing, and that resulted in a rich theory in which a number of “predictions” were verified, like the “prediction” that there are canonical models of set theory which lie strictly between L and 0^\# (a pioneering question of Bob’s); but I don’t regard my work as “evidence” for V \neq L (which is necessary for this theory), despite having “verified” this “prediction”. Forcing Axioms: Haven’t they done and won’t they continue to do just fine without the “prediction” that you mention for them? I don’t see what the “problem” is if that “prediction” is not fulfilled; it seems that there is still very good evidence for the truth of Forcing Axioms.

I do acknowledge that Hugh feels strongly about P’s and V’s with regard to his Ultimate-L programme, and he likes to say that he is “sticking his neck out” by making “predictions” that might fail, leading to devastating consequences for his programme. I don’t actually believe this, though: I expect that there will be very good mathematics coming out of his efforts and that this “good set theory” will result in a programme of no less importance than what Hugh is currently hoping for.

So tell me: Don’t P’s and V’s exist for almost any “good set theory”? Is there really more agreement about how “decisive” they are than there is just for which forms of set theory are “good”?

You have asked me why I am more optimistic about a consensus concerning Type 3 evidence. The reason is simple: Whereas set theory as a branch of mathematics is an enormous field, with a huge range of different worthwhile developments, the HP confines itself to just one specific thing: Maximality of V in height and width (not even a broader sense of Maximality). Finding a consensus is therefore a much easier task than it is for Type 1. Moreover the process of “unification” of different criteria is a powerful way to gain consensus (look at the IMH, #-generation and their syntheses, variants of the \textsf{IMH}^\#). Of course “unification” is available for Type 1 evidence as well, but I don’t see it happening. Instead we see Ultimate-L, Forcing Axioms, Cardinal Characteristics, …, developing on their own, going in valuable but distinct directions, as it should be. Indeed they conflict with each other even on the size of the continuum (\omega_1, \omega_2, large, respectively).
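
Spelling out the parenthetical (standard consequences, recorded only as a reminder, with the cardinal-characteristic example taken to be the configuration \mathfrak b < \mathfrak a < \mathfrak d listed earlier):

\[
V = \text{Ultimate-}L \ \Rightarrow\ 2^{\aleph_0} = \aleph_1, \qquad
\text{PFA (or MM)} \ \Rightarrow\ 2^{\aleph_0} = \aleph_2, \qquad
\mathfrak b < \mathfrak a < \mathfrak d \ \Rightarrow\ 2^{\aleph_0} \geq \aleph_3.
\]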

3. (So-Called “Changes” to the HP) OK, Peter, here is where I take you to task: Please stop repeating your tiresome claim that the HP keeps changing, and that as a result it is hard for you to evaluate it. As I explain below, you have simply confused the programme itself with other things, such as the specific criteria that it generates and my own assessment of its significance.

There have been exactly 2 changes to the HP-procedure: one on August 21, when, after talking to Pen (and you), I decided to narrow it to the analysis of the maximality of V in height and width only (the MIC), leaving out other “features of V”, and one on September 24, when, after talking to Geoffrey (and Pen), I decided to make the HP-procedure compatible with width actualism. That’s it; the HP-procedure has remained the same since then. But you didn’t understand the second change and then claimed that I had switched from radical potentialism to height actualism! That was your fault, not mine. Since September 24 you have had a fixed programme to assess, and no excuse to say that you don’t know what the programme is.

Indeed there have been changes in my own assessment of the significance of the HP, and that is something else. I have been enormously influenced by Pen concerning this. I started off telling Sol that I thought that the CH could be “solved” negatively using the HP. My discussions with Pen gave me a much deeper understanding and respect for Type 1 evidence (recall that back then, which seems like ages ago, I accused Hugh of improperly letting set-theoretic practice enter a discussion of set-theoretic truth!). I also came to realise (all on my own) the great importance of Type 2 evidence, which I think has not gotten its due in this thread. I think that we need all 3 types of evidence to make progress on CH and I am not particularly optimistic, as current indications are that we have no reason to expect Types 1, 2 and 3 evidence to come to a common conclusion. I am much more optimistic about a common conclusion concerning other questions like PD and even large cardinals. Another change has been my openness to a gentler HP: I still expect the HP to come to a consensus, leading to “intrinsic consequences of the set-concept”. But failing a consensus, I can imagine a gentler HP, leading only to “intrinsically-based evidence”, analogous to evidence of Types 1 and 2.

Despite my best efforts, you still don’t understand how the HP handles maximality criteria. On 3 September, you attributed to me the absurd claim that both the IMH and inaccessible cardinals are intrinsically justified! I have been trying repeatedly to explain to you since then that the HP works by formulating, analysing, refining, comparing and synthesising a wide range of mathematical criteria with the aim of convergence. Yet in your last mail you say that “We are back to square one”, not because of any change in the HP-procedure or even in the width actualist version of the \textsf{IMH}^\#, but because of a better understanding of the way the \textsf{IMH}^\# translates into a property of countable models. I really don’t know what more I can say to get you to understand how the HP actually works, so I’ll just leave it there and await further questions. But please don’t blame so-called “changes” in the programme for the difficulties you have had with it. In any case, I am grateful that you are willing to take the time to discuss it with me.

Best, Sy

Re: Paper and slides on indefiniteness of CH

Dear Peter and Hugh,

Thanks to you both for the valuable comments and your continued interest in the HP. Answers to your questions follow.

Peter:

1. As I said, #-generation was not invented as a “fix” for anything. It was invented as the optimal form of maximality in height. It is the limit of the small large cardinal hierarchy (inaccessibles, Mahlos, weak compacts, \omega-Erdős, (\omega+\omega)-Erdős, …, #-generation). A nice feature is that it unifies well with the IMH, as follows: The IMH violates inaccessibles. IMH(inaccessibles) violates Mahlos. IMH(Mahlos) violates weak compacts. … IMH(\omega-Erdős) violates (\omega+\omega)-Erdős cardinals, … The limit of this chain of principles is the canonical maximality criterion \textsf{IMH}^\#, which is compatible with all small large cardinals, and as an extra bonus, with all large cardinals. It is a rather weak criterion, but becomes significantly stronger even with the tiny change of adding \omega_1 as a parameter (and considering only \omega_1-preserving outer models).
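
In schematic form (just the chain described above, using the same notation):

\[
\textsf{IMH} \;\to\; \textsf{IMH}(\text{inaccessibles}) \;\to\; \textsf{IMH}(\text{Mahlos}) \;\to\; \cdots \;\to\; \textsf{IMH}(\omega\text{-Erd\H{o}s}) \;\to\; \cdots \;\to\; \textsf{IMH}^\#,
\]

where each criterion in the chain refutes the large cardinals named in the next step, and the limit \textsf{IMH}^\# is compatible with all small large cardinals (indeed, as said above, with all large cardinals).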

2. What I called “Magidor’s embedding reflection” in fact appears in a paper by Victoria Marshall (JSL 54, No.2). As it violates V = L it is not a form of height maximality (the problem is with the internal embeddings involved; if the embeddings are external then one gets a weak form of #-generation). Indeed Marshall Reflection does not appear to be a form of maximality in height or width at all.

You say: “I don’t think that any arguments based on the vague notion of ‘maximality’ provide us with much in the way of justification”. Wow! So you don’t think that inaccessibles are justified on the basis of reflection! Sounds like you’ve been talking to the radical Pen Maddy, who doesn’t believe in any form of intrinsic justification.

3. Here’s the most remarkable part of your message. You say:

“Different people have different views of what ‘good set theory’ amounts to. There’s little intersubjective agreement. In my view, such a vague notion has no place in a foundational enterprise.”

In this thread I have repeatedly and without objection taken Pen’s Thin Realism to be grounded on “good set theory” (or if looking beyond set theory, on “good mathematics”). So you have now rejected not only the HP, but also Thin Realism. My view is that Pen got it exactly right when it comes to evidence from the practice of set theory, one must only acknowledge that such evidence is limited by the lack of consensus on what “good set theory” means.

You are right to say that there is value to “predictions” and “verifications”. But these only serve to make a “good set theory” better. They don’t really change much, as even if a brand of “good set theory” fails to fulfill one of its “predictions”, it can still maintain its high status. Take the example of Forcing Axioms: They have been and always will be regarded as “good set theory”, even if the “prediction” that you attribute to them fails to materialise.

Peter, your unhesitating rejection of approaches to set-theoretic truth is not helpful. You faulted the HP for not being “well-grounded” as its grounding leans on a consensus regarding the “maximality of V in height and width”. Now you fault Thin Realism (TR) for not being “well-grounded” as its grounding leans on “good set theory”. There is an analogy between TR and the HP: Like Pen’s second philosopher, Max (the Maximality Man) is fascinated by the idea of maximality of V in height and width and he “sets out to discover what the world of maximality is like, the range of what there is to the notion and its various properties and behaviours”. In light of this analogy, it is reasonable that someone who likes Thin Realism would also like the HP and vice-versa. It seems that you reject both, yet fail to provide a good substitute. How can we possibly make progress in our understanding of set-theoretic truth with such skepticism? What I hear from Pen and Hugh is a “wait and see” attitude: they want to know what criteria and consensus come out of the HP. Yet you want to reject the approach out of hand. I don’t get it. Are you a pessimist at heart?

It seems to me that in light of your rejection of both TR and HP, the natural way for you to go is “radical skepticism”, which denies this whole discussion of set-theoretic truth in the first place. (Pen claimed to be a radical skeptic, but I don’t entirely believe it, as she does offer us Thin Realism.) Maybe Pen’s Arealism is your real cup of tea?

OK, let’s return to something we agree on: the lack of consensus regarding “good set theory”, where I have something positive to offer. What this lack of consensus suggests to me is that we should seek further clarification by looking to other forms of evidence, namely Type 2 evidence (what provides the best foundation for math) and Type 3 evidence (what follows from the maximality of V in height and width). The optimistic position (I am an optimist at heart) is that the lack of consensus based solely on Type 1 evidence (coming from set-theoretic practice) could be resolved by favouring those Type 1 axioms which in addition are supported by Type 2 evidence, Type 3 evidence, or both. Forcing Axioms seem to be the best current axioms with both Type 1 and Type 2 support, and perhaps if they are unified in some way with Type 3 evidence (consequences of Maximality) one will arrive at axioms which can be regarded as true. This may even give us a glimmer of hope for resolving CH. But of course that is way premature, as we have so much work to do (on all three types of evidence) that it is impossible to make a reasonable prediction at this point.

To summarise this part: Please don’t reject things out of hand. My suggestion (after having been set straight on a number of key points by Pen) is to try to unify the best of three different approaches (practice, foundations, maximality) and see if we can make real progress that way.

4. With regard to your very entertaining story about K and Max: As I have said, one does not need a radical potentialist view to implement the HP, and I now regret having confessed to it (as opposed to a single-universe view augmented by height potentialism), as it is easy to make a mistake using it, as you have done. I explain: Suppose that “we live in a Hyperuniverse” and our aim is to weed out the “optimal universes”. You suggest that maximality criteria for a given ctm M quantify over the entire Hyperuniverse (“Our quantifiers range over CTM-space.”). This is not true and this is a key point: They are expressible in a first-order way over Gödel lengthenings of M. (By Gödel lengthening I mean an initial segment of the universe L(M) built over M, the constructible universe relative to M.) This even applies to #-generation, as explained below to Hugh. From the height potentialist / width actualist view this is quite clear (V is not countable!) and the only reason that Maximality Criteria can be reflected into the Hyperuniverse (denote this by H to save writing) is that they are expressible in this special way (a tiny fragment of second order set theory). But the converse is false, i.e., properties of a member M of H which are expressible in H (essentially arbitrary second-order properties) need not be of this special form. For example, no height maximal universe M is countable in its Gödel lengthenings, even for a radical potentialist, even though it is surely countable in the Hyperuniverse. Briefly put: From the height potentialist / width actualist view, the reduction to the Hyperuniverse results in a study of only very special properties of ctm’s, only those which result from maximality criteria expressed using lengthenings and “thickenings” of V via Löwenheim-Skolem.

So I was too honest, I should not have admitted to a radical form of multiversism (radical potentialism), as it is then easy to misunderstand the HP as you have. As far as the choice of maximality criteria, I can only repeat myself: Please be open-minded and do not prejudge the programme before seeing the criteria that it generates. You will see that our intuitions about maximality criteria are more robust than our intuitions about “good set theory”.

Hugh:

1. The only method I know to obtain the consistency of the maximality criterion I stated involves Prikry-like forcings, which add Weak Squares. Weak Squares contradict supercompactness. In your last mail you verify that stronger maximality criteria do indeed violate supercompactness.

2. A synthesis of LCs with maximality criteria makes no sense until LCs themselves are derived from some form of maximality of V in height and width.

3. I was postponing the discussion of the reduction of #-generation to ctm’s (countable transitive models) as long as possible as it is quite technical, but as you raised it again I’ll deal with it now. Recall that in the HP “thickenings” are dealt with via theories. So #-generation really means that for each Gödel lengthening L_\alpha(V) of V, the theory in L_\alpha(V) which expresses that V is generated by a presharp which is \alpha-iterable is consistent. Another way to say this is that for each \alpha, there is an \alpha-iterable presharp which generates V in a forcing extension of L(V) in which \alpha is made countable. For ctm’s this translates to: A ctm M is (weakly) #-generated if for each countable \alpha, M is generated by an \alpha-iterable presharp. This is weaker than the cleaner, original form of #-generation. With this change, one can run the LS argument and regard \textsf{IMH}^\# as a statement about ctm’s. In conclusion: You are right, we can’t apply LS to the raw version of \textsf{IMH}^\#, essentially because #-generation for a (real coding a) countable V is a \Sigma^1_3 property; but weak #-generation is \Pi^1_2 and this is the only change required.
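
Side by side, for a ctm M (just a restatement of the above):

\[
\begin{array}{ll}
\text{\#-generation:} & M \text{ is generated by a single iterable presharp} \quad (\Sigma^1_3 \text{ in a real coding } M), \\
\text{weak \#-generation:} & \text{for each countable } \alpha, \ M \text{ is generated by some } \alpha\text{-iterable presharp} \quad (\Pi^1_2 \text{ in a real coding } M).
\end{array}
\]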

But again, there is no need in the HP to make the move to ctm’s at all; one can always work with theories definable in Gödel lengthenings of V, making no mention of countability. Indeed it seems that the move to ctm’s has led to unfortunate misunderstandings, as I say to Peter above. That is why it is quite inappropriate, as you have done on numerous occasions, to refer to the HP as the study of ctm’s, as there is no need to consider ctm’s at all, and even if one does (by applying LS), the properties of ctm’s that result are very special indeed, far more special than what a full-blown theory of ctm’s would entail.

Thanks again for your comments,
Sy

Re: Paper and slides on indefiniteness of CH

Mr. Energy writes (two excerpts):

With co-authors I established the consistency of the following

Maximality Criterion. For each infinite cardinal \alpha, (\alpha^+)^{\text{HOD}} is less than \alpha^+.

Both Hugh and I feel that this Criterion violates the existence of certain large cardinals. If that is confirmed, then I will (tentatively) conclude that Maximality contradicts the existence of large cardinals. Hugh will conclude that there is something wrong with the above Maximality Criterion and it therefore should be rejected.

My point is that Hugh considers large cardinal existence to be part of set-theoretic truth. Why? I have yet to see an argument that large cardinal existence is needed for “good set theory”, so it does not follow from Type 1 evidence. That is why I think that large cardinal existence is part of Hugh’s personal theory of truth.

My guess is he’d also consider type 2 evidence (involving the relations of set theory to the rest of mathematics) if there were some ready to hand.

There is some ready to hand: At present, Type 2 evidence points towards Forcing Axioms, and these contradict CH and therefore contradict Ultimate L.

I have written dozens of e-mails to explain what I am doing and I take it as a good sign that I am still standing, having responded consistently to each point. If there is something genuinely new to be said, fine, I will respond to it, but as I see it now we have covered everything: The HP is simply a focused investigation of mathematical criteria for the maximality of V in height and width, with the aim of convergence towards an optimal such criterion. The success of the programme will be judged by the extent to which it achieves that goal. Interesting math has already come out of the programme and will continue to come out of it. I am glad that at least Hugh has offered a bit of encouragement to me to get to work on it.

This illustrates the pitfalls involved in trying to use an idiosyncratic propagandistic slogan like “HP” to refer to an unanalyzed philosophical conception with language like “intrinsic maximality of the set theoretic universe”. Just look at how treacherous this whole area of “philosophically motivated higher set theory” can be.

E.g., MA (Martin’s axiom), already under appropriate formulations, looks like some sort of “intrinsic maximality”, at least as clear as many things purported on this thread to exhibit some sort of “intrinsic maximality”, and already implies that CH is false. So have we now completely solved the CH negatively? If so, why? If not, why not? See what happens with an unanalyzed notion of “intrinsic maximality of the set theoretic universe”. Also MM (Martin’s maximum) is even stronger, and implies that 2^\omega = \omega_2. Also looks like “intrinsic maximality of the set theoretic universe”, at least before any convincing analysis of it, and so do we now know that 2^\omega = \omega_2 follows from the “intrinsic maximality of the set theoretic universe”?
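
To be concrete (standard facts, just as a reminder; \text{MA}_{\aleph_1} is Martin’s Axiom for meeting \aleph_1 many dense sets, presumably the “appropriate formulation” meant above):

\[
\text{MA}_{\aleph_1} \ \Rightarrow\ 2^{\aleph_0} > \aleph_1 \ \text{(so CH fails)}, \qquad
\text{MM} \ \Rightarrow\ 2^{\aleph_0} = \aleph_2.
\]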

I will now take an obvious step toward turning at least some of this very unsatisfying stuff into something completely unproblematic – without the idiosyncratic propagandistic slogans – AND something (hopefully) not needing countable transitive models for straightforward formulations.

Ready? Here is the narrative.

1. We want to explore the idea that

L is a tiny part of V.
L is very different from V.

We also want to explore the idea that

HOD is a tiny part of V.
HOD is very different from V.

Here HOD = hereditarily ordinal definable sets. Myhill/Scott proved that HOD satisfies ZFC, following semiformal remarks of Gödel.

2. There are some interesting arguments that one can give for L being a tiny part of V. These arguments themselves can be subjected to various kinds of scrutiny, and that is an interesting topic in its own right. But we shall, for the time being, take it for granted that we are starting off with “L is a tiny part of V”.

3. On the other hand, the arguments that HOD is a tiny part of V are, at least at the moment, fewer and much weaker. This reflects some important technical differences between L and HOD. E.g., L is very stable in the sense that L within L is L. However, HOD within HOD may not be HOD (that’s independent of ZFC).

4. Another related big difference between L and HOD is the following. You can prove that any formal extension of the set theoretic universe that is compatible with the set theoretic universe in a nice sense must violate V = L if the original set theoretic universe violates V = L. This is the kind of thing that adds to an arsenal of possible arguments that L is only a part or tiny part of V. However, the set theoretic universe demonstrably has a formal extension satisfying V = HOD even if the set theoretic universe does not satisfy V = HOD. This makes the idea that HOD is a tiny part of V a much more problematic “consequence” of “intrinsic maximality of the set theoretic universe”.

5. Yet another difference. Vopenka proved in ZFC that every set can be obtained by set forcing over HOD. That every set can be obtained by set forcing over L is known to be independent of ZFC, and in fact violates medium large cardinals (such as measurable cardinals and even 0^\#). The same is true for set forcing replaced by class forcing.

6. Incidentally, I think there is an open question that goes something like this. Let M be the minimum ctm of ZFC. There exists a ctm extension of M with the same ordinals that is not obtainable by class forcing over M – I think even under a very wide notion of class forcing. Still open?

7. Another way of talking about the problematic nature of V not equal to HOD as following from “intrinsic maximality” is that, well, maybe if there were more sets, we would be able to make more powerful definitions, putting certain sets into HOD that weren’t there “before”, and then close this off, making V = HOD. Thus this is an attempt to actually turn V = HOD itself into some sort of “intrinsic maximality”!!

8. So the proper move, until there is more creative analysis of “intrinsic maximality of the set theoretic universe” is to simply say, flat out:

we are going to explore the idea that HOD is a tiny part of V
we are going to explore the idea that HOD is very different from V

and avoid any idiosyncratic propagandistic slogans like “HP”.

9. So now let’s fast forward to the excerpt from Mr. Energy:

With co-authors I established the consistency of the following Maximality Criterion. For each infinite cardinal \alpha, (\alpha^+)^{\text{HOD}} is less than \alpha^+. Both Hugh and I feel that this Criterion violates the existence of certain large cardinals. If that is confirmed, then I will (tentatively) conclude that Maximality contradicts the existence of large cardinals. Hugh will conclude that there is something wrong with the above Maximality Criterion and it therefore should be rejected.

Here is a reasonable restatement without the idiosyncratic propaganda – propaganda that papers over all of the issues about HOD raised above.

NEW STATEMENT. With co-authors I (Mr. Energy) established the consistency of the following relative to the consistency of ???

(HOD very different from V). Every infinite set in HOD is the domain of a bijection onto another set in HOD without there being a bijection in HOD.

Furthermore, Hugh and I (Mr. Energy) feel that the above statement refutes the existence of certain kinds of large cardinal hypotheses. If this is confirmed, then it follows that “HOD is very different from V” is incompatible with certain kinds of large cardinal hypotheses.

10. Who can complain about that? Perhaps somebody on the list can clarify just which large cardinal hypotheses might be incompatible with the above statement?

11. Let’s now step back and reflect on this a bit in general terms to make more of it. What can we say about “HOD very different from V” in general terms?

HOD is an elementary substructure of V

is of course very strong. This is equivalent to saying that V = HOD.

But the above statement is an extremely strong refutation of elementary substructurehood.

THEOREM (?). The most severe/simplest possible violation of L being an elementary substructure of V is that “every infinite set in L is the domain of a bijection onto another set in L without there being a bijection in L”.

THEOREM (?). The most severe/simplest possible violation of HOD being an elementary substructure of V is that “every infinite set in HOD is the domain of a bijection onto another set in HOD without there being a bijection in HOD”.

THEOREM (???). The most severe/simplest possible violation of V not equaled to L is that “every infinite set in L is the domain of a bijection onto another set in L without there being a bijection in L”.

THEOREM (???). The most severe/simplest possible violation of V not equaled to HOD is that “every infinite set in HOD is the domain of a bijection onto another set in HOD without there being a bijection in HOD”.

Since this morning I am doing some real time foundations (of higher set theory), I should be allowed to state Theorems without knowing how to state them.

I also reserve the right to stop here.

I have written dozens of e-mails to explain what I am doing and I take it as a good sign that I am still standing, having responded consistently to each point. If there is something genuinely new to be said, fine, I will respond to it, but as I see it now we have covered everything: The HP is simply a focused investigation of mathematical criteria for the maximality of V in height and width, with the aim of convergence towards an optimal such criterion. The success of the programme will be judged by the extent to which it achieves that goal. Interesting math has already come out of the programme and will continue to come out of it. I am glad that at least Hugh has offered a bit of encouragement to me to get to work on it.

Of course, you have chosen to respond to much but not all of what everybody has written here, except me, invoking the “brother privilege”. Actually, I wonder if the “brother privilege” – that you do not have to respond to your brother in an open intellectual forum – is a consequence of the “intrinsic maximality of the set theoretic universe”?

If you are looking for “something genuinely new to say” then you can start with the dozens of emails I have put on this thread. Actually, you have covered very little by serious foundational standards.

On a mathematical note, you can start by talking about #-generation, what it means in generally understandable terms, why it is natural and/or important, and so forth. Why it is an appropriate vehicle for “fixing” IMH (if it is). It is absurd to think that a two-line description from weeks (or is it months?) ago is even remotely appropriate for a list of about 75 readers. Also, continually referring to type 1, type 2, type 3 set-theoretic themes without using real and short names is a totally unnecessary abuse of the readers of this list. People are generally not going to be keeping that in their heads – even if they have not been throwing your messages (and mine) into the trash. Are the numbers 1, 2, 3 canonically associated with those themes? Furthermore, your brief discussion of them was entirely superficial. There are crucial issues involved in just what the interaction of higher set theory is with mathematics that have hardly been discussed at all here, either by you or by others.

BOTTOM LINE ADVICE.

Change HP to CTMP = countable transitive model program. Cast headlines for statements in terms like “HOD is very different from V” or “HOD is a tiny part of V” or things like that. Avoid “intrinsic maximality of the set theoretic universe” unless you have something new to say that is philosophically compelling.

Harvey

Re: Paper and slides on indefiniteness of CH

Dear Sy,

Pen and Peter, can you please help here? Pen hit me very hard for developing what could be regarded as “Sy’s personal theory of truth” and it seems to me that we now have “Hugh’s personal theory of truth”, i.e., when Hugh develops a powerful piece of set theory he wants to declare it as “true” and wants us all to believe that. This goes far beyond Thin Realism, it goes to what Hugh calls a “conception of V” which far exceeds what you can read off from set-theoretic practice in its many different forms. Another example of this is Hugh’s claim that large cardinal existence is justified by large cardinal consistency; what notion of “truth” is this, if not “Hugh’s personal theory of truth”?

Pen’s Thin Realism provides a grounding for Type 1 truth. Mathematical practice outside of set theory provides a grounding for Type 2 truth. Our intuitions about the maximality of V in height and width provide a grounding for Type 3 truth. How is Hugh’s personal theory of truth grounded?

I’m pretty sure Hugh would disagree with what I’m about to say, which naturally gives me pause. With that understood, I confess that from where I sit as a relatively untutored observer, it looks as if the evidence Hugh is offering is overwhelmingly of your Type 1 (involving the mathematical virtues of the attendant set theory). My guess is he’d also consider type 2 evidence (involving the relations of set theory to the rest of mathematics) if there were some ready to hand. He has a ‘picture’ of what the set theoretic universe is like, a picture that guides his thinking, but he doesn’t expect the rest of us to share that picture and doesn’t appeal to it as a way of supporting his claims. If the mathematics goes this way rather than that, he’s quite ready to jettison a given picture and look for another. In fact, at times it seems he has several such pictures in play, interrelated by a complex system of implications (if this conjecture goes this way, the universe looks like this; if it goes that way, it looks like that…) But all this picturing is only heuristic, only an aid to thought — the evidence he cites is mathematical. And, yes, this is more or less how one would expect a good Thin Realist to behave (one more time: the Thin Realist also recognizes Type 2 evidence). (My apologies, Hugh. You must be thinking, with friends like these…)

The HP works quite differently. There the picture leads the way — the only legitimate evidence is Type 3. As we’ve determined over the months, in this case the picture involved has to be shared, so that it won’t degenerate into ‘Sy’s truth’. So far, to be honest, I’m still not clear on the HP picture, either in its height potentialist/width actualist form or its full multiverse form. Maybe Peter is doing better than I am on that.

All best,

Pen

Re: Paper and slides on indefiniteness of CH

That doesn’t answer the question: If you assert that we will know the truth value of CH, how do you account for the fact that we have many different forms of set-theoretic practice? Do you really think that one form (Ultimate L perhaps) will “have virtues that will swamp all the others”, as Pen suggested?

Look, as I have stated repeatedly I see the subject of the model theory of ctm’s as separate from the study of V (but this is not to say that theorems in the mathematical study of ctm’s cannot have significant consequences for the study of V). I see nothing wrong with this view or the view that the practice you cite is really in the subject of ctm’s, however it is presented.

??? My question has nothing to do with ctm’s! It has nothing to do with the HP either (which I repeat can proceed perfectly well without discussing ctm’s anyway). I was referring to the many different forms of set-theoretic practice which disagree with each other on basic questions like CH. How do you assign a truth value to CH in light of this fact?

For your second question: If the tests are passed, then yes, I do think that V = Ultimate-L will “swamp all the others”, but only in regard to a conception of V, not with regard to the mathematics of ctm’s. There are a number of conjectures already which I think would argue for this. But we shall see (hopefully sooner rather than later).

Here come the irrelevant ctm’s again. But you do say that V = Ultimate L will “swamp all the others”, so perhaps that is your answer to my question. Now do you really believe that? You suggested that Forcing Axioms can somehow be “part of the picture” even under V = Ultimate L, but that surely doesn’t mean that Forcing Axioms are false and Ultimate L is true.

Pen and Peter, can you please help here? Pen hit me very hard for developing what could be regarded as “Sy’s personal theory of truth” and it seems to me that we now have “Hugh’s personal theory of truth”, i.e., when Hugh develops a powerful piece of set theory he wants to declare it as “true” and wants us all to believe that. This goes far beyond Thin Realism, it goes to what Hugh calls a “conception of V” which far exceeds what you can read off from set-theoretic practice in its many different forms. Another example of this is Hugh’s claim that large cardinal existence is justified by large cardinal consistency; what notion of “truth” is this, if not “Hugh’s personal theory of truth”?

Pen’s Thin Realism provides a grounding for Type 1 truth. Mathematical practice outside of set theory provides a grounding for Type 2 truth. Our intuitions about the maximality of V in height and width provide a grounding for Type 3 truth. How is Hugh’s personal theory of truth grounded?

Look: There is a rich theory about the projective sets in the context of not-PD (you yourself have proved difficult theorems in this area). There are a number of questions which remain open about the projective sets in the context of not-PD which seem very interesting and extremely difficult. But this does not argue against PD. PD is true.

I want to know what you mean when you say “PD is true”. Is it true because you want it to be true? Is it true because ALL forms of good set theory imply PD? I have already challenged, in my view successfully, the claim that all sufficiently strong natural theories imply it; so what is the basis for saying that PD is true?

If the Ultimate-L Conjecture is false then for me it is “back to square one” and I have no idea about a resolution of CH.

I see no virtue in “back to square one” conjectures. In the HP the whole point is to put out maximality criteria and test them; it is foolish to make conjectures without doing the mathematics. Why should your programme be required to make “make or break” conjectures, and what is so attractive about that? As I understand the way Pen would put it, it all comes down to “good set theory” for your programme, and for that we need only see what comes out of your programme and not subject it to “death-defying” tests.

One more question at this point: Suppose that Jack had succeeded in proving in ZFC that 0^\# does not exist. Would you infer from this that V = L is true? On what grounds? Your V = Ultimate L programme (apologies if I misunderstand it) sounds very much like saying that Ultimate L is provably close to V so we might as well just take V = Ultimate L to be true. If I haven’t misunderstood then I find this very dubious indeed. As Pen would say, axioms which restrict set-existence are never a good idea.

Best,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Peter,

I think we should all be grateful to you for this eloquent description of how we gather evidence for new axioms based on the development of set theory. The first two examples (and possibly the third) that you present are beautiful cases of how a body of ideas converges on the formulation of a principle or principles with great explanatory power for topics which lie at the heart of the subject. Surely we have to congratulate those who have facilitated the results on determinacy and forcing axioms (and perhaps in time Hugh for his work on Ultimate L) for making this possible. Further, the examples mentioned meet your high standard for any such programme, which is that it “makes predictions which are later verified”.

I cannot imagine a more powerful statement of how Type 1 evidence for the truth of new axioms works, where again by “Type 1” I refer to set theory’s role as a field of mathematics and therefore by “Type 1 evidence” I mean evidence for the truth of a new axiom based on its importance for generating “good set theory”, in the sense that Pen has repeatedly emphasized.

But I do think that what you present is only part of the picture. Set theory is surely a field of mathematics that has its own key questions and as it evolves new ideas are introduced which clarify those questions. But surely other areas of mathematics share that feature, even if they are free of questions of independence; they can have analogous debates about which developments are most important for the field, just as in set theory. So what you describe could be analogously described in other areas of mathematics, where “predictions” are made about how certain approaches will lead to the solution of central open problems. Briefly put: In your description of programmes for set theory, you treat set theory in the same way as one would treat any field of mathematics.

But set theory is much more than that. Before I discuss this key point, let me interrupt myself with a brief reference to where this whole e-mail thread began, Sol’s comments about the indefiniteness of CH. As I have emphasized, there is no evidence that the pursuit of programmes like the ones you describe will agree on CH. Look at your 3 examples: The first has no opinion on CH, the second denies it and the third confirms it! I see set theory as a rich and developing subject, constantly transforming itself with new ideas, and as a result of that I think it unreasonable based on past and current evidence to think that CH will be decided by the Type 1 evidence that you describe. Pen’s suggestion that perhaps there will be a theory “whose virtues swamp the rest” is wishful thinking. Thus if we take only Type 1 evidence for the truth of new axioms into account (Sol rightly pointed out the misuse of the term “axiom” and Shelah rightly suggested the better term “semi-axiom”), we will not resolve CH and I expect that we won’t resolve much at all. Something more is needed if your goal is to say something about truth in set theory. (Of course it is fine to not have that goal, and only a handful of set-theorists have that goal.)

OK, back to the point that set theory is more than just a branch of mathematics. Set theory also has a role as a foundation for mathematics (Type 2). Can we really assume that Type 1 axioms like the ones you suggest in your three examples are the optimal ones for the role of set theory as a foundation? Do we really have a clear understanding of what axioms are optimal in this sense? I think it is clear that we do not.

The preliminary evidence would suggest that of the three examples you mention, the first and third are quite irrelevant to mathematics outside of set theory and the second (Forcing Axioms) is of great value to mathematics outside of set theory. Should we really ignore this in a discussion of set-theoretic truth? I mean set theory is a great branch of mathematics, rife with ideas, but can we really assert the “truth” of an axiom which serves set theory’s needs when other axioms that contradict it do a better job in providing other areas of mathematics what they need?

There is even more to the picture, beyond set theory as a branch of or a foundation for math. I am referring to its Type 3 role, as a study of the concept of set. There is widespread agreement that this concept entails the maximality of V in height and width. The challenge is to explain this feature in mathematical terms, the goal of the HP. There is no a priori reason whatsoever to assume that the mathematical consequences of maximality in this sense will conform to axioms which best serve the Type 1 or Type 2 needs of set theory (as a branch of or foundation for mathematics). Moreover, to pursue this programme requires a very different approach than what is familiar to the Type 1 set-theorist, perfectly described in your previous e-mail. I am asking you to please be open-minded about this, because the standards you set and the assumptions that you make when pursuing new axioms for “good set theory” do not apply when pursuing consequences of maximality in the HP. The HP is a very different kind of programme.

To illustrate this, let me begin with two quotes which illustrate the difference and set the tone for the HP:

I said to Hugh:

The basic problem with what you are saying is that you are letting set-theoretic practice dictate the investigation of set-theoretic truth!

In other words, my starting point is not what facilitates the “best set theory”, but what one can understand about maximality of V in height and width.

On a recent occasion, Hugh said to me:

[Yet] you propose to deduce the non existence of large cardinals at some level based on maximality considerations. I would do the reverse, revise maximality.

This second quote precisely indicates the difference in our points of view. The HP is intended to be an unbiased analysis of the maximality of V in height and width, grounded in our intuitions about this feature and limited by what is possible mathematically. These intuitions are indeed fairly robust, surely more so than our judgments about what is “good set theory”. I know of no persuasive argument that large cardinal existence (beyond what is compatible with V = L) follows from the maximality of V in height and width. Indeed in the literature authors such as Gödel had doubts about this, whereas they have felt that inaccessible cardinals are derivable from maximality in height.

So the only reasonable interpretation of Hugh’s comment is that he feels that LC existence is necessary for “good set theory” and that such Type 1 evidence should override any investigation of the maximality of V in height and width. Pen and I discussed this (in what seems like) ages ago in the terminology of “veto power” and I came to the conclusion that it should not be the intention of the HP to have its choice of criteria dictated by what is good for the practice of set theory as mathematics.

To repeat, the HP works like this: We have an intuition about maximality (of V in height and width) which we can test out with various criteria. It is a lengthy process by which we formulate, investigate and compare different criteria. Sometimes we “unify” or “synthesise” two criteria into one, resulting in a new criterion that based on our intuitions about maximality does a better job of expressing this feature than did the individual criteria which were unified. And sometimes our criteria conflict with reality, namely they are shown to be inconsistent in ZFC. Here are some examples:

Synthesis: The IMH is the most obvious criterion for expressing the maximality of V in width. #-generation is the strongest criterion for expressing the maximality of V in height. If we unify these we get IMH#, which is consistent but behaves differently than either the IMH alone or #-generation alone. Our intuition says that the IMH# better expresses maximality than either the IMH alone or #-generation alone.

Inconsistency (examples with HOD): We can consistently assert the maximality principle V \neq \text{HOD}. A natural strengthening is that (\alpha^+)^{\text{HOD}} < \alpha^+ for all infinite cardinals \alpha. Still consistent. But then we go to the further natural strengthening (\alpha^+)^{\text{HOD}_x} < \alpha^+ for all subsets x of \alpha (for all infinite cardinals \alpha). This is inconsistent. So we back off to the latter but only for \alpha of cofinality \omega. Now it is consistent for many such \alpha, not yet known to be consistent for all such \alpha. We continue to explore the limits of maximality in this way, in light of what is consistent with ZFC. A similar issue arises with the statement that \alpha is inaccessible in HOD for all infinite regular \alpha, which is not yet known to be consistent (my belief is that it is).
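
Laid out as a list (just a summary of the criteria and their status as described above):

\[
\begin{array}{ll}
V \neq \text{HOD} & \text{consistent} \\
(\alpha^+)^{\text{HOD}} < \alpha^+ \text{ for all infinite cardinals } \alpha & \text{consistent} \\
(\alpha^+)^{\text{HOD}_x} < \alpha^+ \text{ for all infinite } \alpha \text{ and all } x \subseteq \alpha & \text{inconsistent} \\
(\alpha^+)^{\text{HOD}_x} < \alpha^+ \text{ for all } x \subseteq \alpha, \text{ for } \alpha \text{ of cofinality } \omega & \text{consistent for many such } \alpha; \text{ open in general} \\
\alpha \text{ inaccessible in HOD for all infinite regular } \alpha & \text{consistency open}
\end{array}
\]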

The process continues. There is a constant interplay between criteria suggested by our maximality intuitions and the mathematics behind these criteria. Obviously we have to modify what we are doing as we learn more of the mathematics. Indeed, as you pointed out in your more recent e-mail, there are maximality criteria which contradict ZFC; this has been obvious for a long time, in light of Vopenka’s theorem.

It may be too much to ask that your program at this stage make such predictions. But I hope that it aspires to that. For if it does not then, as I mentioned earlier, one has the suspicion that it is infinitely revisable and “not even wrong”.

Once again, the aim of the programme is to understand the consequences of the maximality of V in height and width. Your criterion of “making predictions” may be fine for your Type 1 programmes, which are grounded by nothing more than “good set theory”, but it is not appropriate for the HP. That is because the HP is grounded by an intrinsic feature of the set-concept, maximality, which will take a long time to understand. I see no basis for your suggestion that the programme is “infinitely revisable”, it simply requires a huge amount of mathematics to carry out. Already the synthesis of the IMH with #-generation is considerable progress, although to get a deeper understanding we’ll definitely have to deal with the \textsf{SIMH}^\# and HOD-maximality.

If you insist on a “prediction” the best I can do is to say that the way things look now, at this very preliminary stage of the programme, I would guess that both not-CH and the nonexistence of supercompacts will come out. But that can’t be more than a guess at this point.

Now I ask you this: Suppose we have two Type 1 axioms, like the ones in your examples. Suppose that one is better than the other for Type 2 reasons, i.e., is more effective for mathematics outside of set theory. Does that tip the balance between those two Type 1 axioms in terms of which is closer to the truth? And I ask the same question for Type 3: Could you imagine joining forces and giving priority to axioms that both serve the needs of set theory as mathematics and are derivable from the maximality of V in height and width?

One additional worry is the vagueness of the idea of the “‘maximal’ iterative conception of set”. If there were a lot of convergence in what was being mined from this concept then one might think that it was clear after all. But I have not seen a lot of convergence. Moreover, while you first claimed to be getting “intrinsic justifications” (an epistemically secure sort of thing) now you are claiming to arrive only at “intrinsic heuristics” (a rather loose sort of thing). To be sure, a vague notion can lead to motivations that lead to a great deal of wonderful and intriguing mathematics. And this has clearly happened in your work. But to get more than interesting mathematical results — to make a case for new axioms — at some stage one will have to do more than generate suggestions — one will have to start producing propositions which if proved would support the program and if refuted would weaken the program.

I imagine you agree and that that is the position that you ultimately want to be in.

No, the demands you want to make of a programme are appropriate for finding the right axioms for “good set theory” but not for an analysis of the maximality of V in height and width. For the latter it is more than sufficient to analyse the natural candidates for maximality criteria provided by our intuitions and achieve a synthesis. I predict that this will happen with striking consequences, but those consequences cannot be predicted without a lot of hard work.

Thanks,
Sy

PS: The above also addresses your more recent mail: I don’t reject a form of maximality just because it contradicts supercompacts (because I don’t see how supercompact existence is derivable from any form of maximality) and I don’t see any problem with rejecting maximality principles that contradict ZFC, simply because by convention ZFC is taken in the HP as the standard theory.

PPS: A somewhat weird but possibly interesting investigation would indeed be to drop the ZFC convention and examine criteria for the maximality of V in height and width over a weaker theory.

Re: Paper and slides on indefiniteness of CH

Dear Pen,

On Mon, 20 Oct 2014, Penelope Maddy wrote:

It seems to me disingenuous to suggest that resolving CH, and devising a full account of sets of reals more generally, is not one of the goals of set theory — indeed a contemporary goal with strong roots in the history of the subject.

Good luck selling that to the ST public. This is interesting to you, me and many others in this thread, but very few set-theorists think it’s worth spending much time on it, let’s not deceive ourselves. They are focused on “real set theory”, mathematical developments, and don’t take these philosophical discussions very seriously. …  Resolving CH was certainly never my goal; I got into the HP to better understand large cardinals and internal consistency, with no particular focus on CH. … It would be interesting to ask other set-theorists (not Hugh or I) what the goals of set theory are; I think you might be very surprised by what you hear, and also surprised by your failure to hear “solve CH”.

The goal I mentioned was resolving CH as part of a full theory of sets of reals more generally.  I said ‘resolving’ to leave open the possibility that the ‘resolution’ will be an understanding of why CH doesn’t have a determinate truth value, after all (e.g., a multiverse resolution).

I’m not sure I understand what you mean by “a full theory of sets of reals”, but I presume you mean a theory with “practical completeness”, meaning that it resolves all of the interesting questions about sets of reals? You seem to imply that we already have such a theory for sets of integers; I am not even convinced of that!

But to turn to your second comment above: We already know why CH doesn’t have a determinate truth value, it is because there are and always will be axioms which generate good set theory which imply CH and others which imply not-CH. Isn’t this clear when one looks at what’s been going on in set theory? (Confession: I have to credit this e-mail discussion for helping me reach that conclusion; recall that I started by telling Sol that the HP might give a definitive refutation of CH! You told me that it’s OK to change my mind as long as I admit it, and I admit it now!)

So the best one can do with a problem like CH is to say: “Based on a certain Type of evidence, the truth value of CH is such and such.” As said above, Type 1 evidence (the development of set theory as an area of mathematics) will never yield a fixed truth value, we don’t know yet about Type 2 evidence (ST as a foundation) and I still conjecture that Type 3 evidence (based on the Maximality of the universe of sets in height and width) will imply that CH is false.

It’s not a matter of how many people are actively engaged in the project: there might be lots of perfectly good reasons why most set theorists aren’t (because there are other exciting new projects and goals, because CH has been around for a long time and looks extremely hard to crack, etc.).  I would ask you this:   is CH one of the leading open questions of set theory?

No! The main reason is that, as Sol has pointed out, it is not a mathematical problem but a logical one. The leading open questions of set theory are mathematical.

Is it the sort of thing that would draw great acclaim if someone were to come up with a widely persuasive ‘resolution’?

There will never be such a resolution of CH (for the reasons I gave above). The best one can do is to give a widely persuasive argument that CH (or not-CH) is needed for the foundations of mathematics or that CH (or not-CH) follows from the Maximality of the set-concept. But I would not expect either achievement to draw great acclaim, as nearly all set-theorists care only about the mathematical development of set theory and CH is not a mathematical problem.

This whole discussion about CH is of interest only to philosophers and a handful of philosophically-minded mathematicians. To find the leading open questions in set theory, one has to instead stay closer to what set-theorists are doing. For example: Provably in ZFC, is V generic over an inner model which satisfies GCH?

Best,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Sy,

If we are talking about ST in terms of its role as a foundation for or subfield of mathematics (Types 1 and 2), then we needn’t trouble ourselves with this discussion of universes for ST and can hang our hats on what axioms of set theory are advantageous for the development of set theory and mathematics, as was done with AC and the Axiom of Infinity, for example.

Thanks for this clarification.  If all we care about is set theory as a branch of mathematics and set theory as it relates to the rest of mathematics, then we can stick with our familiar iterative picture of V and rely on extrinsic justifications of the familiar sort (unless the extrinsic evidence eventually leads us to prefer some sort of multiverse, in which case we’d shift to a new picture).  It’s only when we’re interested in further exploration of ‘the maximality of the set-concept’ that we need to engage in the HP (or the MP).

All best,
Pen

Re: Paper and slides on indefiniteness of CH

Dear Pen,

It seems to me disingenuous to suggest that resolving CH, and devising a full account of sets of reals more generally, is not one of the goals of set theory — indeed a contemporary goal with strong roots in the history of the subject.

Good luck selling that to the ST public. This is interesting to you, me and many others in this thread, but very few set-theorists think it’s worth spending much time on it; let’s not deceive ourselves. They are focused on “real set theory”, mathematical developments, and don’t take these philosophical discussions very seriously.

Surely doing serious set-theoretic mathematics with the hope of resolving CH isn’t a mere ‘philosophical discussion’!

I agree, and I did not mean to imply that the discussion was only philosophical. But my belief is that there are at most 3 or 4 set-theorists actually engaged in the attempt to resolve CH. Resolving CH was certainly never my goal; I got into the HP to better understand large cardinals and internal consistency, with no particular focus on CH. But as this thread began with Sol’s paper on CH, I have been naturally talking about what the HP could offer to that problem. (In any case you already know my views on CH: there will never be a Type 1 solution; we don’t know if there will be a Type 2 solution; and I expect a Type 3 refutation.) But if CH motivates Hugh to do good set theory then that is valuable. The motivation for the HP is much broader than the continuum problem.

In any case, for the record, only the foundational goal figured in my case for the methodological principles of maximize and unify. The goal of resolving CH was included to illustrate that I wasn’t at all claiming that this is the only goal of set theory. Your further examples will serve that purpose just as well:

The goals I’m aware of that ST-ists seem to really care about are much more mathematical and specific, such as a thorough understanding of what can be done with the forcing method.

It would be interesting to ask other set-theorists (not Hugh or me) what the goals of set theory are; I think you might be very surprised by what you hear, and also surprised by your failure to hear “solve CH”.

Best,
Sy

Re: Paper and slides on indefiniteness of CH

Dear Sy,

For present purposes, what matters is that set theory has, as one of its goals, the kind of thing Zermelo identifies. This is part of the goal of providing the sort of foundation that Claudio and I were talking about (a kind of certification and a shared arena).

I interpreted the Zermelo quote to mean that ST’s task is to provide a useful foundation for mathematics through a mathematical clarification of ‘number’, ‘order’ and ‘function’. Is that correct? This goal is then Type 2, i.e. concerned with ST’s role as a foundation for mathematics.

Yes, in your classification (if I’m remembering it correctly), this would be a Type 2 goal, that is, a goal having to do with the relations of set theory to the rest of mathematics. (My recollection is that a Type 1 goal is a goal within set theory itself, as a branch of mathematics, and Type 3 is the goal of spelling out the concept of set, regardless of its relations to mathematics of either sort, as a matter of pure philosophy.)

I don’t see that its being Type 2 in any way disqualifies it as a goal of set theory, with attendant methodological consequences. It’s true that set theory has been so successful in this role and is now so entrenched that it’s become nearly invisible, and neither set theorists nor mathematicians generally give it much thought anymore, but it was explicit early on and it remains in force today (as that recent quotation from Voevodsky indicates).

I think it’s fair to say that contemporary set theory also has the goal of resolving CH somehow.

No, Type 1 considerations (ST as a branch of math) are not concerned with resolving CH, that is just something that a handful of set-theorists talk about. The rest are busy developing set theory, independent of philosophical concerns. Both Hugh and I do lots of ST for the sake of the development of ST, without thinking about this philosophical stuff. Philosophers naturally only see a small fraction of what is going on in ST, for the simple reason that 90% of what’s going on does not appear to have much philosophical significance (e.g. forcing axioms).

It seems to me disingenuous to suggest that resolving CH, and devising a full account of sets of reals more generally, is not one of the goals of set theory — indeed a contemporary goal with strong roots in the history of the subject. To say this is in no sense to deny that you and Hugh and other set theorists have many other goals besides. (Incidentally, I don’t see why you think forcing axioms are of no interest to philosophers, but let that pass.)

There are others.

Such as? I think that just as the judgments about “good” or “deep” ST must be left to the set-theorists, perhaps with a little help from the philosophers, so must judgments about “the goals of set theory”.

I haven’t attempted to list other goals because, as a philosopher, I’m not well-placed to do so (as you point out).

All best,
Pen