Tag Archives: CTMs

Re: Paper and slides on indefiniteness of CH

Dear Sy,

A. The principles in the hierarchy IMH(Inaccessible), IMH(Mahlo), IMH(the Erdos cardinal \kappa_\omega exists), etc. up to \textsf{IMH}^\# must be regarded as ad hoc unless one can explain the restriction to models that satisfy Inaccessibles, Mahlos, \kappa_\omega, etc. and #-generation, respectively.
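
For readers following the thread without the background, here is the rough shape of these principles, as I understand the formulations in the HP papers (a paraphrase, not the official statement):

\textsf{IMH}: if a first-order sentence \varphi holds in an inner model of an outer model of M (an outer model being a universe containing M with the same ordinals as M), then \varphi already holds in an inner model of M.

\textsf{IMH}(\Phi): the same scheme, with M and the outer models considered restricted to those satisfying \Phi (e.g. “there is an inaccessible cardinal”).

\textsf{IMH}^\#: the same scheme, with M and the outer models restricted to the #-generated ones.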

One of my points was that #-generation is ad hoc (for many reasons,
one being that you use the # to get everything below it and then you
ignore the #). There has not been a discussion of the case for
#-generation in this thread. It would be good if you could give an
account of it and make a case for it on the basis of “length
maximality”. In particular, it would be good if you could explain how
it is a form of “reflection” that reaches the Erdos cardinal
\kappa_\omega.
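
For reference, the rough form of #-generation, hedged since I am recalling it from the HP papers: a countable transitive M is #-generated if there is an iterable “presharp” (N, U), with U a measure-like predicate on N, whose iteration (N_i, U_i) has critical points \kappa_0 < \kappa_1 < \dots cofinal in the ordinals of M and generates M as the union of the levels V_{\kappa_i}^{N_i}. So M is built from below by the iterated images of a single sharp, and the question on the table is whether this can be presented as a form of “reflection” that reaches the Erdos cardinal \kappa_\omega.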

B. It is true that we now know (after Hugh’s consistency proof of
\textsf{IMH}^\#) that \textsf{IMH}^\#(\omega_1) is stronger than \textsf{IMH}^\# in the sense that the large cardinals required to obtain its consistency are stronger. But in contrast to \textsf{IMH}^\# it has the drawback that it is not consistent with all large cardinals. Indeed it implies that there is a real x such that \omega_1=\omega_1^{L[x]} and (in your letter about Max) you have already rejected any principle with that implication. So I am not sure why you are bringing it up.

(The case of \textsf{IMH}^\#\text{(card-arith)} is more interesting. It has a fighting chance, by your lights. But it is reasonable to conjecture (as Hugh did) that it implies GCH, and if that conjecture is true then there is a real x such that \omega_1=\omega_1^{L[x]}; should that turn out to be true, you would reject it.)

2. What I called “Magidor’s embedding reflection” in fact appears in a paper by Victoria Marshall (JSL 54, No. 2). As it violates V = L it is not a form of height maximality (the problem is with the internal embeddings involved; if the embeddings are external then one gets a weak form of #-generation). Indeed Marshall Reflection does not appear to be a form of maximality in height or width at all.

No, Magidor Embedding Reflection appears in Magidor 1971, well before Marshall 1989. [Marshall discusses Kanamori and Magidor 1977, which contains an account of Magidor 1971.]

You say: “I don’t think that any arguments based on the vague notion of “maximality” provide us with much in the way of justification”. Wow! So you don’t think that inaccessibles are justified on the basis of reflection! Sounds like you’ve been talking to the radical Pen Maddy, who doesn’t believe in any form of intrinsic justification.

My comment was about the loose notion of “maximality” as you use it, not about “reflection”. You already know what I think about “reflection”.

3. Here’s the most remarkable part of your message. You say:

Different people have different views of what “good set theory” amounts to. There’s little intersubjective agreement. In my view, such a vague notion has no place in a foundational enterprise.

In this thread I have repeatedly and without objection taken Pen’s Thin Realism to be grounded on “good set theory” (or, looking beyond set theory, on “good mathematics”). So you have now rejected not only the HP, but also Thin Realism. My view is that Pen got it exactly right when it comes to evidence from the practice of set theory; one must only acknowledge that such evidence is limited by the lack of consensus on what “good set theory” means.

You are right to say that there is value to “predictions” and “verifications”. But these only serve to make a “good set theory” better. They don’t really change much, as even if a brand of “good set theory” fails to fulfill one of its “predictions”, it can still maintain its high status. Take the example of Forcing Axioms: They have been and always will be regarded as “good set theory”, even if the “prediction” that you attribute to them fails to materialise.

Peter, your unhesitating rejection of approaches to set-theoretic truth is not helpful. You faulted the HP for not being “well-grounded” as its grounding leans on a consensus regarding the “maximality of V in height and width”. Now you fault Thin Realism (TR) for not being “well-grounded” as its grounding leans on “good set theory”. There is an analogy between TR and the HP: Like Pen’s second philosopher, Max (the Maximality Man) is fascinated by the idea of maximality of V in height and width and he “sets out to discover what the world of maximality is like, the range of what there is to the notion and its various properties and behaviours”. In light of this analogy, it is reasonable that someone who likes Thin Realism would also like the HP and vice versa. It seems that you reject both, yet fail to provide a good substitute. How can we possibly make progress in our understanding of set-theoretic truth with such skepticism? What I hear from Pen and Hugh is a “wait and see” attitude: they want to know what criteria and consensus come out of the HP. Yet you want to reject the approach out of hand. I don’t get it. Are you a pessimist at heart?

No, I am an unrepentant optimist. (More below.)

It seems to me that in light of your rejection of both TR and HP, the natural way for you to go is “radical skepticism”, which denies this whole discussion of set-theoretic truth in the first place. (Pen claimed to be a radical skeptic, but I don’t entirely believe it, as she does offer us Thin Realism.) Maybe Pen’s Arealism is your real cup of tea?

A. I don’t see how you got from my statement

Different people have different views of what “good set theory” amounts to. There’s little intersubjective agreement. In my view, such a vague notion has no place in a foundational enterprise.

to conclusions about my views on realism and truth (e.g., my “[rejection] … of Thin Realism” and my “unhesitating rejection of approaches to set-theoretic truth”)!

Let’s look at the rest of the passage:

“The key notion is evidence, evidence of a form that people can agree on. That is the virtue of actually making a prediction for which there is agreement (not necessarily universal — there are few things beyond the law of identity that everyone agrees on — but which is widespread) that if it is proved it will strengthen the case and if it is refuted it will weaken the case.”

I said nothing about realism or about truth. I said something only about the epistemic notion that is at play in a case (of the kind you call Type-1) for new axioms, namely, that it is not the notion of “good set theory” (a highly subjective, personal notion, where there is little agreement) but rather the notion of evidence (of a sort where there is agreement).

B. I was criticizing the employment of the notion of “good set theory” as you use it, not as Pen uses it.

As you use it Jensen’s work on V = L is “good set theory” and the work on ZF+AD is “good set theory” (in fact, both are cases of “great set theory”). On that we can all agree. (One can multiply examples:
Barwise on KP, etc.) But that has nothing to do with whether we should accept V = L or ZF+AD.

As Pen uses it, it involves evidence in the straightforward sense that I have been talking about. (Actually, as far as I can tell she doesn’t use the phrase in her work. E.g. it does not occur in her most detailed book on the subject, “Second Philosophy”. She has simply used it in this thread as a catch phrase for something she describes in more detail, something involving evidence.) Moreover, as paradigm examples of evidence she cites precisely the examples that John, Tony, Hugh, and I have given.

In summary, I was saying nothing about realism or truth; I was saying something about epistemology. I was saying: The notion of “good set theory” (as you use it) has no epistemic role to play in a case for new axioms. But the notion of evidence does.

So when you talk about Type 1 evidence, you shouldn’t be talking about “practice” and “good set theory”. The key epistemic notion is rather evidence of the kind that has been put forward, e.g. the kind that involves sustained prediction and confirmation.

[I don't pretend that the notion of evidence in mathematics (and especially in this region, where independence reigns) is a straightforward matter. The explication of this notion is one of the main things I have worked on. I talked about it in my tutorial when we were at Chiemsee. You already have the slides but I am attaching them here in case anyone else is interested. They contain both an outline of the epistemic framework and the case for \text{AD}^{L(\mathbb R)} in the context of this framework. A more detailed account is in the book I have been writing (for several years now...)]

[C. Aside: Although I said nothing about realism, since you attributed views on the matter to me, let me say something briefly: It is probably the most delicate notion in philosophy. I do not have a settled view. But I am certainly not a Robust Realist (as characterized by Pen) or a Super-Realist (as characterized by Tait), since each leads to what Tait calls "an alienation of truth from proof." The view I have defended (in "Truth in Mathematics: The Question of Pluralism") has much more in common with Thin Realism.]

So I was too honest; I should not have admitted to a radical form of multiversism (radical potentialism), as it is then easy to misunderstand the HP as you have. As for the choice of maximality criteria, I can only repeat myself: Please be open-minded and do not prejudge the programme before seeing the criteria that it generates. You will see that our intuitions about maximality criteria are more robust than our intuitions about “good set theory”.

I have been focusing on CTM-Space because (a) you said quite clearly that you were a radical potentialist and (b) the principles you have put forward are formulated in that context. But now you have changed your program yet again. There have been a lot of changes.
(1) The original claim

I conjecture that CH is false as a consequence of my Strong Inner Model Hypothesis (i.e. Levy absoluteness with “cardinal-absolute parameters” for cardinal-preserving extensions) or one of its variants which is compatible with large cardinals existence. (Aug. 12)

has been updated to

With apologies to all, I want to say that I find this focus on CH to be exaggerated. I think it is hopeless to come to any kind of resolution of this problem, whereas I think there may be a much better chance with other axioms of set theory such as PD and large cardinals. (Oct. 25)

(2) The (strong) notion of “intrinsic justification” has been replaced by the (weak) notion of “intrinsic heuristic”.

(3) Now, the background picture of “radical potentialism” has been
replaced by “width-actualism + height potentialism”.

(4) Now, as a consequence of (3), the old principles \textsf{IMH}^\#, \textsf{IMH}^\#(\omega_1), \textsf{IMH}^\#\text{(card-arith)}, \textsf{SIMH}, \textsf{SIMH}^\#, etc. have been replaced by New-\textsf{IMH}^\#, New-\textsf{IMH}^\#(\omega_1), etc.

Am I optimistic about this program, program X? Well, it depends on
what X is. I have just been trying to understand X.

Now, in light of the changes (3) and (4), X has changed and we have to start over. We have a new philosophical picture and a whole new collection of mathematical principles. The first question is obviously: Are these new principles even consistent?

I am certainly optimistic about this: If under the scrutiny of people like Hugh and Pen you keep updating X, then X will get clearer and more tenable.

That, I think, is one of the great advantages of this forum. I doubt that a program has ever received such rapid philosophical and mathematical scrutiny. It would be good to do this for other programs, like the Ultimate-L program. (We have given that program some scrutiny. So far, there has been no need for mathematical updating — there has been no need to modify the Ultimate-L Conjecture or the HOD Conjecture.)

Best,
Peter

Chiemsee_1 Chiemsee_2

Re: Paper and slides on indefiniteness of CH

Dear Sy,

My point is that the non-rigidity of HOD is a natural extrapolation of ZFC large cardinals into a new realm of strength.  I only reject it now because of the Ultimate-L Conjecture and its implication of the HOD Conjecture. It would be interesting to have an independent line which argues for the non-rigidity of HOD. This is the only reason I ask.

Please don’t confuse two things: I conjectured the rigidity of the Stable Core for purely mathematical reasons. I don’t see it as part of the HP. Indeed, I don’t see a clear argument that the nonrigidity of inner models follows from some form of maximality.

It would be nice to see one such reason (other than the non-V-constructible one).

You seem to feel strongly that maximality entails some form of V is far from HOD. It would seem a natural corollary of this to conjecture that the HOD Conjecture is false, unless there is a compelling reason otherwise. If the HOD Conjecture is false then the most natural explanation would be the non-rigidity of HOD but of course there could be any number of other reasons.

In brief: HP considerations would seem to predict/suggest the failure of the HOD Conjecture. But you do not take this step. This is mysterious to me.

I am eager to see a well-grounded argument for the HOD Conjecture which is independent of the Ultimate-L scenario.

Why am I so eager?  It would “break the symmetry” and for me anyway argue more strongly for the HOD Conjecture.

But I did answer your question by stating how I see things developing, what my conception of V would be, and the tests that need to be passed. You were not happy with the answer. I guess I have nothing else to add at this point since I am focused on a rather specific scenario.

That doesn’t answer the question: If you assert that we will know the truth value of CH, how do you account for the fact that we have many different forms of set-theoretic practice? Do you really think that one form (Ultimate-L perhaps) will “have virtues that will swamp all the others”, as Pen suggested?

Look, as I have stated repeatedly I see the subject of the model theory of ctm’s as separate from the study of V (but this is not to say that theorems in the mathematical study of ctm’s cannot have significant consequences for the study of V). I see nothing wrong with this view or the view that the practice you cite is really in the subject of ctm’s, however it is presented.

For your second question: if the tests are passed, then yes, I do think that V = Ultimate L will “swamp all the others”, but only in regard to a conception of V, not with regard to the mathematics of ctm’s. There are a number of conjectures already which I think would argue for this. But we shall see (hopefully sooner rather than later).

Look: There is a rich theory about the projective sets in the context of not-PD (you yourself have proved difficult theorems in this area). There are a number of questions which remain open about the projective sets in the context of not-PD which seem very interesting and extremely difficult. But this does not argue against PD. PD is true.

Sample current open question: Suppose every projective set is Lebesgue measurable and has the property of Baire. Suppose every light-face projective set has a light-face projective uniformization. Does this imply PD? (Drop light-face and the implication is false, by theorems of mine and Steel; replace projective by hyperprojective and the implication holds even without the light-face restriction, by a theorem of mine.)

If the Ultimate L Conjecture is false then for me it is “back to square one” and I have no idea about a resolution of CH.

Regards,
Hugh

Re: Paper and slides on indefiniteness of CH

Dear HP-ers, HP-worriers, and friends,

In this thread (which I confess has been moving pretty quickly for me; I’ve read it all but do apologise if I’m revisiting some old ground) we’ve seen that the key claim is that there is a deep relationship between countable transitive models and some V, either real, ideal, or situated within a multiverse. I have a few general worries on this that, if assuaged, will help me better appreciate the view.

I’m going to speak in “Universey” terms, just because it’s the easiest way for me to speak. Indeed, when I first heard the HP material, it occurred to me that this looked like an epistemological methodology for a Universist: we’re using the collection of all ctms as a structure to find out information (even probabilistic) about V more widely. If substantive issues turn on this way of speaking, let me know and I’ll understand better.

Let’s first note that in the wake of independence, it’s going to be a pretty hard-line Universist (read “nutty Universist”) who asserts that we shouldn’t be studying truth across models in order to understand V better. Indeed the model theory gives us a fascinating insight into the way sets behave and ways in which V might be. However, it’s then essential to the HPers’ position that it is the “truth across ctms” approach that tells us best about V, rather than “truth across models” more generally. I see at least two ways this might be established:

A. Ctms (and the totality of them) are more easily understood than other kinds of model.

B. Ctms are a better guide to (first-order) truth than other kinds of model.

I worry that both A and B are false (something I came to worry about in the context of trying to use the HP for an absolutist).

A.1. It would be good if we could show two things to address the first question:

A.1.1. The Hyperuniverse is “tractable”, in the sense that we can refer to it easily using fairly weak resources.
A.1.2. The Hyperuniverse is in some sense “minimal”; we only have the models we need to study pictures of V. There’s no extraneous subject matter confusing things.

The natural way to assuage A.1.1. for someone who accepts something more than just first-order resources is to provide a categoricity proof for the hyperuniverse from fairly weak resources (we don’t want to go full second-order; it’s the very notion of arbitrary subset we’re trying to understand). I thought about doing this in ancestral logic, but this obviously won’t work; there are uncountably many members of the Hyperuniverse and the downward LST holds for ancestral logic. So, I don’t see how we’re able to refer to the hyperuniverse better than just models in general in studying ways V might be.
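
To spell out the obstruction (a sketch, assuming there is at least one ctm M of ZFC): enumerate the dense subsets of Cohen forcing lying in M as D_0, D_1, \dots and build a binary tree of conditions in which the two successors of any node at level n are incompatible and both meet D_n. The branches yield 2^{\aleph_0} pairwise distinct M-generic reals, and since each extension M[g] contains only countably many reals, they yield 2^{\aleph_0} distinct universes M[g] in H. So H has the cardinality of the continuum, and downward LST for ancestral logic then hands us a countable structure satisfying any ancestral-logic theory true of H, ruling out categoricity.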

(Of course, you might not care about categoricity; but lots of philosophers do, so it’s at least worth a look.)

Re: A.1.2: The Hyperuniverse is not minimal. For any complete, maximal truth set T of first-order sentences consistent with ZFC, there are many universes in H satisfying that truth set. So really, for studying “first-order pictures of V” there’s lots in there you don’t need.

So, I’d like to hear from the HPers the sense in which we can more easily access the elements of H. One often hears set theorists refer to ctms (and indeed Skolem hulls and the like) as “nice”, “manageable”, “tractable”. I confess that in light of the above I don’t really understand what is meant by this (unless it’s something trivial like guaranteeing the existence of generics in V). So, what is meant by this kind of talk? Is there anything philosophically or epistemically deep here?

On to B. Are ctms a better guide to truth in V than other kinds of model? Certainly on the Universist picture it seems like the answer should be no; various kinds of construction that are completely illegitimate over V are legitimate over ctms, e.g. \alpha-hyperclass forcing (assuming you don’t believe in hyperclasses, which you shouldn’t if you’re a Universist). Why should techniques of this kind produce models that look anything like a way V might be when V has no hyperclasses? Now maybe a potentialist has a response here, but I’m unsure how it would go. Sy’s potentialist seems to hold that it’s a kind of epistemic potentialism; we don’t know how high V is, so we should study pictures on which it has different heights. But given this, it still seems that hyperclasses are out; whatever height V turns out to have, there aren’t any hyperclasses. If one wants to look at pictures of V, maybe it’s better just to analyse the model theory more generally with standard transitive models and a ban on hyperclass forcing?

[A note: like Pen I have worries that one can't make sense of the hybrid view. The only hybrid I can make sense of is to be epistemically hyperuniversist and ontologically universist. I worry that my inability to see the 'real' potentialist picture here is affecting how I characterise the debate.]

Anyway, I’m sympathetic to the idea that I’ve missed a whole bunch of subtleties here. But I’d love to have these set to rights.

With Best Wishes,

Neil.

P.S. I’ve added my good friend Chris Scambler to the list who was interested in the discussion. I hope this is okay with everyone here.

P.P.S. If there are responses I’ll try to reply as quick as I can, but time is tight currently.

Re: Paper and slides on indefiniteness of CH

Dear Hugh (likely my last mail for a while, due to my California trip),

On Sun, 19 Oct 2014, W Hugh Woodin wrote:

More details: Take the IMH as an example. It is expressible in V-logic. And V-logic is first-order over the least admissible (Goedel-) lengthening of V (i.e. we go far enough in the L-hierarchy built over V until we get a model of KP). We apply LS to this admissible lengthening, that’s all.

This is of course fine for IMH. But this does not work at all for \textsf{SIMH}^\#. One really seems to need the hyperuniverse for that.

Details: \textsf{SIMH}^\# is not in general a first-order property of M in L(M), or even in L(N,U) where (N,U) witnesses that M is #-generated.

You are of course right and I have been suppressing the technical details required to deal with this to avoid complicating the discussion. The point is that with any property that refers to “thickenings” one must make use of V-logic, let’s call it M-logic to match your notation. Then what I have been suppressing is that #-generation is to be taken as the consistency in M-logic (extended with new axioms making the ordinals standard) of the obvious theory expressing the iterability of a presharp that generates M. LS can be applied because any presharp that embeds into an iterable presharp is iterable. Handling variants of the \textsf{IMH}^\# takes more work, but can be done. Of course the difficulty is that we are not dealing with actual objects but with the consistency of theories. I’ll write more about this when I get a chance.
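
Here is a rough sketch of the machinery, with the details hedged as above: M-logic is a Barwise-style infinitary logic with a constant \bar{a} for each a \in M and a constant \bar{M}, whose axioms force \bar{M} to denote a transitive set whose elements are exactly the denotations of the constants \bar{a}. Models of ZFC in this logic in which \bar{M} is interpreted correctly (together with the extra axioms keeping the ordinals standard) play the role of “thickenings” of M. The syntax and proof relation live in the least admissible set above M, i.e. Hyp(M) = L_\alpha(M) for the least \alpha with L_\alpha(M) \models KP, so consistency in M-logic is first-order over Hyp(M), and that is the structure to which LS is applied.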

Of course if one is happy to adopt the Hyperuniverse from the start (without the “reduction”) then these technical issues disappear.

(Fine, let’s call it HP and not CTMP.) HP seems now to be a one-principle program (\textsf{SIMH}^\#).

This is nonsense. One may bring in cardinal maximality, inner model reflection or various forms of unreachability as well. You have an \textsf{SIMH}^\# fixation.

Further progress seems to require at the very least, understanding the implications of \textsf{SIMH}^\#.

There could be progress with other criteria in the meantime.

As I said in my last email on this, it is impossible to have a mathematical discussion (now) of \textsf{SIMH}^\# since it has been formulated so that one cannot do anything with it without first solving a number of problems which look extremely difficult. And I am not even talking about the consistency problem of \textsf{SIMH}^\#.

Just to be clear: This is not a criticism of \textsf{SIMH}^\#; it is just saying that it is premature to have a mathematically oriented discussion of it, and therefore of HP.

I partly agree: The important issues now concern the formulation of the HP, not the details of the mathematical analysis of the maximality criteria.

So (and this is why I have repeated myself above), I do not yet draw the distinction you do on HP versus the study of ctm’s because there is not yet enough mathematical data for me to do this.

This is quite ridiculous. The study of ctm’s is obviously much, much broader than the specific types of questions relevant to the HP (maximality criteria).

Hopefully this situation will clarify as more data becomes available.

That is what I have been trying, without success until now, to say to you for several weeks already. The important task for now is to just recognise the legitimacy of the approach, not to evaluate the results of the programme, which will take time.

Have a productive week in California!

Thanks!

Concerning the future of HP, I will make a prediction: HP ends up with PD.

This has already almost happened a number of times (strong unreachability and the unreachability of V by HOD).

I think this very likely, plus a lot more!

Best,
Sy

Re: Paper and slides on indefiniteness of CH

On Sun, Oct 19, 2014 at 5:06 AM, Radek Honzik wrote: Dear all,

I will attempt to answer briefly the questions posted by Harvey. My view on HP is different from Sy’s, but I see HP as a legitimate foundational program.

The most accomplished contributors to this list seem to be doubtful.

Thanks for replying. Your reply doesn’t contain a mish-mash of hidden assumptions, not-so-hidden assumptions, question begging, ignoring criticisms, missed opportunities for joining issues, evasions, crude bragging about important busy activities, undefined terms, total lack of respect for the audience who does not keep up with specialist jargon, and a long list of other sins that make this thread an example of how not to do or even talk about foundations and philosophy.

I appreciate that you have for the most part avoided these sins.

0] At a fundamental level, what does “(intrinsic) maximality in set theory” mean in the first place?

Let me write IMST instead of “(intrinsic) maximality in set theory” for the sake of brevity.

I doubt IMST can mean more than “viewing sets as big as possible, without the use of considerations based on practice of set theory as the main incentive”.

I think you mean “viewing the set theoretic universe as inclusive as possible”?

Your sentence has an indirect construction that really is not necessary and slows down the reader.

“Intrinsic” is thus temporarily reduced to “non-extrinsic”; in view of the heavy philosophical discussions around this notion, I prefer to give it this more restrictive meaning. Note that “extrinsic”, unlike “intrinsic”, has a well-defined inter-subjective meaning. This leaves us with the word “big”; I guess that this is the primitive term, which cannot be defined by anything more simple — at least on the level of general discussion.

Again, I don’t think you want to use “big” here – I am suggesting something very similar – the set theoretic universe is as inclusive as possible. I regard this as only an informal starting point and the job is to systematically explore analyses of it.

Admittedly, this definition is far from informative. For me, HP is a way of explicating this definition in a mathematical framework, making its meaning more precise and, by the same token, less general. A discussion should be had about whether other approaches — which set out to get real mathematical results — retain more of the general meaning of the term IMST. No approach can retain all the meaning of IMST because it is by definition vague and subjective; thus HP should not be expected to do that.

But there is the real possibility of saying something generally understandable, surprising, and robust. I haven’t seen anything like that in CTMP (aka HP).

1] Why doesn’t HP carry the obvious name CTMP = countable transitive model program.

Because the program was formulated by Sy with the aim of having wider application than the study of ctm’s.

Since the name “hyperuniverse” specifically refers to the countable transitive models of ZFC, period, it amounts to nothing more than a propagandistic slogan designed to lure the listener into thinking that there is something profound going on having to do with the foundations of set theory. But since nothing yet has come out of this special study of ctms for the foundations of set theory, even propagandistic slogans about ctms are premature.

2] What does the choice of a countable transitive model have to do with “(intrinsic) maximality in set theory”?

Countable models are a way of explicating IMST. It is a technical convenience which allows us to use model-theoretic techniques, not available for higher cardinalities.

What you have here is an unanalyzed idea of “intrinsic maximality in set theory”, and before that is analyzed to any depth, you have the blanket assumption that countable transitive models are going to be the way you can formulate what is going to become the analysis of “intrinsic maximality in set theory”. The real agenda is a creative or novel analysis of “intrinsic maximality in set theory”, and BEFORE that is accomplished any “proof” that ctms will do by some sort of Löwenheim–Skolem argument is bogus. Of course, you can set up some idiosyncratic framework, pretend that you are going to make this your analysis of “intrinsic maximality in set theory”, and then cite Löwenheim–Skolem. But that is bogus. After you make some creative or novel analysis, and work through the problematic issues (inconsistencies and other non-robustness arising out of parameters, sets of sentences, etcetera), and have a framework that is credible and well argued, you can cite the Löwenheim–Skolem theorem – if it really does apply correctly – to say that any claims of a certain form are equivalent to claims of the form with ctms. That would make some sense, but I still would not advise it since the proper framework, if there is any, is not going to be based on ctms. Ctms would only be a convenience.

This is the serious conceptual error being made in endless emails by Sy trying to justify the use of ctms. An unjustified framework for treating “intrinsic maximality in set theory” is alluded to, and to the extent that it is precise, one quotes Löwenheim–Skolem to argue that one can wlog work with ctms. This is a very serious question-begging sin. The issue at hand is first to have a novel or creative and well argued and thought out framework for treating “intrinsic maximality in set theory”. AFTER THAT, one can talk about the convenience – but NOT the fundamental nature of – using ctms.

This reversal of proper order of ideas – putting the cart before the horse – is a major error in work in foundations and philosophy.

IN ADDITION, on the mathematical level, I quote from Hugh. This indicates that even in frameworks proposed beyond IMH, there is no Löwenheim–Skolem argument, and one is compelled to make the move that ctms are fundamental, rather than just a convenience. Here is the exchange:

Sy wrote:

More details: Take the IMH as an example. It is expressible in V-logic. And V-logic is first-order over the least admissible (Goedel-) lengthening of V (i.e. we go far enough in the L-hierarchy built over V until we get a model of KP). We apply LS to this admissible lengthening, that’s all.

Hugh wrote

This is of course fine for IMH. But this does not work at all for \textsf{SIMH}^\#. One really seems to need the hyperuniverse for that. Details: \textsf{SIMH}^\# is not in general a first-order property of M in L(M), or even in L(N,U) where (N,U) witnesses that M is #-generated.

MY COMMENT: So we may already be seeing that in some of these approaches being offered, one must buy into the fundamental appropriateness of ctms in the philosophy, and not just as an automatic freebie from the Löwenheim–Skolem theorem. FURTHERMORE, IMH, where the reduction to ctms makes sense through Löwenheim–Skolem, has not even been seriously analyzed as an explication of “intrinsic maximality in set theory” by serious foundational and philosophical standards. There is a large array of issues, including inconsistencies and non-robustness involving parameters and sets of sentences, and so forth.

Aside: I do not quite understand why the discussion rests so heavily on this issue: everyone seems to accept it readily when we talk about forcing (I know it can be eliminated in forcing, but the intuition — see Cohen’s book — comes from countable models). Would it make a difference if the models had cardinality \omega_1, or \omega and \omega_1, or should they be proper classes, etc.? Larger cardinalities would introduce technical problems which are inessential for the aims of HP.

The crucial issue can be raised as follows. Do we or do we not want to take the structure of ctms as somehow reflecting on the structure of the actual set theoretic universe?

I am interested in seeing what happens under both answers. What is totally unacceptable is to make the hidden assumption of “yes we do” while pretending “no we are not, because of the Löwenheim–Skolem theorem”. That is just bad foundations and philosophy.

I am going to explore what happens when we UNAPOLOGETICALLY say “we do”. No bogus Löwenheim–Skolem.

3] Which axioms of ZFC are motivated or associated with “(intrinsic) maximality in set theory”? And why? Which ones aren’t and why?

IMST by historical consensus includes at this moment ZFC. “Historical consensus” for me means that many people decided that the vague meaning of IMST extends to ZFC. I do think that this depends on time (take the example of AC). HP is a way to raise some new first-order sentences as candidates for this extension.

Then what is all this talk in this thread doubting whether AxC is supported by “intrinsic maximality of the set theoretic universe”?

4] What is your preferred precise formulation of IMH? E.g., is it in terms of countable models?

Yes.

5] What do you make of the fact that the IMH is inconsistent with even an inaccessible (if I remember correctly)? If this means that IMH needs to be redesigned, how does this reflect on whether CTMP = HP is really being properly motivated by “(intrinsic) maximality in set theory”?

I view the process of obtaining results in HP like an experiment in explicating the vague meaning of IMST. It is to be expected that some of the results will be surprising, and will require interpretation.

This is a good attitude. However, there is still not much that has come out, and it is still unclear whether this will change. So declaring it a “program” without having the right kind of ideas in hand, and coining a jargon name, is way premature.

6] What is the simplest essence of the ideas surrounding “fixing or redesigning IMH”? Please, in generally understandable terms here, so that people can get to the essence of the matter, and not have it clouded by technicalities.

It is a creative process: explicate IMST by principle P_1 — after some mathematical work, it outputs \varphi (such as P_1 = IMH, \varphi = no inaccessible). Then try P_2, etc. (P_2 can be a “redesigned”, or “modified”, version of P_1). Of course, one hopes that his/her understanding of set theory will be helpful in identifying P’s which have potential to output nice (good, deep) mathematics. It is essential that the principles P’s should be as practice-independent as possible (= intrinsic, in my reading); that is what makes the program foundational (again, in my more narrow sense).

Taking into account what you are saying, and the difficulties that Hugh has been pointing out about the post-IMH proposals, this does not have enough of the features of a legitimate foundational program at this stage. It has the features of a legitimate exploratory project, one that does not warrant a flowery name and pretentious philosophy. We don’t know if it is going to develop into a legitimate foundational program, which would justify flowery names and pretentious philosophy.

Harvey

Re: Paper and slides on indefiniteness of CH

Dear Pen and Harvey,

Sorry for not replying sooner to your questions.

I believe I should address some of the issues that I had left unanswered and maybe provide some further responses to the questions you’d asked. Wrt Harvey’s, I think that Radek has already given some persuasive answers, so I will for the time being concentrate on Pen’s questions.

Sy, please feel free at any point to add any remarks you may find useful to contribute to the discussion.

Pen, in a sense you’re right, the hyperuniverse “lives” within V (I’d rather say that it “originates from” V) and my multiverser surely has a notion of V, as does anybody else working with ZFC. Moreover, members of H are thought to be strongly related to V also in another way, through the satisfaction of principles which are, originally, assumed to refer to the universe. By setting up the hyperuniverse concept and framework, however, she stipulates that questions of truth about V be dealt with within the hyperuniverse itself. Again, this doesn’t imply that whatever is taken to hold across portions of the hyperuniverse is then referred back to the universe.

I’m not sure that Sy entirely agrees with me on this point, but to me HP implies an irreversible departure from the idea of finding a single, unified body of set-theoretic truths. Even if a convergence of consequences of H-axioms were to manifest itself in a stronger and more tangible way, via, e.g., results of the calibre of those already found by Sy and Radek, I’d be reluctant to accept the idea that this would automatically reinstate our confidence in a universe-view through simply referring such a convergence back to a pristine V.

Moreover, HP, in my view, constitutes the reversal of the foundational perspective I described above (that is, to find an ultimate universe), by deliberately using V as a mere inspirational concept for formulating new set-theoretic hypotheses rather than as a fixed entity whose properties will come to be known gradually. As has been indicated by someone in this thread, HP fosters an essentially top-down approach to set-theoretic truth, whose goal is that of investigating what truth about sets may be generated beyond that encapsulated by ZFC, using new information about V. I used the term ideal in my previous email just to convey the contrast between a V investigated through attributing to it certain features and a real V as a fixed entity progressively determined through subsequent refinements (I guess that this ideal status of V might be construed in the light of Kant’s Grenzbegriffe, boundary concepts working as regulative ideals [I owe this interpretation of V to Tatiana Arrigoni]).

Coming to the other issues raised, c.t.m. have proved to be technically very expedient and fertile in terms of consequences for the purposes of HP and, moreover, they seem to capture adequately the basic intuitions at work in such techniques as forcing. And as you pointed out, the hyperuniverse might be seen as something allowing us to generate a unified conceptual arena to study a multiverse framework, so why wouldn’t the use of c.t.m. be justified precisely on these specific grounds?

To go back to a more general point, I believe that HP should be judged essentially for its merits as a dynamic interpretation of truth within a multiverse framework. In my view, its construction thus responds to a legitimate foundational goal, provided one construes foundations not in the sense of selecting uniquely and determinately the best possible general axioms for the mathematics we know (including set-theoretic mathematics), but rather in that of exhibiting (and studying) evidential processes for their selection: the study of what I defined as properties of an ideal V within the hyperuniverse is one such evidential process.

Now, as I said in my previous email, the programme, at least as far as its epistemological goals are concerned, is of course far from being perfected in all its parts. As I said (and as many people here have requested), it has to clarify, for instance, what further legitimate ideal properties of V there are, and what justifies their probable intrinsicness (their relationship to the set concept).

Has this brief summary answered (at least some of) your legitimate concerns?

Best wishes,
Claudio

Re: Paper and slides on indefiniteness of CH

Dear all,

it seems that despite the efforts of Sy, and some others, the same questions are raised over and over again. Recently, Harvey explicitly asked those who think that “HP is a legitimate foundations program” to write. I have collaborated with Sy on some of the mathematical papers regarding HP, and was a coauthor of one of the philosophical ones.

I will attempt to answer briefly the questions posted by Harvey. My view on HP is different from Sy’s, but I see HP as a legitimate foundational program.

0] At a fundamental level, what does “(intrinsic) maximality in set theory” mean in the first place?

Let me write IMST instead of “(intrinsic) maximality in set theory” for the sake of brevity.

I doubt IMST can mean more than “viewing sets as big as possible, without the use of considerations based on practice of set theory as the main incentive”. “Intrinsic” is thus temporarily reduced to “non-extrinsic”; in view of the heavy philosophical discussions around this notion, I prefer to give it this more restrictive meaning. Note that “extrinsic”, unlike “intrinsic”, has a well-defined inter-subjective meaning. This leaves us with the word “big”; I guess that this is the primitive term, which cannot be defined by anything more simple — at least on the level of general discussion.

Admittedly, this definition is far from informative. For me, HP is a way of explicating this definition in a mathematical framework, making its meaning more precise and, by the same token, less general. A discussion should be had about whether other approaches — which set out to get real mathematical results — retain more of the general meaning of the term IMST. No approach can retain all the meaning of IMST because it is by definition vague and subjective; thus HP should not be expected to do that.

1] Why doesn’t HP carry the obvious name CTMP = countable transitive model program.

Because the program was formulated by Sy with the aim of having wider application than the study of ctm’s.

2] What does the choice of a countable transitive model have to do with “(intrinsic) maximality in set theory”?

Countable models are a way of explicating IMST. It is a technical convenience which allows us to use model-theoretic techniques, not available for higher cardinalities.

Aside: I do not quite understand why the discussion rests so heavily on this issue: everyone seems to accept it readily when we talk about forcing (I know it can be eliminated in forcing, but the intuition — see Cohen’s book — comes from countable models). Would it make a difference if the models had cardinality \omega_1, or \omega and \omega_1, or should they be proper classes, etc.? Larger cardinalities would introduce technical problems which are inessential for the aims of HP.
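
To recall the standard argument behind this intuition (nothing specific to HP): if M is a ctm and P \in M is a forcing notion, then only countably many dense subsets of P lie in M, say D_0, D_1, \dots. Choose p_0 \in D_0 and then p_{n+1} \leq p_n with p_{n+1} \in D_{n+1}; the filter generated by \{p_n : n \in \omega\} meets every dense set in M and is therefore M-generic. So over countable models generic filters exist outright in V, whereas over uncountable models or V itself one must detour through Boolean-valued models or similar devices.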

3] Which axioms of ZFC are motivated or associated with “(intrinsic) maximality in set theory”? And why? Which ones aren’t and why?

IMST by historical consensus includes at this moment ZFC. “Historical consensus” for me means that many people decided that the vague meaning of IMST extends to ZFC. I do think that this depends on time (take the example of AC). HP is a way to raise some new first-order sentences as candidates for this extension.

4] What is your preferred precise formulation of IMH? E.g., is it in terms of countable models?

Yes.

5] What do you make of the fact that the IMH is inconsistent with even an inaccessible (if I remember correctly)? If this means that IMH needs to be redesigned, how does this reflect on whether CTMP = HP is really being properly motivated by “(intrinsic) maximality in set theory”?

I view the process of obtaining results in HP like an experiment in explicating the vague meaning of IMST. It is to be expected that some of the results will be surprising, and will require interpretation.

6] What is the simplest essence of the ideas surrounding “fixing or redesigning IMH”? Please, in generally understandable terms here, so that people can get to the essence of the matter, and not have it clouded by technicalities.

It is a creative process: explicate IMST by principle P_1 — after some mathematical work, it outputs \varphi (such as P_1 = IMH, \varphi = no inaccessible). Then try P_2, etc. (P_2 can be a “redesigned”, or “modified”, version of P_1). Of course, one hopes that his/her understanding of set theory will be helpful in identifying P’s which have potential to output nice (good, deep) mathematics. It is essential that the principles P’s should be as practice-independent as possible (= intrinsic, in my reading); that is what makes the program foundational (again, in my more narrow sense).

Best regards to all,
Radek Honzik

Re: Paper and slides on indefiniteness of CH

Dear Sy,

Dear Hugh,

I agree that HP is part of the very interesting study of models of ZFC. There are many open and studied questions here. For example, suppose \phi is a sentence such that there is an uncountable wellfounded model of ZFC + \phi but at most one model of any given ordinal height. Must all the uncountable wellfounded models of ZFC + \phi satisfy V = L? (The wellfounded models must all satisfy V = HOD and that there are no measurable cardinals.) The answer could well be yes and the proof extremely difficult, etc., but to me this would be no evidence that V = L.

See my 2012 MALOA lectures:

A positive answer to your question follows from item 13 there; the proof is not difficult.

Thanks. I was aware of the content of item 13 but had not realized that it completely solved the problem I stated. So you have given me a homework problem.

Nevertheless, there are many similar questions. For example if \phi is a sentence and ZFC + \phi has only one transitive model M, must M \vDash \textsf{CH}?

Of course you will argue this is not relevant to HP. OK, here is another question which I would think HP must deal with.

Question: Suppose M is a countable transitive model of ZFC. Must there exist a cardinal preserving extension of M which is not a set-generic extension?

No such M can satisfy IMH since within any such M, all sets have sharps and much more.

Conjecture: If M is such a model then M \vDash \textsf{PD}.

I guess you could predict based on your conviction in HP that this latter case will not happen just as I predict PD is consistent. For me an inconsistency in PD is an extreme back-to-square-one event. I would like to see (at some point) HP make an analogous declaration.

Huh? The HP is a programme for truth in general; it is not aimed at a particular statement like CH. Even if the SIMH is inconsistent there is still plenty for the programme to explore. I don’t yet know if CH will have a constant truth value across the “preferred universes” (Sol is right, a better term would be something more flattering, like “optimal universes”), especially as a thorough investigation of the different intrinsically-based criteria has only just begun and it will take time to develop the optimal criteria together with their first-order consequences. I guess it is possible that an unavoidable “bifurcation” occurs, i.e. there are two conflicting optimal criteria, one implying CH and the other its negation. It is much too early to know that. Perhaps this would be what you call a “back-to-square-one event” regarding CH.

From your message to Pen on 19 Aug:

Goal: The goal is to arrive at a single optimal criterion which best synthesises the different intrinsically-based criteria, or less ambitiously a small set of such optimal criteria. Elements of the Hyperuniverse which obey one of these optimal criteria are called “preferred universes” and first-order properties shared by all preferred universes are regarded as intrinsically-based set-theoretic truths. Although the process is dynamic and therefore the set of such truths can change the expectation is that intrinsically-based truth will stabilise over time. (I expect that more than a handful will consider this to be a legitimate notion of intrinsically-based truth.)

I will try one more time. At some point HP must identify and validate a new axiom. Otherwise HP is not a program to find new “axioms”. It is simply part of the study of the structure of countable wellfounded models no matter what the motivation of HP is.

It seems that to date HP has not done this. Suppose though that HP does eventually isolate and declare as true some new axiom. I would like to see clarified how one envisions this happens and what the force of that declaration is. For example, is the declaration simply conditioned on a better axiom not subsequently being identified which refutes it? This seems to me what you indicate in your message to Pen.

Out of LC comes the declaration “PD is true”. The force of this declaration is extreme, within LC only the inconsistency of PD can reverse it.

Regards,
Hugh

Re: Paper and slides on indefiniteness of CH

Dear Pen,

On Tue, 19 Aug 2014, Penelope Maddy wrote:

Dear Sy,

I think anyone would be nervous about linking set-theoretic truth to a concept private to one person and (perhaps) a handful of his co-workers,

I am disappointed to hear you say this.

I apologize for disappointing you. I was going by some of what you’ve written in these exchanges. First this:

First question: Is this your personal picture or one you share with others?

I don’t know, but maybe I have persuaded some subset of Carolin Antos, Tatiana Arrigoni, Radek Honzik and Claudio Ternullo (HP collaborators) to have the same picture; we could ask them.

Why do you ask? Unless someone can refute my picture then I’m willing to be the only “weirdo” who has it.

This was very badly said, my mistake. Of course I expect that other people’s mental pictures of the universe of sets share a lot in common with mine. When I made these remarks I failed to understand the importance of your questions for grounding my notion of “intrinsic features of V”. Sorry, philosophically I still have much to learn.

You then softened this with:

You got this wrong. I indeed expect that others have similar pictures in their heads but can’t assume that they have the same picture. There is Sy’s picture but also Carolin’s picture, Tatiana’s picture … Set-theoretic truth is indeed about what is common to these pictures after an exchange of ideas.

I assumed you still meant to limit the range of people whose pictures are relevant to a fairly small group.

No, the “…” also includes Pen’s picture!

Otherwise the collection of things ‘common to these pictures’ would get too sparse.

In any case, these further explanations are most helpful …

2. Mental pictures

Each set-theorist who accepts the axioms of ZFC has at any given time an individual mental picture of the universe of sets.

OK. Everyone has his own concept of the set-theoretic universe.

3. Intrinsic features of the universe of sets

These are those practice-independent features common to the different individual mental pictures, such as the maximality of the universe of sets. Thus intrinsic features are determined by the set theory community. (Here I might lose people who don’t like maximality, but that still leaves more than a handful.)

OK. Intrinsic features are those common to all the concepts of set-theoretic universe (or close enough).

With the important requirement of “practice-independence”! PD and large large cardinal existence don’t sneak in (unless they are at some point derivable from intrinsically-based criteria).

So far, this seems to be the usual kind of conceptualism: there is a shared concept of the set-theoretic universe (something like the iterative conception); it’s standardly characterized as including ‘maximality’, both in ‘width’ (Sol’s ‘arbitrary subset’) and in the ‘height’ (at least small LCs). Also reflection (see below).

“Reflection” is ambiguous; I use it below to pass from “features” to “criteria” but maybe a better word there would be “mirroring” or something like that. Usually I use “reflection” to refer to reflection principles, i.e. ordinal-maximality.

4. The Hyperuniverse

This mathematical construct consists of all countable transitive models of ZFC. These provide mathematical proxies for all possible mental pictures of the universe of sets. Not all elements of the Hyperuniverse will serve as useful proxies as for example they may fail to exhibit intrinsic features such as maximality.

OK. We stipulate that the hyperuniverse contains all CTMs of ZFC. But some of these (only some? — they’re all countable, after all) fail to exhibit maximality, etc.

The “maximality” of our mental pictures of the universe is mirrored by the countable models which satisfy mathematical criteria faithfully derived from the feature of “maximality”. For example, the minimal model of set theory will satisfy none of the criteria based on “maximality”, but a countable model of V = L could satisfy an ordinal-maximality criterion (it could be “tall” in terms of reflection but still countable). There also could be countable models that satisfy the IMH, a “powerset” maximality criterion. You are right, you lose “literal maximality” when you look at countable models, but you can still faithfully mirror “maximality” using countable models. Remember we are in the end after first-order consequences, which don’t notice whether the model is countable or not. And the huge new advantage of working with countable models as “proxies” is that we have the ability to generate and compare universes, allowing us to express “external forms” of “maximality” in ways that were not derivable from the old maximal iterative conception. This is not possible using uncountable models.

5. Mathematical criteria

These are mathematical conditions imposed on elements of the Hyperuniverse which are intended to reflect intrinsic features of the universe of sets. They are to be unbiased, i.e. formulated without appeal to set-theoretic practice. A criterion is intrinsically-based if it is judged by the set theory community to faithfully reflect an intrinsic feature of the universe of sets. (There are such criteria, like reflection, which are judged to be intrinsically-based by more than a handful.)

OK. Now we’re to impose on the elements of the hyperuniverse the conditions implicit in the shared concept of the set-theoretic universe. These include maximality, reflection, etc. (We’re weeding the hyperuniverse, right?)

Right!

6. Analysis and synthesis

An intrinsic feature such as maximality can be reflected by many different intrinsically-based mathematical criteria. It is then important to analyse these different criteria for consistency and the possibility of synthesizing them into a common criterion while preserving their original intentions. (I am sure that more than a handful can agree on a suitable synthesis.)

I think this is the key step (or maybe it was (5)), the step where the HP is intended to go beyond the usual efforts to squeeze intrinsic principles out of the familiar concept of the set-theoretic universe. The key move in this ‘going beyond’ is to focus on the hyperuniverse as a way of formulating new versions of the old intrinsic principles.

Let me stop at this point, because I’m afraid my paraphrase has gone astray. You once rejected the bit of my attempted summary of your view that said the new hyperuniverse principles ‘build on’ principles from the old concept of the set-theoretic universe, and I seem to have fallen back into that misunderstanding. The old concept you characterize as ‘just the maximal iterative conception’. (You don’t include maximizing ‘width’ in this, though I think it is usually included.)

I just wasn’t sure whether the phrase “maximal iterative conception” includes maximising width; if so, fine.

I’m not sure how to describe the new concept, but the new principles implicit in it are different in that ‘they deal with external features of universes and are logical in nature’ (both quotes are from your message of 8/8).

What I’m groping for here is a characterization of where the new intrinsic principles are based. It has to be something other than the old concept of the set-theoretic universe, the maximal iterative conception. I keep falling into the idea that the new principles are generated by thinking about the old principles from the point of view of the hyperuniverse, that the new principles are new versions of the old ones and they go beyond the old ones by exploiting ‘the external features of universes’ (revealed by the hyperuniverse perspective) in logical terms. But this doesn’t seem to be what you want to say. Is there a different, new concept, with new intrinsic principles?

I think we are in agreement; the problem was my failure to realise that the “maximal iterative conception” does indeed include maximising width. So, keeping my terminology, it’s the same old feature of “maximality”, but the mathematical criteria derived from it “go beyond the old [internal] ones by exploiting the external features of universes revealed by the hyperuniverse perspective in logical terms”, just as you have said.

An aside: as I understand things, it was the purported new concept that seemed to threaten to be limited to a select group. If the relevant concept in all this is just the familiar concept of the set-theoretic universe — which does seem to be broadly shared, which conceptualists generally are ready to embrace — and the hyperuniverse is just a new way of extracting information from that familiar concept, then at least one of my worries disappears.

That is exactly right. One less thing to worry about!

Thanks a lot,
Sy

PS: Obviously my immediate goal is to get to the point where I can convince you that the programme is on a solid foundation, even if you are not especially interested in it (which is OK, as I am not tying this to practice and I know how strongly you feel about practice). But at least with some reassurance of a solid philosophical foundation I will feel a lot better about devoting myself to the hard mathematics necessary to implement the programme. So please keep challenging me and looking for “cracks” in the foundation!

Re: Paper and slides on indefiniteness of CH

Well, it is hard not to respond. So I guess I will violate my “last message” prediction. Hopefully my other predictions are not so easily refuted. My apologies to the list. Never say never I suppose.

Hugh:

Regarding your two points directed at me and the HP:

1) Having established Con LC, one has established that every set X belongs to an inner model in which LC is witnessed above X.

I don’t necessarily agree. There are only countably many LC axioms and a perfectly coherent scenario is that each holds in an inner model which is coded by some real. That demands only countably many reals and in no way suggests that there should be large LCs in inner models containing all of the reals.

But please don’t misunderstand me: The HP is a programme for discovering new first-order properties via intrinsically-based criteria for the choice of preferred universes. It is open-ended, meaning that one cannot exclude the possibility of arriving at the statement you express above or even at the existence of LCs in V at some point in the future. But so far the evidence is just not there.

2) I challenge HP to establish that there is an inner model of “ZFC + there are infinitely many Woodin cardinals” without establishing PD. For this HP can use Con LC for any LC up to Axiom I0.

I think I understand the point you want to make here, which is that the HP so far offers no new techniques for producing consistency lower bounds beyond core model theory. I agree. But the intuitive (not mathematical) step from Con LC to inner models for LC is straightforward: Extrinsically we understand that LC is perfectly compatible with both the well-foundedness of the membership relation and with ordinal-maximality. From this we can conclude that LCs exist in countable transitive models of ZFC which are ordinal-maximal (i.e. #-generated). From this it provably follows that they exist in inner models.

We completely disagree on the Con LC issue as our email thread to this point clearly shows, no need to make a further comment on that.

I agree that HP is part of the very interesting study of models of ZFC. There are many open and studied questions here. For example, suppose \phi is a sentence such that there is an uncountable wellfounded model of ZFC + \phi but at most one model of any given ordinal height. Must all the uncountable wellfounded models of ZFC + \phi satisfy V = L? (The wellfounded models must all satisfy V = \text{HOD} and that there are no measurable cardinals.) The answer could well be yes and the proof extremely difficult, etc., but to me this would be no evidence that V = L.

The issue I seek clarified is exactly how HP will lead to a new axiom. At some point HP must declare some new sentence as “true”. What are the HP protocols? You seem to suggest that SIMH, if consistent, is such a “truth”, but I am not even sure you make that declaration.

Regarding your “final comments”:

You said:

“Let IMH(card-arith) be IMH together with the following:

Suppose there is a card-arith preserving extension of M in which \phi holds. Then there is a card-arith preserving inner model of M in which \phi holds.

“Conjecture”: IMH(card-arith) implies GCH.

My question for Sy’s paper is simply, why if “conjecture” is true does one reject this in favor of SIMH (assuming SIMH is consistent)?”

It is good that you posed this question because it illustrates very well how the HP is meant to work. If the “conjecture” is true then the SIMH is almost surely inconsistent and this would be exciting progress in the HP. Indeed I welcome the exploration of a wide range of such criteria in order to gain a better understanding of absoluteness, constantly refining our picture of the universe based on these forms of maximality. Of course some criteria, like the SIMH, are very natural and well-motivated, whereas others, such as the IMH for ccc extensions (roughly speaking: Levy absoluteness with cardinal-absolute parameters for ccc extensions) are not. Note that the latter is consistent and solves the continuum problem! But in my view its downfall is simply that the notion of “ccc extension” is unmotivated.

Continuing the point I make above. I agree with you that if “conjecture” is true then SIMH is probably inconsistent. But it is also possible that both “conjecture” is true and SIMH is consistent. What then?

I guess you could predict based on your conviction in HP that this latter case will not happen just as I predict PD is consistent. For me an inconsistency in PD is an extreme back-to-square-one event. I would like to see (at some point) HP make an analogous declaration.

I am really trying to be helpful here. These are natural issues that I think need to be addressed in making the case that HP has the potential to discover and validate new axioms.

Regards,
Hugh