Does Substack tell you how many readers there are? Please tell me there are more than 6. I would buy a physical copy in book form. I love this topic, and Phillips' writing is more digestible than digging straight into the GUTCP, especially for a non-physicist. I would really love to see a popular-science tour of all the different aspects of Randy's theory. I'm sure that will be available at some point, assuming Brilliant Light is able to commercialize. Everything seems to hinge on commercialization.
I would be interested in a hard copy.
I would also be interested in a hard copy. Maybe you should publish in book format, as Brett Holverstott did with "The End of Fire". I appreciate the effort, energy, and enthusiasm that has gone into the monograph. I refer to it and will point people to it if they want more info on the background and theory.
It would be most helpful if the derivation of (5-6) were fleshed out a bit more. I'm developing a Mathematica framework for Mod 1, targeting a generalization so that ionization energies can be derived for all Z and n, and I have everything pretty well done except for that. What appears to have happened is that the justification of (5-5)'s postulate of convenience (which I can accept as a reasonable compromise for the readability of this presentation) took compositional attention that should have been devoted to (5-6).
Yes, I am interested in the hard copy of this important work!
The following comment is not meant to be a criticism but rather friendly advice about "falsification" and "paradigms" à la Popper and Kuhn. It is one thing for a current "paradigm" of physical theory to be "falsified". It is quite another for someone like Kuhn to come up with a philosophical "paradigm" that is, itself, meta-scientifically "falsified".
In short, it goes like this: any departure of observation from theory adds to the number of parameters in the current theory. Two theories accounting for the same set of observations can be compared by the number of parameters each requires. The one with the fewest parameters should be considered the best current theory.
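As a toy illustration of that comparison rule, here is a minimal sketch in Python with numpy. The data, the candidate polynomial "theories", and the noise tolerance are all made up for the illustration; nothing here is taken from the monograph. Theories that account for the same observations to within the measurement noise are ranked purely by how many free parameters each needs.

    import numpy as np

    rng = np.random.default_rng(0)

    # Made-up observations: an underlying linear law plus measurement noise.
    x = np.linspace(0, 10, 50)
    y = 2.0 * x + 1.0 + rng.normal(0, 0.5, x.size)
    noise_tolerance = 0.5  # assumed measurement uncertainty

    def accounts_for_data(degree):
        """Fit a polynomial 'theory' with degree + 1 free parameters and report
        whether it accounts for the observations to within the assumed noise."""
        coeffs = np.polyfit(x, y, degree)
        residual_rms = np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2))
        return residual_rms <= 1.5 * noise_tolerance, degree + 1

    # Candidate theories: constant (1 parameter), line (2 parameters), cubic (4).
    candidates = [accounts_for_data(d) for d in (0, 1, 3)]

    # Only theories that account for the observations are in the running;
    # among those, the rule prefers the one with the fewest parameters.
    adequate = [n_params for ok, n_params in candidates if ok]
    print("parameter counts of adequate theories:", adequate)
    print("preferred theory uses", min(adequate), "parameters")

Here the constant fails to account for the data at all, while the line and the cubic both do; the rule then picks the line, since the cubic's extra parameters buy nothing.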
If you can't see how this impacts not only GUTCP's acceptance but also Popper and Kuhn, then I suggest you sit back and meditate on what I went through regarding autism, which I related in response to the recent initiative by RFK Jr.:
I can pretty much guarantee RFK Jr. won’t come up with the cause of the increase in autism incidence.
Popper and Kuhn both successfully attacked the foundation of data-driven scientific discovery of causality with their psychologically and rhetorically intense popularizations of “the philosophy of science” at the precise moment in history that it became practical to rigorously discriminate mere correlation from causation, even without controlled experimentation, by looking at the data.
I only became aware of this after attempting to do first-order epidemiology of the rise of autism that had severely impacted colleagues of mine in Silicon Valley — investigation required since it was apparent that no one more qualified was bothering to do so. I did expect to find a correlation with non-western immigrants from India, and found one. Now, having said that, I’m not here to make the case for that particular causal hypothesis — there are others that I can set forth that I also expected and did find evidence for. What I’m here to point out is that my attempts to bring these hypotheses up were greeted with the usual “social science” rhetoric one expects: “Correlation doesn’t imply causation.” “Ecological correlations are invalid due to the ecological fallacy.” and so forth. This got me interested in precisely how it is that “social science” purports to infer causation from the data, experimental controls being the one widely accepted means of determining causality in the philosophy of science and the one thing social science can’t perform at the macro scale.
This interest was amplified when I, on something of a lark, decided to take the data I’d gathered to investigate the ecology of autism and see which of the ecological variables was the most powerful predictor of the other variables I had chosen. One variable, in particular, that I had been interested in, not for autism causation, but for social causality in general, was the ratio of Jews to Whites in a human ecology at the State level in the US. Well, out of hundreds of variables, guess which one came out on top?
Of course, again, I don’t need to explain to the readers the kind of rhetorical attacks on this “lark” of mine: Same old, same old…
So my investigation of causal inference intensified.
Eventually, circa 2004-2005, I intuited that data compression had the answer and suggested something I called “The C-Prize”: a prize that would pay out for incremental improvements in compression of a wide-ranging corpus of data, resulting in computational models of complex dynamical systems, including everything from physics to macrosocial models. That’s when I ran across information-theoretic work that distinguished between Shannon information and what is now called “Algorithmic Information”. The seminal work on Algorithmic Information occurred in the late 1950s and early 1960s, precisely when Moore’s Law was beginning its relentless exponential climb. The Algorithmic Information content of a data set is the number of bits in its smallest executable archive: the smallest algorithm that outputs that data.
Shannon information is basically statistical. Think of the digits of pi: to a Shannon-style measure they look like a stream of essentially random symbols, so the information content grows in proportion to the length of the string. Algorithmic Information says the information content is the size of the smallest program that outputs the digits of pi, which stays tiny no matter how many digits you want.
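To make the contrast tangible, here is a small Python sketch. zlib stands in for a purely statistical (Shannon-style) compressor, and Gibbons' unbounded spigot algorithm stands in for "the program that outputs the digits of pi"; the byte counts it prints are illustrative only.

    import inspect
    import zlib

    def pi_digits(n):
        """First n decimal digits of pi, via Gibbons' unbounded spigot algorithm."""
        q, r, t, k, m, x = 1, 0, 1, 1, 3, 3
        out = []
        while len(out) < n:
            if 4 * q + r - t < m * t:
                out.append(m)
                q, r, m = 10 * q, 10 * (r - m * t), 10 * (3 * q + r) // t - 10 * m
            else:
                q, r, t, k, m, x = (q * k, (2 * q + r) * x, t * x, k + 1,
                                    (q * (7 * k + 2) + r * x) // (t * x), x + 2)
        return "".join(map(str, out))

    digits = pi_digits(10_000)
    # Run as a script so inspect can read the function's source file.
    program = inspect.getsource(pi_digits).encode()

    # A general-purpose statistical compressor only gets the digit string down to
    # roughly its per-symbol entropy (ten near-equiprobable digits); it cannot
    # exploit the fact that the whole string is generated by a short rule.
    print("raw digit string:  ", len(digits), "bytes")
    print("zlib-compressed:   ", len(zlib.compress(digits.encode(), 9)), "bytes")
    # The generating program, by contrast, is a few hundred bytes and produces as
    # many digits as you ask for: an upper bound on the algorithmic information
    # (plus the fixed overhead of the interpreter).
    print("generating program:", len(program), "bytes")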
That 1960s discovery threatened to bring the social sciences to heel with a rigorous and principled information criterion for model selection far superior, and provably so, to all other model selection criteria used by the social sciences. Moreover, the models so-selected would be necessarily causal in nature and be amenable to using the power of silicon to make predictions without any kind of ideological bias.
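To show in miniature what such an information criterion looks like, here is another sketch in Python with made-up data. It uses the standard two-part description-length approximation for Gaussian noise (Rissanen-style, essentially BIC expressed in bits); this is my illustration of the general idea, not a construction taken from the 1960s papers themselves.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical observations: a quadratic law plus Gaussian measurement noise.
    x = np.linspace(0, 10, 200)
    y = 0.5 * x**2 - x + 3 + rng.normal(0, 1.0, x.size)

    def description_length_bits(degree):
        """Relative two-part code length for a degree-d polynomial model:
        bits to state its k = d + 1 parameters plus bits for the residuals,
        using the Gaussian approximation (n/2)*log2(RSS/n) + (k/2)*log2(n).
        Constants that depend only on the data's precision are dropped."""
        k = degree + 1
        coeffs = np.polyfit(x, y, degree)
        rss = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
        n = x.size
        return (n / 2) * np.log2(rss / n) + (k / 2) * np.log2(n)

    for d in range(1, 7):
        print(f"degree {d}: {description_length_bits(d):9.1f} bits")

    # The shortest total description wins; for data generated this way that is
    # expected to be the true order (degree 2), since each extra parameter costs
    # more bits to state than it saves in describing the residuals.
    best = min(range(1, 7), key=description_length_bits)
    print("preferred degree:", best)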
This, I strongly believe, was the precise reason Popper and Kuhn committed their acts of violence against science at the precise moment in history they did.
So what is the chance that RFK Jr. will apply this, the only rigorous tool to infer causality at the ecological level, given the threat it poses to the social pseudosciences?
See “HumesGuillotine” on GitHub for more details on nuking the social pseudosciences. Contact me for more information on how you can help.