The Hydrino Hypothesis: Chapter 2
An Introduction to the Grand Unified Theory of Classical Physics
This monograph is an introduction to Randell L. Mills’ Grand Unified Theory of Classical Physics, Hydrino science, and the efforts of the company Brilliant Light Power (BLP) to commercialize Hydrino-based power technology, as told by Professor Jonathan Phillips. Out of necessity, it assumes a degree of familiarity with physics and physics history. An overview of the BLP story, which serves as a helpful introduction for those unfamiliar with its sweeping scope, can be found here. Readers should also read the previous chapter of this monograph prior to this one:
Chapter 1 of The Hydrino Hypothesis
By Professor Jonathan Phillips
The Proper Approach To Testing New Scientific Theory
Preface: The purpose of this chapter is to familiarize the reader with the scientific method, thus permitting proper perspective on the opinions of “experts” and providing an understanding of the proper means to rigorously test a theory. In sum, the focus of both scientists and readers should be on the outcome of experiments designed to test/debunk a given hypothesis.
Paradigm Shifts
As described in the famous book by Thomas Kuhn, “The Structure of Scientific Revolutions,” paradigm shifts in science and technology are dramatic departures from normal science, where normal science is that defined by textbooks or other standard pedagogy.1
These shifts are often initiated by a single, very young individual who is at best a marginal member of the scientific community at the time, and is accurately described as an outsider. Notably, many of these breakthroughs are “Cancelled” at the moment of inception, either by the dominant church of the time or the dominant “church of science.”
Copernicus and Galileo (Cancelled) championed the Heliocentric model of the solar system, and had their work banned for hundreds of years by the Catholic Church.
The Wright Brothers (Cancelled) owned a bicycle shop and were derided in the US by all scientific authorities for their claims of inventing a “flying machine,” and ignored by the press. They were only lauded world-wide after flying, under control for more than an hour at a time, in front of a crowd in Le Mans, France, almost 5 years after their first flight at Kitty Hawk. To paraphrase the New Testament: “one cannot be a Prophet in his own land.” They never attended college.2
Steve Jobs, also not a college graduate, led the personal computer revolution (not the leading computer firm of the time, IBM), initially to great skepticism.
Alfred Wegener (Cancelled), a respected meteorologist, originated in the early twentieth century the theory that the continents are relatively thin crustal plates that move over a more fluid underlying layer. His model of moving plates was disparaged by the scientific community.3 Decades after his passing, with the discovery of the Mid-Atlantic Ridge and the (re)birth of plate tectonics, it became clear he was fundamentally correct.
Albert Einstein was simply a 26-year-old patent clerk when his revolutionary papers of 1905 were published. His outsider status meant that his ideas did not have the imprimatur of an established scientific institution or an influential mentor to lend them immediate credibility. Acceptance of his ideas took time. However, thanks to him, the universe is now believed to be “relative” in both time and space.
J.W. Gibbs, the father of thermodynamics, was a young professor at a US college (Yale) when he published amazing theories in what was described at the time as a “gardening journal.” Notably, at that time European institutions dominated all of science.
A contemporary example of scientific cancel culture having a negative impact on society is the story of the failure to accept, for more than a decade, strong evidence that ulcers are caused by a specific bacterium. The era of ulcers causing severe disability and death is behind us, but only because a couple of brave doctors persisted in their work despite strong efforts by the medical establishment to stop or ignore them.
In 2005, Barry Marshall and J. Robin Warren, physicians from Western Australia, were awarded the Nobel Prize for discovering that the bacterium H. pylori causes stomach ulcers (and stomach cancer) and for demonstrating effective antibiotic treatments.
Unfortunately, the contemporary medical community, both doctors and pharmaceutical companies, initially resisted the data that Marshall and Warren provided them with in the early 1980s, and dismissed or ignored their findings.
Even after they published letters in one of the leading medical journals of the time, The Lancet, the proposition was studiously ignored. Marshall was forced to infect himself with the bacterium, giving himself acute gastritis, and then cure himself with antibiotics to make a truly compelling case. It took about a decade of persistence for these two to finally obtain support.
In 2010 Discover Magazine interviewed Marshall, and he made some interesting comments, including on the reaction to his 1984 letter in The Lancet:
Q: That letter must have provoked an uproar.
A: It didn’t. In fact, our letters were so weird that they almost didn’t get published. By then I was working at a hospital in Fremantle, biopsying every patient who came through the door. I was getting all these patients and couldn’t keep tabs on them, so I tapped all the drug companies to request research funding for a computer. They all wrote back saying how difficult times were and they didn’t have any research money. But they were making a billion dollars a year for the antacid drug Zantac and another billion for Tagamet. You could make a patient feel better by removing the acid. Treated, most patients didn’t die from their ulcer and didn’t need surgery, so it was worth $100 a month per patient, a hell of a lot of money in those days. In America in the 1980s, 2 to 4 percent of the population had Tagamet tablets in their pocket. There was no incentive to find a cure.
Is it possible that the medical community and the pharmaceutical companies “Cancelled” the discovery for as long as possible in pursuit of money and power? The interested reader is referred elsewhere for more information in order to answer that question.4
The primary relevant point: cancel culture in science is not something reserved for the past. It is very much a contemporary issue. What fundamental developments in science are currently suppressed? What non-science is promoted as science?
Insiders And Outsiders
Kuhn argues “Insiders” are generally unable to lead paradigm shifts as they are too committed, after years of study and efforts to achieve membership in the proper scientific fraternity, to preservation of standard models. Hence, if there is to be a real paradigm shift in quantum theory, or any other scientific theory, it will likely be the work of an outsider.
Randell L. Mills is a Harvard-trained medical doctor, not a formally trained physicist, who presented his initial description of the GUTCP before he was 30 years old. Dr. Mills perfectly fits the description of a potential paradigm shift agent.
Prediction: the day will come when the revolutionary genius of his new theory will be widely accepted.
Mathematical and physics problems can remain unsolved for centuries, even if in some cases authority supports “final” paradigms that history later shows to have been interim.
For example, there were paradigms of thermodynamics, such as Phlogiston theory, hundreds of years before Gibbs created the mathematical model that revolutionized that science.5
The same is true for electrical behavior before Maxwell revolutionized the theory of electricity with his mathematical field theory.
Fermat’s Last Theorem, “proven” repeatedly over the centuries, awaited a truly final proof for more than 350 years.
Atomic theory, that is, a theory that all matter is composed of elementary particles that cannot be further divided, was first written about by Greek philosophers about 2500 years ago.6 However, it was only about 120 years ago, with the discovery of a distinct negatively charged particle, the electron, by J.J. Thomson in 1897, that the modern atomic theory was born.7
With work by Rutherford (1911) and others, the full morphological model of atoms that still dominates, and with which the GUTCP is arguably consistent, was completed. To wit: atoms are composed of truly elementary, indivisible, negatively charged, low-mass objects (electrons) that “orbit” in the low-density periphery of the atom, and a relatively small, very dense core of high-mass, positively charged particles (protons) and high-mass neutral particles (neutrons). In atoms the net charge is zero.
The GUTCP model is largely consistent with this picture, except that in the GUTCP the electrons are physical objects: spherical bubbles of negative charge with surface currents that symmetrically enclose the positively charged core. Dr. Mills has dubbed the electron charge bubble the “orbitsphere.” In the GUTCP, there are no zero-dimensional electrons whose position and momentum are described by a probability distribution as per standard quantum mechanics (SQM).
It is reasonable to consider, notwithstanding the protestations of experts, that the Schrödinger model, first published less than one hundred years ago, is simply an interim model of the atom.8
Indeed, many prominent physicists have expressed deep skepticism of SQM. Einstein famously quipped:
“God does not play dice with the universe.”
This is a very compact expression of his disdain for the probabilistic description of atomic particles inherent in SQM. Einstein harbored the belief that quantum mechanics was an incomplete description of physical reality and that a more foundational, deterministic theory would eventually supplant it.
Many others have argued that SQM is not complete as there are “hidden variables” that once discovered will lead to a more deterministic theory.9
In sum, to evolve and improve our understanding of the real world, new theories cannot be summarily dismissed simply because they challenge the current paradigm or make experts uncomfortable.
The opposite is required: significant advances in understanding and technology require serious consideration of theories, often generated by outsiders, that challenge the current paradigms. Significant progress requires theories that make experts uncomfortable.
Thus, the search for improved understanding requires new theories be considered viable until data demonstrates the new theory fails to match observation.
Scientific testing, not authority, is required to debunk a new theory. With apologies to Cartman, “respect my authoritah” is not a scientific argument.
The Proper Scientific Method
To succinctly explain the proper scientific method, a brief summary of this process, inspired by the philosopher Karl Popper and others, is provided below.10,11
It helps to define and teach a concept via comparison. Here the comparison is between mathematics, for which “proof” exists, and science for which “proof” does not exist.
Mathematical Hypotheses
Mathematical hypotheses can be proven because the permitted operations on and with numbers have clear, man-made rules. As long as a postulate can be shown to be completely consistent with those rules, the postulate is proven. Mathematical operations and behavior observed today will be valid tomorrow. Something proven in mathematics today, given a very clear rule set, will remain proven and valid tomorrow, and forever.
Scientific Hypotheses
In contrast to mathematics, in science something demonstrated today may not be demonstrated tomorrow. Really? Example?
It was only about 50 years ago, after investigation of the odd orientation of the magnetic moments in rocks on both sides of the Mid-Atlantic Ridge, that it was determined that the magnetic poles of the earth periodically switch positions: the north pole becomes the south pole, and the south pole becomes the north pole.
In essence, one day it was known that the magnetic north pole is always in the north, and the next day that fact was no longer a fact.
There are many former “facts” like this fact:
It used to be a known fact that the earth is flat.
It used to be a fact that the continents don’t move.
It used to be a fact that distance and time are not relative but absolute.
It used to be a fact that life forms do not evolve.
It used to be a fact that electric and magnetic forces were due to different mechanisms until special relativity established they are two expressions of the same electric field.
It used to be a fact that intergalactic matter was cold, and not, as per new “facts,” hotter than 1,000,000 K in some regions.
Figure 2-1: It used to be a fact that cigarettes were good for you. Or is it simply true that experts can be bought?
And, before the acceptance of SQM, it was a fact that elementary particles are actual particles, of a particular shape and position, not probability distributions describing objects of unknowable shape and position.
Hopefully, it will soon be recognized again, as per the GUTCP, that elementary particles are indeed physical particles of absolute shape and position.
The above points to a significant aspect of the scientific process: “observation” must be separated from “fact.”
As shown by the examples above, sometimes facts are just theories in drag. The north pole is observed to be in the north, but that doesn’t mean it is a fact that it has always been in the north. The current orientation of the earth’s magnetic field is a proper observation, but the suggestion it is always oriented in the same manner is a theory in drag.
Can We Know Anything?
So, what is the proper scientific process for evaluating theories, old and new? One can be excused for despairing: “It seems science and facts are so ephemeral! Can we know anything?”
Yes and no. Yes, there is a method for testing theories, but it only leads to a provisional acceptance of a theory. New validated data can always invalidate a model.
There is no absolute, and no forever, in science theory.
There are a multitude of layers to the testing and evaluation of theory. The key test of a theory, old or new, is agreement of predictions with validated observations.
What do we mean by “validated,” and what is a prediction?
In the physics of atoms, the subject of this monograph, the minimum requirement for data validation is observation (e.g., spectroscopic data) obtained using techniques that can be shown to be logical and reliable and that generate repeatable outcomes.
Observations obtained in multiple labs, using a variety of techniques and instruments, exist at a higher level of validation. Expert input is acceptable for the purpose of providing logical objections or support for any given data gathering approach, or for providing historical perspective on past performance of a protocol in order to assess the validity of data collected using that protocol, or for comparison with alternative theories.
Expert input not accompanied by reasoning and data is “argument from authority” and is not acceptable in science.
Popper’s Falsification Principle: Only validated data that shows theoretical predictions are wrong can debunk a theory and demonstrate its lack of viability.
Opinion, from anyone of any standing within the community, is not part of the scientific process. If it were, the flat earth might still be at the center of the cosmos.
There is nothing personal in science.
Another way of expressing this: science does not involve voting. No matter how many votes a theory (e.g. flat earth) inconsistent with observation receives from the scientific community, it is still disproved by observation. It is invalid. Voting, even by experts, does not overcome observations.
What Is A Prediction?
There are levels of prediction. For atomic physics, the first level of prediction is quantitative agreement with validated data, with no optimization of variable parameters.
For example, if the GUTCP quantitatively predicts the strongly validated spectra of atoms and ions, using only equations developed before 1872 and constants obtained from the most validated sources, with no variable parameters, then it is not debunked and remains viable.
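To make this no-variable-parameter criterion concrete, the short sketch below compares the hydrogen Balmer lines computed from the pre-quantum Rydberg formula, using only published CODATA constants, against standard reference wavelengths. It is offered purely as an illustration of the kind of test meant here, not as the GUTCP derivation itself, and the quoted reference values are approximate vacuum wavelengths.

```python
# Illustration of a no-adjustable-parameter test: hydrogen Balmer lines
# from the Rydberg formula, using only published constants.

R_INF = 1.0973731568e7               # Rydberg constant, 1/m (CODATA)
MASS_RATIO = 1836.15267              # proton/electron mass ratio (CODATA)
R_H = R_INF / (1 + 1 / MASS_RATIO)   # reduced-mass correction for hydrogen

def balmer_wavelength_nm(n):
    """Vacuum wavelength of the n -> 2 transition of hydrogen, in nm."""
    inverse_lambda = R_H * (1 / 2**2 - 1 / n**2)
    return 1e9 / inverse_lambda

# Approximate measured vacuum wavelengths (nm) for H-alpha through H-delta
measured = {3: 656.45, 4: 486.27, 5: 434.17, 6: 410.29}

for n, observed in measured.items():
    predicted = balmer_wavelength_nm(n)
    print(f"n={n}: predicted {predicted:7.2f} nm, observed {observed:7.2f} nm, "
          f"difference {100 * (predicted - observed) / observed:+.3f}%")
```

Agreement at roughly the 0.01% level, with nothing tuned, is the sort of outcome the first level of prediction demands.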
In fact, as shown later in this monograph, the GUTCP meets the quantitative prediction requirement regarding all atoms and ions, hence GUTCP remains a valid theory of atoms and ions, at least on the basis of its agreement with spectroscopy.
It is also shown later in this monograph that SQM does not meet this requirement, hence is no longer a valid atomic physics theory.
A second level of prediction is truly new and novel observations that are consistent with the theory. Several examples related to atomic structure are presented below regarding new observations which are, upon consideration, required or predicted by the GUTCP.
Among the currently unexplained phenomena that can be explained readily with the GUTCP and indeed are predicted are:
Dark matter (Hydrinos).
Unknown spectral lines in the EUV spectra of deep space.
Balmer series line broadening observed in terrestrial plasmas and in many stars.
Excess heat/energy production in select plasmas.
In contrast, as discussed in more detail below, none of these observations can be explained readily with SQM or any conventional physics models, and they are certainly not predicted.
Science Vs. Metaphysics
Another important layer in the validation process is clearly demonstrating the model is a scientific model and not a metaphysics model. The model of dark matter preferred by the scientific community is exemplary of the error of conflating science with metaphysics, as discussed in the final chapter of this book.
Metaphysics definition: any question or postulate that cannot be answered or tested using scientific observation, analysis, or experiment.
It is clear, even to those who only follow science in the general press, that dark matter cannot be readily explained by conventional physics, and it certainly was never predicted!
The most widely accepted explanation for dark matter not behaving in a conventional fashion (e.g., it does not absorb, reflect, or emit electromagnetic radiation) verges on metaphysics. Specifically, it is generally accepted that dark matter is non-baryonic matter. Non-baryonic matter? This is matter with mass, but without any of the other properties of known forms of matter.
Dark matter, according to this concept, does not need to follow the standard rules of matter as we know it, precisely because it is non-baryonic, hence unknowable.
Thus the non-baryonic postulate cannot be tested, as all available tests are developed for baryonic matter. To get around this conundrum, and attract large grants, as will be discussed in Chapter 10 of this monograph, some models propose dark matter has some baryonic matter characteristics. Assuming dark matter has a mix of knowable baryonic properties and “unknown” properties allows the design of complex experiments intended to prove the existence of dark matter. It is notable that all of these “existence tests” have failed. Despite failed tests, and dubious assumptions about mixed baryonic/non-baryonic properties, almost the entire physics community fully accepts the existence of non-baryonic matter as a “fact.”
For this monograph, it must be noted that there are several alternate models, generally disdained by the physics community, that explain the observations “requiring” the postulate of dark matter as arising from something other than non-baryonic matter. The most important herein is the postulate that dark matter is composed entirely of Hydrinos, per GUTCP, as will be discussed in a later chapter.
When the conventional physics community is functioning properly, in the event that no model is consistent with new observations, a plethora of new models, and model variants, evolve. Room for new models is created when all of the proposed models are found inconsistent with some element of data, or are found to be untestable.
For truly perplexing phenomena, generations of models, spanning decades, even centuries, are proposed and tested. Dark matter is an example of an observation that has been awaiting verified explanation for more than a century.
More on this topic will be presented in later chapters of this monograph.
Question: Is non-baryonic dark matter the new phlogiston?
In sum, one feature of a true scientific model is the ability to test it against observation. The postulate that dark matter is non-baryonic is metaphysics by this definition.
Question: do features of SQM, such as the probability distribution interpretation of the wave equation, infinitely small particles, the failure of Newton’s Laws at scales smaller than h-bar, the Correspondence Principle, etc. verge on metaphysics?
Model Consistency
Another layer in the process is to evaluate the consistency of a model with earlier, widely demonstrated, highly validated, models. Thus, a theory that is consistent with Newton’s Laws is more likely valid than one that breaks Newton’s Laws. Why? As noted previously by the author, it is because Newton’s Laws are exemplary of validated physics:
“Laws, and mechanics based on these laws, are demonstrated in first year student laboratories with measures of acceleration due to gravity, the behavior of both the elastic and inelastic collisions, the behavior of springs, the demonstrations of conservation of angular momentum, etc. These laws are employed to predict with great accuracy the orbits of the planets, plan the precise landing of spacecraft on Mars, explain vibrations and curveballs, permit design of stable structures (bridges, tunnels, submarines), model fluid flow, etc.”12
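As one small illustration of the kind of quantitative prediction referred to in the quote, the sketch below uses Newton’s law of gravitation, in the form of Kepler’s third law, to compute the orbital period of Mars from its mean orbital radius. The constants are standard published values, and the example is mine, not part of the quoted passage.

```python
from math import pi, sqrt

GM_SUN = 1.32712440018e20   # gravitational parameter of the Sun (G*M), m^3/s^2
A_MARS = 2.2794e11          # semi-major axis of Mars's orbit, m

# Kepler's third law, derived from Newton's law of gravitation: T^2 = 4*pi^2*a^3/(G*M)
period_seconds = 2 * pi * sqrt(A_MARS**3 / GM_SUN)
print(f"Predicted Martian year: {period_seconds / 86400:.1f} days (observed: ~687 days)")
```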
A similar litany can be provided for Maxwell’s equations. Early in the physics curriculum it is demonstrated that Maxwell’s equations quantitatively explain phenomena such as the following (one such calculation is sketched after the list):
The attraction and repulsion between charged particles.
The deflection of charged particles in a cathode ray tube.
The separation of ionic species in a quadrupole mass spectrometer.
The creation of voltage in a coil moving through a magnetic field.
The separation of light of different wavelengths by a prism.
The attraction of magnets.
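For the cathode-ray-tube item above, a minimal sketch of the classical calculation follows: an electron accelerated through a known potential is deflected by a uniform field between parallel plates, just as in undergraduate demonstrations and in Thomson’s e/m measurements. All of the numbers are illustrative values chosen for this example.

```python
from math import sqrt

E_CHARGE = 1.602e-19   # electron charge, C
E_MASS = 9.109e-31     # electron mass, kg

V_ACC = 2000.0                              # accelerating potential, V (illustrative)
v = sqrt(2 * E_CHARGE * V_ACC / E_MASS)     # horizontal beam speed, m/s

E_FIELD = 100.0 / 0.01    # 100 V across a 1 cm plate gap -> V/m
L_PLATES = 0.04           # length of the deflection plates, m
D_DRIFT = 0.20            # distance from the plates to the screen, m

a = E_CHARGE * E_FIELD / E_MASS              # vertical acceleration between plates (F = qE)
t = L_PLATES / v                             # time spent between the plates
y_exit = 0.5 * a * t**2                      # deflection accumulated inside the plates
y_screen = y_exit + (a * t) * (D_DRIFT / v)  # plus straight-line drift to the screen

print(f"beam speed: {v:.3e} m/s, deflection at screen: {1000 * y_screen:.1f} mm")
```

With these numbers the predicted spot displacement is about 22 mm, the kind of result that can be checked directly against a ruler on the face of the tube.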
Thus, Newton’s Laws and Maxwell’s equations are exemplary of validated physics. The likelihood a theory will be validated by experiment correlates closely with its consistency with earlier highly validated models.
Rating The GUTCP And SQM
How do we rate the GUTCP and SQM relative to their consistency with the highly validated Newton’s Laws and Maxwell’s equations?
In principle, the GUTCP is based on absolute adherence to the principles and mathematics of Newton’s Laws and Maxwell’s equations. That is the origin of the word “classical” in the name. The GUTCP analysis of atomic spectra is based entirely on the application of classical physics at all size and time scales.
Moreover, in the following chapters it is shown that it is no idle boast that the GUTCP, as applied to atoms and ions, quantitatively predicts spectroscopic observation using only pre-1872 physics with no variable parameters. Hence, it rates a perfect score in terms of consistency with classical, pre-1872, physics with respect to the model of ions and atoms.
In contrast, as discussed in detail in future chapters of this monograph, SQM is based by design, and admission, on a rejection of Newton’s Laws and Maxwell’s equations at the length scale of atoms and ions (“scale of h-bar”). SQM rates a perfect zero with respect to consistency with classical physics at the scale of atoms and ions.
As noted by K. Popper:
“An inconsistent theory is no theory at all.”
Is it perhaps the case that all of presently accepted textbook physics, which espouses two sets of laws, one at the quantum-size scale and the other at the macro-size scale, qualifies as inconsistent? Or perhaps a theory such as SQM, which reduces to equations that all agree are fundamentally unsolvable, and hence in practice employs a plethora of “approximations” for mathematical analysis of atoms and ions, approximations better described as independent theories, is inconsistent?13
The difficulties of SQM with validation criteria do not stop with its complete lack of overlap with classical physics. Although it might be argued with some reason that SQM is validated for hydrogen, clearly SQM applied to any atomic system with two or more electrons (M-SQQM) is NOT validated. As noted by this author elsewhere:
The accepted true quantum theory of multi-electron atoms, multi-electron standard quantitative quantum mechanics (M-SQQM) is not validated. Not only are there no undergraduate experiments designed to demonstrate this “true” model is “validated,” it is universal practice to label the mathematics of the model intractable. The recognition that the Hamiltonians of multi-electron systems are unsolvable leads to the standard practice of developing “approximate,” mathematically tractable, models to validate true quantum of multi-electron systems. Most of these models are also not consistent in any fashion with the quantum model of one-electron systems. To repeat, the true quantum of multi-electron systems is purportedly validated only indirectly by the success of approximate models…
…(Yet,) it is demonstrated that these approximate methods are mathematically distinct from true multi-electron theory. Hence, showing they lead to results consistent with observation does not validate M-SQQM for multi-electron systems. To rephrase: the results showing some agreement between approximate methods and measured values have no bearing on the (validation) status of true quantum, hence, it is an inescapable conclusion that M-SQQM is nothing more than metaphysics.14
Simplicity
Finally, theories can be judged on their simplicity. When presented with competing theories that produce similar predictions, the one with the fewest assumptions should be selected; the simpler theory is generally to be preferred. This principle is known as Occam’s Razor.
A discussion of this postulate must necessarily begin with the definition of a Simple Theory: a theory based on a central, identifiable notion that connects many observations and that can be tested.
Examples which illustrate this definition follow:
A theory (plate tectonics) which connects everything from earthquakes to mid-ocean ridges, to volcanoes, to tsunamis, meets the definition of simple.
A theory that postulates that a particular universal molecular structure (DNA), present in all cells of all living species, contains all the information that determines the traits of animals, plants, fungus, etc., as well as explaining the mechanism for transmitting traits through generations, is simple.
A theory, the GUTCP, accurately predicts every tested and recorded atomic and ionic spectrum using simple algebraic expressions derived from highly validated laws, e.g., Newton’s Laws and Maxwell’s equations, with no variable parameters. The theory requires one set of basic equations, and no variable parameters, to quantitatively predict phenomena at all scales from the nano to the cosmic.
Thus, by definition GUTCP is a Simple Theory.
And of course, there are the contrasting Not-Simple theories.
SQM, a theory that postulates one set of equations to explain phenomena at scales “larger than h-bar” (macroscopic) and a completely different set of equations to describe phenomena at the “less than h-bar” (atomic) scale, is not simple.
Not only are the equations different, the physics of the two scales is totally different in the non-simple SQM theory. A theory of atoms that requires six orthogonal dimensions for a two-electron system (e.g. helium) and nine dimensions of orthogonal space (discussed in Chapter 4) for a three-electron system (e.g. lithium) is not simple.15,16
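For readers wondering where those dimension counts come from: in standard quantum mechanics the N-electron wavefunction is a single function over the joint configuration space of all the electrons,

$$\Psi(\mathbf{r}_1, \mathbf{r}_2, \dots, \mathbf{r}_N):\ \mathbb{R}^{3N} \rightarrow \mathbb{C},$$

so a two-electron atom such as helium is described on a 6-dimensional space (3N = 6) and a three-electron atom such as lithium on a 9-dimensional space (3N = 9), rather than in ordinary three-dimensional space.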
SQM, a theory that involves equations too complicated and coupled to solve except with the use of massive approximations, is complex. SQM, a theory that can only be employed to match measured data by employing two or more adjustable parameters, is complex.
It’s not even clear that SQM as practiced is a theory at all, as will be discussed in Chapter 4. It is perhaps best described as a large set of peculiar methods to curve-fit data.
Chapter 2 Personal Notes
Note to the reader: each chapter will include my personal experience with Dr. Mills, the GUTCP, and Hydrino science.
As hoped, my meeting with Dr. Mills and the rest of the BLP team led to a grant from BLP (culminating in a complete report in late 1996) to study excess heat production, that is, heat arising from a “chemical process” beyond that possible based on standard chemistry.
The underlying notion was that if we at Penn State could establish the production of excess heat in an environment predicted to produce Hydrinos, it would offer indirect evidence in support of the Hydrino hypothesis.
I was given total freedom to undertake any experiment I chose to accurately determine if excess heat was observed in an environment the GUTCP predicted would lead to the formation of Hydrinos and the associated excess energy production.
This freedom was provided based on my demonstrated expertise in precision calorimetry. I had already established a record as the designer of Calvet/thermopile-type microcalorimeters with precision sufficient to measure the differential heat of adsorption, as well as the rough kinetics of the process, for oxygen adsorption on supported metal catalysts.
This work enabled us to provide the first mapping of the surface of bimetallic catalyst particles, and of the effect of various treatments on the composition of the outermost surface layer of the metal catalyst particles. This data was crucial in developing a model of the dynamics of catalyst particle structural and catalytic/chemical changes.
My team had also developed expertise in high temperature calorimetric studies, and we were able to demonstrate, using electric power calibration, that our 350 °C+ calorimeter was accurate to within a couple of percent of the total energy input. Two relevant published works are:
M. Klanchar, B.D. Wintrode, and J.A. Phillips, “Lithium−Water Reaction Chemistry at Elevated Temperature,” Energy Fuels 11(4), 931–935 (1997).
J. Phillips, M.C. Bradford, and M. Klanchar, “A Calorimetric Study of the Mechanism and Thermodynamics of the Lithium Hydride-Water Reaction at Elevated Temperatures,” Energy Fuels 9(4), 569–573 (1995).
We first employed the device to study the interaction of Li and LiH with water at elevated temperatures. This data was important for engineering against corrosion and for controlled production of energy from Li and LiH reactions with water for certain novel engine designs. It was this high temperature device we eventually adapted to the study of Hydrino formation.
Another reason for affording me total freedom was the already established record of the scientific community, or at least its (self-appointed) guardians, of acting to summarily dismiss all claimed measures of excess heat as just another example of cold fusion madness. The thinking, or more accurately the narrative of the attack on Dr. Mills and the GUTCP, clearly was: “Hydrinos, cold fusion, it’s all the same! They are simply versions of crank science, so there is clearly no need to understand or take it seriously!”
At the time Randy was unhappy with the entire cold fusion endeavor, hoping (quite accurately) to dissociate his theory from it. He sensed, I believe correctly, that cold fusion advocates did not pursue the issue in an entirely scientific manner, hence it became a punching bag for the orthodox members of the physics community, who portrayed the entire business as a pathetic caricature of science.
Randy and others noted the net social outcome of the cold fusion episode was that even the least capable felt entitled to derisively dismiss any challenge to physics orthodoxy. In fact, my observation is that they were applauded for it, and it provided a career boost.
My perspective on cold fusion: it is possible. My good friend, and senior spokesman for the LENR (Low Energy Nuclear Reaction) community, Ed Storms, tells me there is sound evidence of excess heat and other signals that indicate an unknown phenomenon, cold fusion/LENR, is taking place under the proper circumstances in a repeatable manner.
It is also possible that Hydrino formation is mistaken for a form of fusion. That is, the formation of Hydrinos could explain excess heat measured in some cold fusion cells.
What else might Hydrino formation explain? Once, while enjoying beer and calamari in Randy’s favorite Princeton Bar, he speculated Hydrino formation could explain “hot spots” creating volcanoes from the Hawaiian Islands to Yellowstone. He postulated it might even contribute to tectonic plate melting deep in subduction zones.
Another scientific community argument for not taking Hydrocatalysis Power (the original name of Brilliant Light Power) seriously was that it was a private company, not taking government grants, hence not subject to the rigorous review required to earn a government grant.
It was (and still is) outside the Church of Science!
And, from the community perspective, like all self-interested private companies, BLP was likely to bend the truth to gain financial support. Clearly, scientists seeking government grants and notoriety would never do that!
It was our judgement that to overcome this cancel-culture style of objection to new physics theories, our team at Penn State simply could not take any instruction regarding the design of the study from BLP. It was also our judgement that a truly independent investigation might gain acceptance in some small corners of the scientific community.
I feel compelled to note, as an aside, that I fully reject the notion that government funded research is more “objective” than privately funded research. Anyone paying attention during the pandemic may have noticed that the leadership of government institutions like the CDC and NIH demand adherence to doctrine as a prerequisite to obtaining research funding.
Independence from BLP directives was not enough. The attitude of the scientific community toward claims of excess heat dramatically soured during the latter part of the cold fusion epoch. Faculty, young and old, faced opprobrium and severe repercussions (possibly no grants, no students, no invitations or journal editorial posts, even public scorn at meetings) for mere consideration of the possibility of cold fusion, or any other related phenomena.
I decided to go underground. I even considered this as the title for the monograph: “The Hydrino Hypothesis: Notes from the Underground.”
It was stipulated that the results of these Penn State efforts could be quietly shared by BLP with possible investors, or other parties with signed Non-Disclosure Agreements, but the work was not to be openly available. I certainly was not going to submit the work to any journal. It was not the time in my career to go tilting at windmills. I needed government grants to survive! Ironically, it was the government, not private funding, that caused me to put shade over the truth…
It is notable that those uninterested in tilting at windmills should continue to avoid even the appearance of objectively studying the predictions of GUTCP. Even today, the leading journals, and science societies, ferociously guard against any “misinformation,” particularly correct “misinformation,” on this topic even from the most seasoned and respected scientists.
So many physicists, so few objectively pursuing the truth. More on this later.
Finally, regarding the outcome of those experiments: excess heat was repeatedly observed. Notably, excess heat was only found under those conditions where all the elements predicted to yield Hydrino formation were present. The thorough control studies, designed such that at least one required element for Hydrino formation was missing, showed no excess heat.
In fact, the excess power observed under predicted Hydrino formation conditions was generally more than 10% greater than the power put into the reaction cell. A deviation of 10% from the baseline was several times (~5X) above the inherent uncertainty in the instrumentation.
Indeed, the numbers indicated greater than 99% certainty that the energy was in excess of that which could be explained with standard theory. Moreover, the excess energy, simply the integral over time of the excess power, never lessened as long as hydrogen fuel flowed into the cell and all the other elements required for Hydrino production were present. Unfortunately, 10% excess power is not even close to justifying a commercial process of energy production (the reader should note the technology has advanced considerably since 1996 and is now producing many orders of magnitude more excess power).
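The arithmetic behind the “~5X” and “greater than 99%” statements can be illustrated with the rough figures quoted in this chapter: roughly 10% excess power measured with a calorimeter calibrated to within a couple of percent of the input. The sketch below is my own back-of-the-envelope reconstruction using those round numbers, not a calculation from the original data.

```python
# Back-of-the-envelope significance of a ~10% excess-power reading against
# a ~2% instrument uncertainty. All numbers are illustrative, taken from
# the rounded figures quoted in the text.
from math import erf, sqrt

input_power_w = 100.0     # hypothetical cell input power, W
excess_fraction = 0.10    # ~10% excess observed under Hydrino-forming conditions
sigma_fraction = 0.02     # ~2% calorimeter uncertainty (electric power calibration)

excess_w = excess_fraction * input_power_w
sigma_w = sigma_fraction * input_power_w
z = excess_w / sigma_w    # ~5, matching the "~5X above the uncertainty" statement

# One-sided probability that a true null (no excess) would read this high
p_null = 0.5 * (1 - erf(z / sqrt(2)))
print(f"excess = {excess_w:.1f} W, sigma = {sigma_w:.1f} W, z = {z:.1f}")
print(f"confidence the excess is real (one-sided): {100 * (1 - p_null):.4f}%")
```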
The conclusion I drew from our early calorimetric experiments: we had failed to debunk GUTCP. It remained a viable theory. As noted above, one cannot “prove” a scientific theory. In fact, the best one can do is fail to debunk a theory.
Moreover, the experiments suggested reasons to doubt orthodox physics and chemistry as no variation on standard physics or chemistry we could conceive of, and we conceived of a bunch, offered any explanation for our observations.
It made me a bit uncomfortable. I was not ready for the revolution.
T.S. Kuhn, The Structure of Scientific Revolutions, 3rd ed. (University of Chicago Press, Chicago, IL, 1996).
D.G. McCullough, The Wright Brothers (Simon & Schuster, New York, 2015).
M.T. Greene, Alfred Wegener: Science, Exploration, and the Theory of Continental Drift (Johns Hopkins University Press, Baltimore, 2015).
J.H. White, The History of the Phlogiston Theory (AMS Press, New York, 1973).
https://www.britannica.com/science/atom/Development-of-atomic-theory
E.A. Davis and I.J. Falconer, J.J. Thomson and the Discovery of the Electron (Taylor & Francis, London; Bristol, PA, 1997).
J. Gribbin, Erwin Schrödinger and the Quantum Revolution (Bantam Press, London, 2012).
D. Bohm, and B.J. Hiley, The Undivided Universe: An Ontological Interpretation of Quantum Theory, Repr (Routledge, London, 2003).
K. Popper, The Logic of Scientific Discovery (Routledge, London, 2010).
T. Gold, The Deep Hot Biosphere (Copernicus, New York, 1999).
J. Phillips, “Reconsidering the validation of multi-electron standard quantitative quantum mechanics,” Phys Essays 27(3), 327–339 (2014).
J. Phillips, “Increasing Exclusion: The Pauli Exclusion Principle and Energy Conservation for Bound Fermions Are Mutually Exclusive,” Phys Essays 20, 564 (2007).
J. Phillips, “Reconsidering the validation of multi-electron standard quantitative quantum mechanics,” Phys Essays 27(3), 327–339 (2014).
Ibid.
J. Phillips, “Increasing Exclusion: The Pauli Exclusion Principle and Energy Conservation for Bound Fermions Are Mutually Exclusive,” Phys Essays 20, 564 (2007).
And yet the Aspect “proof of quantum” is clearly just classical physics conservation of momentum.
In the Mach-Zehnder interferometer single-photon experiment, one of the two paths leaving the first beamsplitter can be blocked, and doing so changes the behavior of the single photons on the other path when they arrive at the second beamsplitter that recombines the two paths. In classical physics a single photon cannot be split into two photons of the same wavelength (let alone two that follow different paths), so this is inconsistent with classical physics. This isn’t to say the GUTCP isn’t revolutionary in other ways, that it may not predict lower energy states of hydrogen, or that the reported coefficients of performance exceeding 2 for sustained periods are wrong. But it is important to admit the difficulty this experiment presents for the GUTCP.
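For concreteness, the standard quantum-optics prediction referred to above can be reproduced with a simple two-mode beamsplitter model: with both arms open a single photon always exits one output port (full interference), while blocking one arm removes the interference and splits the surviving counts equally between the two detectors. The sketch below is that textbook calculation, included only to make the stated difficulty explicit; it is not a GUTCP calculation.

```python
import numpy as np

# 50/50 beamsplitter acting on the two path amplitudes (path a, path b)
B = (1 / np.sqrt(2)) * np.array([[1, 1j],
                                 [1j, 1]])

def mzi_probabilities(block_path_b=False):
    amp = np.array([1 + 0j, 0 + 0j])   # single photon enters in path a
    amp = B @ amp                      # first beamsplitter
    p_blocked = 0.0
    if block_path_b:
        p_blocked = abs(amp[1])**2     # photon absorbed by the block
        amp[1] = 0                     # only the path-a amplitude continues
    amp = B @ amp                      # second (recombining) beamsplitter
    return p_blocked, abs(amp[0])**2, abs(amp[1])**2

for label, blocked in (("both paths open", False), ("path b blocked ", True)):
    p_blk, p_d1, p_d2 = mzi_probabilities(blocked)
    print(f"{label}: blocked={p_blk:.2f}  D1={p_d1:.2f}  D2={p_d2:.2f}")
# both paths open: blocked=0.00  D1=0.00  D2=1.00
# path b blocked : blocked=0.50  D1=0.25  D2=0.25
```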