Aeon.co/magazine/ recently published a very thought-provoking article by Adrian Kent that delves into a conundrum currently faced by physicists: a problem that revolves around a logical and empirical shortcoming of quantum mechanics. The problem can be summarized thus:
Although quantum theory has very elegant mathematical models that clearly indicate what the result will be when certain particles interact at given energies (via accelerators), they offer only very perplexing abstractions as to what is actually occurring when these interactions take place; abstractions that don't give any kind of satisfying notion of the underlying reality.
The example he gives is quite
intriguing:
“...To
calculate what outcomes we might expect when we fire protons at one
another in the Large Hadron Collider, we need to analyze what – at
first sight – look like many different stories. The same final set
of particles detected after a collision might have been generated by
lots of different possible sequences of energy exchanges involving
lots of different possible collections of particles. We can’t tell
which particles were involved from the final set of detected
particles...”
And the problem is more than simply having a known list of events whose relative probabilities of occurrence you can't deduce from your test results.
“...Quantum
theory isn’t like this, as far as we presently understand it. We
don’t get a list of possible explanations for what happened, of
which one (although we don’t know which) must be the correct one.
We get a mathematical recipe that tells us to combine, in an elegant
but conceptually mysterious way, numbers attached to each possible
explanation. Then we use the result of this calculation to work out
the likelihood of any given final result. But here’s the twist.
Unlike the mathematical theory of probability, this quantum recipe
requires us to make different possible stories cancel each other out,
or fully or partially reinforce each other. This means that the net
chance of an outcome arising from several possible stories can be
more or less than the sum of the chances associated with each.
To get a sense of the conceptual
mystery we face here, imagine you have three friends, John, Mary and
Jo, who absolutely never talk to each other or interact in any other
way. If any one of them is in town, there’s a one-in-four chance
that this person will bring you flowers on any given day. (They’re
generous and affectionate friends. They’re also entirely random and
spontaneous – nothing about the particular choice of day affects
the chance they might bring you flowers.) But if John and Mary are
both in town, you know there’s no chance you’ll get any flowers
that day – even though they never interact, so neither of them
should have any idea whether the other one is around. And if Mary and
Jo are both in town, you’ll certainly get exactly one bunch of
flowers – again, even though Mary and Jo never interact either, and
you’d have thought that if they’re acting independently, your
chance of getting any flowers is a bit less than a half, while once
in a while you should get two bunches...”
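To make the contrast concrete, here is a minimal sketch in Python. It is my own illustration, not anything from Mr. Kent's article: each friend's "brings flowers" story is treated classically as a probability of 1/4, and then quantum-style as an amplitude of magnitude 1/2, with John's amplitude given the opposite sign purely so the numbers reproduce the behaviour in the analogy.

# A toy comparison of classical probabilities vs. quantum-style amplitudes.
# The amplitudes are illustrative choices, not physics: each "brings flowers"
# story gets magnitude 0.5 (so 0.5**2 = 0.25, the one-in-four chance), and
# John's story carries the opposite sign to Mary's.

# Classical, independent friends (Mary and Jo both in town):
p = 0.25
p_at_least_one_classical = 1 - (1 - p) ** 2   # 0.4375, "a bit less than a half"
p_two_bunches_classical = p * p               # 0.0625, the occasional two bunches

# Quantum-style recipe: add the amplitudes for the stories, then square.
amp_mary, amp_jo, amp_john = 0.5, 0.5, -0.5
p_mary_and_jo = abs(amp_mary + amp_jo) ** 2      # 1.0 -> exactly one bunch, always
p_john_and_mary = abs(amp_john + amp_mary) ** 2  # 0.0 -> no flowers at all

print(p_at_least_one_classical, p_two_bunches_classical)
print(p_mary_and_jo, p_john_and_mary)

The only point of the sketch is that summing the stories before squaring lets them cancel or reinforce each other, which ordinary probability never allows.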
I love thinking about this kind of stuff, and I am so thankful that people like Mr. Kent take the time to frame the issues, as much as they can, in terms other than straight mathematics. Which is, of course, my cue to segue into the fact that not only am I not a physicist, I am also someone who was only able to get past Algebra II in school. That being said, I want to be clear that my response to this conundrum comes from the perspective of a systems analyst and part-time philosopher.
I believe there are two main points that must be considered for anyone to come to a better understanding of what is actually happening between the start and end of these types of particle experiments. The first is that there will be limits to what can be tested, if for no other reason than that any physical instrumentality we might devise to use as a measurement mediator will always be a less-than-perfect translator; which is just another way of saying that the precision will always have an upper limit. Because of this, at some point, everybody is going to have to take a description on faith. Let us be clear, though. This faith does not necessarily have anything to do with a deity. It will be a faith based entirely on a process description that resonates. In other words, it will be something that simply feels right.
The other thing that seems to me to be missing from the cosmological view of science is that there isn't enough consideration of what a singularity is in relation to the entirety. Obviously the assumption in the standard view is that the entirety, and the singularity which formed what we now do our experiments in, are the same thing. And I have to say that this has always seemed quite wrong-headed to me, especially if one considers that parallel realities are feasible.
It
seems to me that the entirety is a thing of infinite potential and
complexity. In a sense it is its own, unbounded, singularity that,
like any complex system, allows for infinite boundaries within, but
therein (all puns intended) lies the rub. Just as important as the
how and why of a particular singularity is this: What is the basis of
boundary resolution in the first place? After all, how can there be a
single anything without the means to make distinctions? And if you
can have one distinct thing, why can't you have many; with none of
them representing any kind of primary starting point?
I'm getting ahead of myself here, so I'm going to take a step back for a bit. I want to provide you with some perspective on why I see things the way I do. And to do that I need to delve into the intellectual who probably had the greatest impact on me: Marshall McLuhan. In particular, my study of Mr. McLuhan has left me with a deep fascination with the idea of “gap.” You can't read him and not get caught up in it, at least a little.
In one instance he described the newspaper, after the widespread use of the telegraph and up until television, as something you climbed into as a total immersion. It was this way because of all the stories collected from far and wide and laid out in a mosaic of juxtaposition. It demanded immersion precisely because elements in such proximity, even if only subconsciously, demanded some kind of completion: a linkage that would give them meaning.
In another instance he described the derivation of the word symbol: that it came ultimately from the word symballein, which denoted the bringing together of two halves. The idea was that a stick or clay object was broken to stand for an agreement of some sort. Bringing the halves back together completed this agreement.
Later on, as print ads became more sophisticated and the means to make images matured, the arrangement of juxtaposed items became much more subtle. As such, the pull toward these engineered completions was as effective as it was imperceptible to the conscious mind.
In “From Cliché to Archetype” he further elaborated on the ultimate elements of popular culture, and the means to take advantage of them. He described it as a constant process: new archetypes are created, quickly decay into clichés, and are discarded on the “midden heap” of culture, only to be resurrected by the clever message maker banging old ones together in new ways. Thus were created advertising slogans such as “The wrath of grapes” to sell wine, or, as I tried to do with an updated take on Paul Goodman's “Growing Up Absurd”, an essay with the title “The Absurdity of Growing Up.”
I mention all of this because it served to frame, for me, an important context for how meaning itself worked. How words could only be expressed as combinations of other words. How language in its own right was an important component of the development of consciousness, and of a person's sense of self. After all, the very process of separating the immense welter of sensory input we are initially assaulted with as infants into objectified terms works to introduce, and then solidify, the distinction of outer and inner, of me and everything outside of me. The self as a singular point of reference was then easy to see.
Once I started going down this road in a big way, a lot of other concepts started to click for me. All meaning, it seemed to me, was both personal and social, combining the immense process of experience association within the matrix of lower brain and higher brain: the chemicals, the emotions, and the interactions of a young, vulnerable animal as it matures within a family group, while that group interacts with the larger society that contains it; coming, in the process, to understand the difference between subjective and objective interpretations of what is experienced.
All of the above is just a more concrete way of indicating what systems theory is about. It is also important in seeing why systems theory argues against reductionism. Families of organizational structure, each working as a whole, can create unexpected feedback channels between the different families. That is precisely what makes complex systems so complex, and why a butterfly flapping away somewhere can affect the weather some significant distance elsewhere.
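To see that small-inputs, large-effects point in miniature, here is a minimal Python sketch of my own (not tied to anything in Mr. Kent's article): it iterates the logistic map, a standard toy chaotic system, from two starting values that differ by one part in a billion, and within a few dozen steps the two trajectories bear no resemblance to each other.

# Logistic map x -> r*x*(1-x) in its chaotic regime (r = 4.0).
# Two starting points differing by one part in a billion diverge completely.
r = 4.0
x_a, x_b = 0.2, 0.2 + 1e-9

for step in range(1, 51):
    x_a = r * x_a * (1 - x_a)
    x_b = r * x_b * (1 - x_b)
    if step % 10 == 0:
        print(step, round(x_a, 6), round(x_b, 6), round(abs(x_a - x_b), 6))

Nothing about this little map is physics; it is only meant to show how quickly a system with simple feedback can amplify a difference too small to measure.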
Another significant aspect of systems theory has to do with initial conditions, especially as they concern chaotic systems, which are a subset of complexity. Initial conditions and singularities are wonderful things to put into juxtaposition; no less so when you throw in point of reference, and a deeper understanding of relativity. It all pushed me toward a very specific way of thinking about the entirety, especially once I started to consider how the arrow of time might allow for such an organized uniformity of boundary resolution. That, and discovering the controversial Anthropic principle, had me off and running.
For
me, there are two primary aspects to the entirety: The elemental
embrace, and Mind.
The elemental embrace should be obvious. Whether you want to call it Love, or the Higgs boson, or simply the fundamental necessity to come together and exchange, it still pretty much refers to the same thing. It can't exist by itself, though, if we don't have boundaries. If you don't have boundaries, certainly, you can't have quanta to begin with, inasmuch as any quantum is simply the bounded measurement of some interaction. More to the point, though, is the fact that without boundaries there can be no information. In order for there to be information there also needs to be gap; which of course is only another way of saying the interval between two distinct objects.
That being said, there has to be mind of some form or another involved here. Sentience is essential to the entirety precisely because there can be no point of reference unless there is a systematic meaning organizer from which to have the initial thing that everything else can be relative to (and thus our initial conditions). The arrow of time is simply a vector of experience association in which meaning can be created because the specifics of boundaries have been resolved in a particular way. That is how those few, very important, numbers referred to in the Anthropic principle get resolved.
Under this view there are always an infinite number of singularities starting new vectors of association, because meaning begets mind and mind begets meaning. On the whole, we can think of the entirety as an infinite Question/Answer engine. The question forms by the very nature of gap and objects, as they always imply a possible meaning. The answer creates a new system state, and thus a new gap, or gap set, and the process continues. Taken together, all of the infinite vectors of association are a kind of reality ray tracing to systematize the gestalt of state change.
As I said before, this is something I take on faith because it feels right. I find it very helpful in everyday life because it suggests to me that each and every one of us does matter. That by interacting with, and understanding, as much of our Cosmic Vector as we can, we add meaning to the whole. And it seems only a small stretch to me to add the notion that doing this in a loving way will add more loving structure. I will keep this faith until a better idea comes along, as it surely will.
But, to wrap this up, we still need to return to Mr. Kent's original conundrum. And to do that we need to consider the two main aspects of what physicists are doing in this type of experimentation; namely, particle accelerators and the fact that they smash things together in order to understand what the underlying structures and relationships are.
First of all, let me state that I am quite certain that, in and of themselves, accelerators can provide useful information. As Mr. Kent's piece clearly states, this is already well established. What I find disappointing, however, is that not enough circumspection has been applied to their utilization. The freewheeling nature of this utilization suggests a hubris towards what they are smashing away on that is unbecoming, to say the least, of people who are ordinarily very thoughtful and caring.
I say this precisely because I come from a systems analyst's point of view. Understanding and working with complex systems was how I earned my living in IT. But even non-systems people now know about small inputs having large effects. It may seem like the energies now being produced are still pretty small potatoes as far as the Cosmos is concerned, but let's not forget an important fact: this is the mother of all complex systems we're talking about. How do we know how small is small enough before its unintended effects become an issue? By the same token, how can you know that your smashing doesn't change, fundamentally, what you are trying to measure? As various light-quanta experiments have indicated, how you test a thing can determine the results you are going to get. Where is the baseline of test results showing particle interactions where there was absolutely no human involvement (or at least as little as can humanly be achieved) in either the acceleration itself, or in what was smashed into in the first place?
In any case, though, that outlines the first aspect of accelerators that I wanted to make note of. The second has to do with the energy levels involved in the interactions.
When I talk about Quantum Mechanics, and Cosmology as well, I like to refer to the notion of “scales of consideration.” In the macro sense, obviously, you'd be talking about solar systems, galaxies, galaxy clusters, and so on. On the micro scale, on the other hand, it would be the energies required either to resolve the electromagnetic returns, or to pick apart the constituent interactive entities. As we have seen in the relatively brief history of these devices, every time an object then thought of as the final core was subjected to a new, higher energy, a new layer of inner interaction was found.
Setting aside the already described, possibly experimenter-created uncertainties of such experiments, you have to wonder whether there is some upper limit of interactive force at which, given the number of scales of consideration crossed, the ability to extrapolate useful conclusions about structure becomes so entangled with uncertainty as to make those conclusions simply wishful conjecture.
Let's review Mr. Kent's analogy. Physicists use a human-induced collision of one proton with another to produce a given pattern of particle ejecta. There are a lot of explanations of what might be going on inside those collisions to produce the indicated ejecta. Their elegant equations give them a recipe for working with numbers assigned to each possibility; a kind of differential summation, rather than ordinary statistical probability, from which to work out the likelihood of any given final result. And then, of course, he gives a further analogy to give a more intuitive sense of the conundrum.
What if the energy/mass translation at that unbelievably brief moment of impact created a fuzzy event horizon similar to the one Mr. Hawking just published on for black holes (those pesky boundaries again)? Something through which any interactions might be distorted by the mere fact of gap (or duration) inconsistencies due to that very distortion of space-time; distortions that would impact the output ejecta in ways that could only be conjectured. Light gets bent by gravity in nominal scales of consideration in predictable ways; that is, until the gravity becomes so great that it changes the relationship of boundary and gap from the norm of our arrow of association. The very nature of the information channel may be affected in ways that will require some kind of imaginary entropy, and a multidimensional recursion matrix, to encode with the proper noise reduction; much as imaginary numbers had to be invented, but with hybrid fractals. And even then it might not be enough, as there may well be no way to verify it with any truly objective form of experiment.
We are inside the very system from which we are trying to gather all of the information we need to model it. In many ways this is similar to the idea floating around now that we are all in a simulation created by some future uber-programmer. The question I have is: how could that programmer get all of the information he needed for a sufficiently high-fidelity recreation of reality if he were not outside it, and thus able to apply truly objective measurement to the constituent families of component interaction? And in this context, it has always seemed to me that Gödel's incompleteness theorem should apply here, at least in some fashion; but I'm an outsider looking in, so it's quite likely that I have at least parts of these different ideas wrong, if not all of them. At least to this lowly systems analyst, however, it does seem plausible.
Be that as it may, even if I'm only close, and the answer to any of the questions posed is ever yes, then Mr. Kent and his fellow physicists may well need to consider putting faith into their toolkit. At which point they are going to have to find an explanation that feels right. It probably won't be mine, but I do hope they are able to find one.