Center for Strategic Assessment and Forecasts

Autonomous non-profit organization

On the relationship between the macrocosm and the microcosm: the universe, the fine structure constant, quantum space-time
Publication date: 01-12-2016

The current state of scientific ideas about the origin and development of the Universe can be described, if not as an outright crisis, then certainly as a pre-crisis state. The dominant physical concept today is the Big Bang followed by the inflationary expansion of the resulting Universe. But the reality of the Big Bang is questionable. Notably, the British astronomer Fred Hoyle, who coined the term "Big Bang", did not himself believe in the hypothesis. It creates many paradoxes but does not explain them, and among them there are even logical paradoxes. For example, if the Universe is infinite, how can it be that infinity expands?

Physical paradoxes have appeared as well. On the one hand, observations of the dynamics of stars in galaxies and of galaxies in clusters have shown that their own mass is not enough to maintain gravitational stability; this implies the existence in the Universe of some additional matter (called "dark matter") that participates in gravitational attraction.

On the other hand, studies of the redshift at relatively short distances of 10^5 to 10^7 light-years, together with observations of flares of distant supernovae, show that the expansion rate of the Universe increases with time. This required the introduction of another factor, "dark energy", with anti-gravitational properties, which drives the Universe to expand ever further.

It was found that the total mass of "dark matter" and "dark energy" is about 24 times the mass of the observed matter. According to the modern adherents of the Big Bang theory, matter is distributed as follows: 74% dark energy, 22% dark matter, and 4% ordinary matter (star systems, intergalactic gas).

In other words, current models of the Universe are based on information, and incomplete information at that, about only some 4% of its content. It is not surprising that evaluations based on such models can show almost anything. Not to mention that 10^5 to 10^7 light-years is a negligible distance compared with the size of the Universe, a range over which various distorting factors may come into play.

So physicists keep inventing new entities in order to best explain the observed physical effects, adjusting them to fit their model. Such entities are "dark" matter and energy, together with the assumption that they manifest themselves only through gravity and in no other way, i.e. so as not to spoil the basic tenets of the inflationary theory.

In recent years it has also become fashionable to invent extra dimensions of space-time that appear only on small scales. On the one hand, this is understandable: the paradoxical conflict between coordinates and the uncertainty principle must somehow be explained. On the other hand, one can of course reconcile theory and reality by introducing additional degrees of freedom. But then the problem is merely pushed down to another level, since it will still be necessary to explain why these parameters take exactly these values and not others. The situation is roughly the same as in the Standard Model.

We should probably start from the basic physical facts about the Universe that contemporary science possesses. These facts are not so numerous, and the main ones are the following.

First, the redshift. Explaining it by the Doppler effect, Hubble concluded that the galaxies are receding from one another. But, as it turns out, not all of them do: some are attracted to each other and even collide. And the Hubble constant is not decreasing, as Hubble himself predicted, but growing, as recent measurements show.

Second, the cosmic microwave background (CMB), a possible witness to and participant in the Big Bang, corresponds to a state of matter at a temperature of 2.7 K. In the Big Bang theory it is believed that the hot CMB originated in the initial moments of the formation of the Universe and then, as the Universe grew, gradually cooled; at present it evenly fills the entire Universe. This accords with a basic tenet of modern cosmology, the cosmological principle, according to which every observer at the same moment of time, regardless of location and direction of observation, discovers on average the same picture of the Universe.

But here a paradox arises. According to information obtained with the Hubble Space Telescope, analysis of the propagation of radio waves from 160 distant galaxies showed that the data do not agree with the postulate of the isotropy of space. As the science writer Wilfred Noble put it in The New York Times, space is not the same in all directions: in fact it has a "north" and a "south", a "bottom" and a "top", a "west" and an "east". This means that the Universe may not be the same in all directions, and the cosmological principle, and with it the Big Bang theory, may be untenable.

In other words, the cosmological principle is a convenient postulate that may hold in relatively small regions of the Universe, for example within a galaxy or a group of galaxies, but need not work for the Universe as a whole.

Therefore it is premature for both supporters of the Big Bang theory and its critics to draw final conclusions. More research is needed, and one direction for such research is the development and analysis of new, alternative ideas about the Universe. Moreover, it is clear that these ideas should be based on the relationship between the microcosm and the macrocosm.

Below, one possible option is described. At first I did not intend to publish anything, but for my own entertainment I carried out some rough calculations, rather like solving a charade. Then some of the results seemed so curious, and the resulting relations so amusing, that I ventured to submit them for publication.

Perhaps someone will find them useful. Perhaps someone, while criticizing them, will think through these and their own arguments and arrive at a new understanding of what as yet nobody knows.

And if some venerable scientist finds this treatment awkward, I ask them not to judge too harshly, but to understand and, as the saying goes, to forgive.

Let me especially emphasize: everything presented below is only an estimate of ratios, and should be treated as such. Keep in mind that we will be dealing with relationships between physical quantities that differ by many tens of orders of magnitude. The ambition to obtain exact figures would therefore look presumptuous.

But this drawback has a positive side: estimates allow the use of fairly simple calculations that can be followed by the wide readership of this semi-popular article.

So that the merits of these estimates are as clear as possible, the following section presents the main propositions of the concept under consideration, here called the "model of the Universe". It will partly repeat what was written in [1].

Model of the Universe

Modern physics is currently in another pre-revolutionary condition. Most revolutions in physics have been based on the review and refinement of the concepts of "space", "time" and "matter-energy".

But what does it mean to clarify or explain? It means to reduce a new concept to something already known and more fundamental. The most fundamental concepts, which by definition can no longer be reduced to others, must simply be postulated; explaining them is impossible in principle.

In my opinion, the most fundamental concept is "matter-energy", while "space" and "time" are derived from it.

Energy can exist in concentrated form, forming matter, or in the form of fields. Several types of fields are currently known: the electromagnetic field and the weak field (together forming the electroweak field), and the strong nuclear (gluon) field. Sometimes the gravitational field is added to this list, but since no quantum theory of gravity yet exists, gravity is considered a manifestation of the properties of curved space-time.

All these fields are quantized, i.e. the interaction of fields is represented as the interaction of particles, the quanta of the field.

In their practical life, which shaped their view of the world, people have dealt with the electromagnetic field. It manifested itself as sunlight, heat, and electrical processes in the atmosphere (lightning discharges). All phenomena at the molecular and atomic level, all chemical processes in the human body, are in essence various manifestations of the electromagnetic field. In other words, throughout the long history of human emergence and development, people dealt only with the electromagnetic field. And, of course, with gravity.

Naturally, this determined our way of perceiving the world: everything, visible and invisible, shaped by the electromagnetic field came to be designated abstractly as "space". One can therefore draw the correspondence "electromagnetic field" to "space". Then something happened that logically had to happen: with the development of abstract thinking, space was abstracted from matter and came to be regarded as an independent entity. Geometry appeared, and the evolution of matter began to be considered in this abstract space, existing in itself, independently of matter.

But in atomic nuclei, in addition to electromagnetic forces, there are also nuclear forces. How do they change our ideas about space?

Indeed, the nuclear force is another field. But in atomic nuclei these forces are comparable to the electromagnetic ones: it is the comparability of these two forces that determines the equilibrium and ensures the stability of atomic nuclei. Therefore it is to be expected that at the nuclear level our idea of space should not change significantly. Which is what happens in practice, at least within the precision of modern measurements.

The bell announcing further doubts, and the subsequent revolution in physics, was the special theory of relativity. The result was a revision of the basic properties of time and space at near-light speeds. Light is an electromagnetic field. It is no coincidence that the change in ideas about space and time occurred through a deeper analysis of precisely electromagnetic phenomena.

General relativity directly tied space and time into one entity, "space-time", whose properties are determined by matter and energy and which does not exist by itself. Gravity was thus explained geometrically: not as a field, but as the curvature of space-time.

Then more. A few years later quantum mechanics appeared. From the point of view of changing concepts of space and time, it is important to mention one of its key principles, the uncertainty principle. As we know from geometry and classical physics, the three spatial coordinates and one temporal coordinate are introduced so that measurements are unambiguous. But if, as quantum mechanics shows, it is impossible to simultaneously and exactly measure the position and momentum of an elementary particle (or other conjugate pairs: time and energy, angle and angular momentum, etc.), this means that something about such coordinates is fundamentally wrong.

Again: coordinates are introduced precisely for the uniqueness of measurements; that is what they are for! If a definite measurement cannot be made, such a coordinate system is untenable by definition.

Something is rotten in the state of Denmark! This means that the real world is more complicated than the ideal 4-dimensional world, even one bent by gravity; at least in the microcosm, at very high energy densities, or on very small scales. If one can speak of space there at all, its properties may differ from those of the ordinary world familiar to us all.

Now about the time.

What time is, no one knows for sure, although it enters as a parameter into all the equations of fundamental physics, and everyone glances with ease at various chronometers without looking into their insides. The best-known definition of time is through chaos: the more chaotic a situation, the further it is from some initial point. The measure of chaos is entropy. Therefore the "arrow of time", showing its direction, coincides with the growth of entropy.

In a physical system there can be three types of physical processes: with growing chaos, with diminishing chaos, and periodic (in reality, quasi-periodic). The latter are usually used to measure time. But measuring something does not always mean understanding its physical essence. Example: in France the standard of mass, the kilogram, is kept. Mass can also be measured in tonnes, grams, ounces, etc. That is simple. But to understand what mass is and where it comes from, a Higgs boson was required.

Another example for clarity: the degree of confidence in the existence of Almighty God cannot be measured by the number of prayers offered in His glory.

Since chaotic processes still prevail in the world, time can be understood through chaos. But chaos in a rarefied gas is one thing, and chaos in a system with densely packed atoms (for example, diamond) is another. There, chaotic motion is much less pronounced. In a sense, one may say that in diamond its own time flows substantially more slowly than in the surrounding air.

Of course, ideas about chaos can be used to understand time. But, in my opinion, it is clearer to perceive time through certain rhythms of the electromagnetic field.

Now let us try to imagine the state of the Universe at the beginning of its development, i.e. at what is commonly called the Big Bang: a huge energy density, no atoms, only elementary proto-particles in which nothing is happening (otherwise the particles would not be elementary).

So there was no chaos in our usual understanding, nor the time to which we are accustomed. It is possible that some randomness existed, but its description may require not a single parameter in the form of the time familiar to us, but a certain set of parameters, each possibly related to an indicator of chaos, and to each other, non-linearly. That is, translating all this into ordinary language, the analogue of time could be multi-dimensional and curved.

At the high energy densities typical of the Big Bang, the 4 or 5 known types of fields all merge into a single field. One can offer this analogy: if a pot contains uniform water vapour and you begin to cool the bottom strongly and rapidly with liquid nitrogen, the following will occur: at the bottom a layer of ice forms, above it a layer of water, and at the top steam remains. I.e., under a sudden change of conditions the homogeneous vapour separated into three phases. Something similar happened with the unified field: on "cooling" it separated into all the known fields.

In other words, in the initial state of the Universe there could be no electromagnetic field. And therefore neither space nor time in our usual sense could exist. That is, in the initial state of the Universe there was none of that good old "space-time" to which we have long been accustomed.

It is therefore strange to read detailed descriptions of what happened 10^-43 or 10^-35 seconds after the Big Bang. Such a second could not exist in principle! Nor could anything explode and expand into a surrounding environment, because space did not exist.

Incidentally, to get a feel for what 10^-43 s means, one can estimate the time it takes light to cross a distance equal to the radius of a proton, or the classical radius of an electron (about 10^-15 m). This time is about 10^-23 s, which is 10^20 times longer than those 10^-43 seconds. If a proton were enlarged to the size of a galaxy, an object 10^20 times smaller than that galaxy would be comparable to the height of a man.
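The order-of-magnitude comparison above can be checked in a few lines of Python; this is only an illustration of the arithmetic, with the proton-scale distance taken as 10^-15 m as in the text:

```python
# Time for light to cross a proton-scale distance (~1e-15 m),
# compared with the 1e-43 s often quoted for the early Universe.
c = 2.99792e8        # speed of light, m/s
r_p = 1e-15          # proton radius / classical electron radius scale, m

t_cross = r_p / c    # light-crossing time, s
ratio = t_cross / 1e-43
print(t_cross, ratio)   # ~3.3e-24 s and ~3.3e19, i.e. of order 1e20
```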

But then the single field split into its components, an electromagnetic field appeared, and with it time and space. The world began to acquire its familiar features: galaxies, star systems and planets formed. Life appeared on them, then people, the most restless of whom occupied themselves with the study of nature, established physical laws, invented ingenious physical hypotheses and theories, and, extrapolating laws of a local nature to the global scale, put forward the hypothesis of the Big Bang.

So, in the model of the Universe considered below, space-time will be associated with the electromagnetic field. When the electromagnetic field appears, space-time appears along with it. And since the electromagnetic field is quantized, it is logical to assume that space-time is also quantized; i.e., below a certain scale it becomes somewhat meaningless to speak of space-time at all. These scales are consistent with the uncertainty principle, which is considered one of the fundamental laws of nature.

The next fundamental concept of the model under consideration is the physical vacuum: a kind of quantum medium of proto-particles which under certain circumstances can fluctuate and be destroyed (split), producing elementary particles.

It should be noted that in his time Planck considered a "lattice" physical vacuum. Lorentz derived his relativistic formulas, later used by Einstein, on the basis of a model of a similar fundamental "immutable" grid. The "Dirac sea" is also a physical vacuum.

The next concept, important for the model under consideration, is the global fluctuation of the physical vacuum in which the Universe was created. The center of this global fluctuation will hereinafter be abbreviated as CGF.

For clarity, imagine a large volume of supercooled liquid in which, under the action of some random factor, an array of ice begins to form from a small volume. Over time it grows, expanding into new layers of liquid. Whoever dislikes supercooled liquid may imagine a superheated liquid in which a steam bubble begins to form.

Another analogy: a thick slab of glass within which stresses arose and small cracks began to spread. The fine glass dust formed is the analogue of matter.

Yet another analogy is a fire front, for example in dry grass on a level field: ash and other combustion products form mainly in the boundary region of combustion.

Of course, all these analogies are incomplete. Their main drawback is that they treat the destructible medium, the analogue of the physical vacuum, as something localized in space-time. For the physical vacuum this analogy fails: in the physical vacuum there is neither space nor time in our ordinary sense. So it is impossible to say where it is localized, since the very notion of localization is untenable there. But there is a silver lining: this property brings the idea of the physical vacuum close to the special theory of relativity.

Assume that the destruction of the physical vacuum, and the corresponding formation of matter and space-time, proceeds at the speed of light.

The generation of matter happens in elementary space-time cells, the quanta of space-time. In them all the known elementary particles form, as quanta of the corresponding fields; in particular, the quanta of the electromagnetic field, photons. This includes the photons called quanta of the background radiation, which in our model, however, are not relics, because they are formed not only in the global fluctuation but also in the further breakdown of the physical vacuum.

How does this model differ from the Big Bang? First and foremost, matter is generated not only in the CGF but also later, in the subsequent destruction of the physical vacuum at the border of the Universe; indeed, mainly in that subsequent destruction. I.e. the global fluctuation at the origin of the Universe is, of course, a decisive event, but far less important than in the Big Bang model: in this Universe matter continues to form after the global fluctuation.

In this model there is also no global recession of physical objects from each other. Objects move and rearrange themselves, of course, but there is no global displacement associated with expansion of the Universe as a whole. What is registered as red or violet shifts of spectral lines has a completely different physical basis: it is an apparent effect, due to the difference between the properties of space-time in distant regions of the Universe and in ours.

Next, the cosmological principle. In this model, where there is a distinguished region, the CGF, space cannot fail to be anisotropic. Therefore in this model the cosmological principle can only be an approximation, applicable in relatively small regions of the Universe.

But what does "small" mean? For example, the radius of the Milky Way galaxy is about 3×10^-6 of the Hubble radius, and its volume is about 10^-16 of the volume of the visible Universe. For clarity: if the observable Universe were represented at the scale of the globe, the Milky Way would be an object a few tens of metres across somewhere inside that globe.

Therefore, within galaxies and their clusters the cosmological principle holds approximately, but for the Universe as a whole it may not be applicable. The theory of relativity, for example, may likewise work on relatively small scales but falter when the Universe is considered as a whole.

The smallness of galaxies relative to the size of the Universe allows us to use the approximation of a homogeneous Universe, which greatly simplifies the calculations that follow.

Thus the main proposition of the model described below is that the Universe is formed by the breakdown (destruction) of the physical vacuum.

The numerical estimates will require some physical constants. For convenience they are collected in the following section.

Physical constants

γ – gravitational constant, 6.67384×10^-11 m^3/(kg·s^2)

H – Hubble constant, 2.169×10^-18 s^-1

c – speed of light in vacuum, 2.99792×10^8 m/s

1 billion light-years = 0.946×10^25 m

Hubble radius R0 (radius of the observable Universe) = 1.38216×10^26 m = 14.61×10^9 light-years

radius of the Milky Way = 5×10^4 light-years (≈3.4×10^-6 of the Hubble radius)

critical density of matter: 0.931×10^-26 kg/m^3 (5.2×10^-6 GeV/cm^3)

energy density of the CMB ρE = 4×10^-14 J/m^3

temperature of the background radiation: 2.72548 K

Planck constant h = 6.62607×10^-34 J·s

ℏ = h/(2×π) = 1.054571×10^-34 J·s

elementary electric charge e = 1.6021765×10^-19 C

permittivity of the vacuum ε0 = 8.85419×10^-12 F/m

electron mass me = 0.9109383×10^-30 kg

classical electron radius re = e^2/(4×π×ε0×me×c^2) = 2.81794×10^-15 m

fine structure constant α = e^2/(2×ε0×c×h) = 1/137.036

conversion of energy units to mass in kg:

1 eV → 1.7827×10^-36 kg

1 MeV → 1.7827×10^-30 kg

1 GeV → 1.7827×10^-27 kg

1 TeV → 1.7827×10^-24 kg

 

Gravitational energy of the model of the Universe

The analysis begins with the simplest case: an almost school-level problem on the self-energy of a gravitating ball.

First, consider a ball of uniform density. The gravitational field strength Eg (the gravitational acceleration) inside it is defined as follows:

Eg(r) = (4×π/3)×γ×ρ×r

where γ is the gravitational constant, equal to 6.67384×10^-11 m^3/(kg·s^2);

ρ is the mass density of the homogeneous ball.

The potential φg of the gravitational field inside the homogeneous ball:

φg(r) = – 4×π×γ×ρ×{(R^2 – r^2)/6 + R^2/3} = – 4×π×γ×ρ×(R^2/2 – r^2/6)

 

The total gravitational energy Wg of the ball in its own gravitational field:

Wg = 4×π×{2×π×ρ^2×γ×R^5/6 – 2×π×ρ^2×γ×R^5/30} = (4×π×ρ)^2×γ×R^5/15 = (3/5)×γ×M^2/R,

where M is the mass of the ball.

Another way of computing Wg is based on determining the energy required to successively remove all the elementary spherical shells of the ball to infinity, in effect to spray the ball apart:

Wg = ∫ γ×M(r)×dm/r = (3/5)×γ×M^2/R
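The 3/5 coefficient is easy to verify numerically. A minimal Python sketch of the shell-removal method (midpoint-rule integration, in units where γ = ρ = R = 1):

```python
from math import pi

gamma = rho = R = 1.0   # units chosen so γ = ρ = R = 1
N = 100_000
dr = R / N
W = 0.0
for i in range(N):
    r = (i + 0.5) * dr                 # midpoint of the shell
    M_r = (4 / 3) * pi * rho * r**3    # mass interior to r
    dm = 4 * pi * rho * r**2 * dr      # mass of the thin shell
    W += gamma * M_r * dm / r          # energy to remove this shell to infinity
M = (4 / 3) * pi * rho * R**3
print(W / (gamma * M**2 / R))          # → ~0.6, i.e. the factor 3/5
```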

A third method of determining Wg obtains the factor 3/5 by calculating a triple integral representing the sum of the gravitational interaction energies of the elementary masses of the ball with each other, with kernel 1/L, where

L = [r^2 + ρ^2 – 2×r×ρ×cos(θ)]^(1/2)

is the distance between the mass elements.

Here the integral determines the energy of gravitational interaction of the masses in a ball of unit radius and unit density. In the calculation one takes the interaction of a narrow cone of cross-section radius r×dψ, with axis coinciding with the axis 0z, with the rest of the elementary masses of the ball. The radii of the cross-sections grow with distance from the origin, forming a single solid angle. The remaining elementary masses form rings of mass 2×π×ρ^2×dθ strung on the axis 0z, i.e. the angle φ has been integrated out beforehand.

The multiplier in front of the integral,

9/(8×π) = 2×π/(4×π/3)^2,

is explained as follows. The numerator is half of the full solid angle 4×π (half, because the gravitational energy of any two elementary masses is counted twice in the integral, and to compensate the full solid angle must be divided by 2). The denominator is the square of the mass of a homogeneous ball of unit radius and unit density.

Now consider a ball of mass M in which the density ρ decreases along the radius r according to an inverse-square law:

ρ(r) = A/r^2,

where A is a constant determined by the normalization condition M = ∫ρ(r)×dV over the ball, which gives

A = M/(4×π×R)

The average density of the ball is ρ0 = 3×M/(4×π×R^3). Hence

ρ(r) = (ρ0/3)×(R/r)^2

The mass of the inner ball of radius r:

M(r) = (4×π/3)×ρ0×R^2×r

The gravitational field strength inside the ball at radius r:

Eg(r) = γ×M(r)/r^2 = (4×π/3)×γ×ρ0×R^2/r

I.e., inside the ball the field strength is not constant but grows toward the center as 1/r.

Wg = γ×ρ0^2×(4×π/3)^2×R^5 = γ×M^2/R

The multiplier of "1" in front of γ×M^2/R can also be obtained by summing (integrating) the energies of the mass elements of the ball relative to each other.
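The same numerical check confirms the coefficient 1 for the inverse-square density profile; again a sketch, in units where γ = M = R = 1:

```python
from math import pi

gamma = M = R = 1.0
A = M / (4 * pi * R)        # normalization of ρ(r) = A/r²
N = 100_000
dr = R / N
W = 0.0
for i in range(N):
    r = (i + 0.5) * dr
    M_r = 4 * pi * A * r    # mass interior to r (finite despite the central cusp)
    dm = 4 * pi * A * dr    # shell mass: 4πr²·(A/r²)·dr
    W += gamma * M_r * dm / r
print(W / (gamma * M**2 / R))   # → ~1.0
```

Each shell here contributes the same energy γ×(4πA)^2×dr, so the sum reduces exactly to γ×M^2/R.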

Now consider a stationary Universe in the form of a ball whose mass density decreases according to the inverse-square law.

Now it becomes clear why we had to fiddle with integrals in the previous examples. In determining the gravitational energy of the Universe, the previous relations are not quite suitable, because they do not account for the fact that some masses cannot interact with each other (i.e. contribute to the total gravitational energy of the Universe) if the time for the interaction to propagate from one element to the other exceeds the lifetime of the Universe.

This amendment leads to the following: in the expression for the gravitational energy, the factor 1 in front of γ×M^2/R is replaced by a value I, defined by an integral of the same form as before but incorporating an additional cutoff multiplier χ(ρ, L, r), which excludes pairs of mass elements too far apart to have interacted.

The integral I is easily computed numerically. It equals 0.5119. I.e. this amendment decreases the gravitational energy of the ball by about a factor of 2.

Determination of the mass of the Universe

If the Universe was formed as a result of a fluctuation, then, given that gravitational energy is negative, we should have

– Wg + M×c^2 = 0

I.e. the total energy of the Universe must be zero. Then its mass M is determined from the relation:

0.5119×γ×M^2/R = M×c^2,

where R is the radius of the Universe.

Hence

M = 1.9535×R×c^2/γ

The average density ρ of matter in the Universe will then be

ρ = 3×M/(4×π×R^3) = (0.46636/γ)×(c/R)^2

Currently it is believed that the average density ρc of matter in the Universe is approximately equal to the so-called critical density, 0.931×10^-26 kg/m^3 (5.20×10^-6 GeV/cm^3).

From this the radius R of the Universe can be calculated:

R = {0.46636×c^2/(γ×ρc)}^(1/2) =

= {0.46636×0.898752×10^17/(6.67384×10^-11×0.931×10^-26)}^(1/2) =

= 2.5973×10^26 m

 

I.e. the value obtained, corresponding to approximately 27.456 billion light-years, is about twice the Hubble radius (the radius of the observable Universe), which equals 1.38216×10^26 m.

The total mass of the Universe will be

M = 6.8328×10^53 kg
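These numbers can be reproduced directly from the constants listed earlier; the following Python sketch only replays the arithmetic of the text:

```python
gamma = 6.67384e-11    # gravitational constant, m^3/(kg·s^2)
c = 2.99792e8          # speed of light, m/s
rho_c = 0.931e-26      # critical density, kg/m^3
m_per_Gly = 0.946e25   # metres in one billion light-years

R = (0.46636 * c**2 / (gamma * rho_c)) ** 0.5   # radius of the Universe, m
M = 1.9535 * R * c**2 / gamma                   # mass of the Universe, kg
print(R, R / m_per_Gly, M)   # ~2.60e26 m, ~27.5 billion ly, ~6.83e53 kg
```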

Thus

M = 1.9535×R×c^2/γ

Looking at this very simple relation, we arrive at far from simple conclusions.

Chief among them: the mass of the Universe is proportional to its radius. This indicates that as the radius grows, additional mass can be created, increasing the total mass of the Universe. In other words, there was not only a global fluctuation, like a Big Bang, but also subsequent formation of matter. In fact, the global fluctuation is not so much a generator of matter as a detonator of the process of its formation through the destruction of the physical vacuum.

The question naturally arises: where, how and why is this new matter created? What is the mechanism of its generation? Clearly, to answer these questions, fundamental physical processes at the level of elementary particles must be considered, i.e. we must pass from the macro level (the Universe) to the micro level (elementary particles).

Quanta of time, quanta of space, and the fine structure constant

Passing from the macro level to the micro level, denote:

τ – the quantum of time: the minimum time interval below which existing notions of time become untenable;

λ – the quantum of space: the minimum length below which existing notions of space become untenable.

It is generally assumed that between the values τ and λ there is a simple relation:

λ = c×τ,

where c is the speed of light in vacuum.

But there are no physical obstacles to supposing that

λ = a×c×τ,

where a is some dimensionless constant, a < 1, characterizing the fundamental properties of matter and depending on the properties of the physical medium in an elementary space-time cell.

As a possible candidate for the parameter "a", consider the fine structure constant α:

α = e^2/(2×ε0×c×h)

The fine structure constant α is a dimensionless quantity that has always been an object of fascination for physicists. Richard Feynman, one of the founders of quantum electrodynamics, called it "one of the greatest damn mysteries of physics: a magic number that comes to us with no understanding by man". A number of attempts have been made to express the constant α through purely mathematical quantities (numerology) or to calculate it on the basis of physical considerations.
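The numerical value of α follows directly from the constants listed in the previous section; a short illustrative check:

```python
e = 1.6021765e-19     # elementary charge, C
eps0 = 8.85419e-12    # vacuum permittivity, F/m
h = 6.62607e-34       # Planck constant, J·s
c = 2.99792e8         # speed of light, m/s

alpha = e**2 / (2 * eps0 * c * h)
print(alpha, 1 / alpha)   # ~0.0072973 and ~137.036
```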

Sommerfeld introduced α as the ratio of the speed of the electron in the first circular orbit of the Bohr model of the atom to the speed of light.

But formally the fine-structure constant α is the ratio of two energies: the energy needed to overcome the electrostatic repulsion between two elementary charges by bringing them from infinity to a certain distance s,

e^2/(4×π×ε0×s),

and the energy of a photon with wavelength 2×π×s.

But what the physical meaning of the quantity s is remains unclear.

Meanwhile, if we proceed from the hypothesis that quanta of space-time exist, it is logical to take as s a value proportional to the quantum of space λ.

If we assume that the matter of the Universe formed as a result of the destruction (splitting) of the physical vacuum due to some initial fluctuation in it, then the total energy E of matter in a unit cell of space-time can be estimated from the well-known uncertainty relation

E×τ = ℏ

If we put λ = c×τ, then E = ℏ×c/λ.

On the other hand, the energy Ek of the electromagnetic field upon the formation of two opposite charges is defined as

Ek = e^2/(4×π×ε0×λ),

and the ratio of these two quantities is exactly equal to the fine structure constant α:

{e^2/(4×π×ε0×λ)}/{ℏ×c/λ} = e^2/(4×π×ε0×ℏ×c) = e^2/(2×ε0×h×c)

I.e., once the notion of quanta of space-time is used, everything becomes quite logical.

But we can go further.

If we take into account that the energy of the electromagnetic field is negative while the energy of the formed matter is positive, then from the condition that the sum of these energies equals zero (analogous to the zero total energy of the Universe) it follows that:

– e^2/(4×π×ε0×λ0) + ℏ/τ = 0

It turns out that the depth of the potential well in which particles are formed from the physical vacuum is ℏ/τ, and this value corresponds to λ0 = α×λ. One may suppose that it is at this value that space-time begins to form.
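That λ0 = α×λ indeed makes the two energies balance can be checked numerically; the trial value of τ below is arbitrary and used only for illustration:

```python
from math import pi

e = 1.6021765e-19     # elementary charge, C
eps0 = 8.85419e-12    # vacuum permittivity, F/m
hbar = 1.054571e-34   # reduced Planck constant, J·s
c = 2.99792e8         # speed of light, m/s

tau = 1e-23                 # arbitrary trial quantum of time, s (assumption)
lam = c * tau               # λ = c·τ
alpha = e**2 / (4 * pi * eps0 * hbar * c)
lam0 = alpha * lam          # λ0 = α·λ

electrostatic = e**2 / (4 * pi * eps0 * lam0)   # energy of the charge pair at λ0
well_depth = hbar / tau                         # ħ/τ
print(electrostatic / well_depth)   # → ~1.0, i.e. the two energies cancel
```

The ratio equals 1 identically, for any τ, since λ0 = α×λ was chosen precisely to make the electrostatic term equal ℏ/τ.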

The magnitude of the quantum of space λ0 is 1/α = 137.036 times smaller than the previously considered value λ, and it corresponds not to the speed c but to α×c. This should not cause much surprise: the mass-energy density in the elementary space-time cell is probably very high, so all processes there are slow, similar to how light takes a very long time to emerge from the center of the Sun, and may occur at speeds (if the term "speed" is at all appropriate at these spatial and temporal scales) much less than the speed of light. In this connection one may also recall the speeds of electrons in atoms, which differ from the speed of light by roughly the same factor.

In order to get out of the potential well, the basic proto-particles must overcome a potential barrier )/τ. It can do only those few of them whose energy is sufficient for this purpose.

Suppose that the distribution of particle energy corresponds to a Boltzmann law with a certain value of the average energy. As you know, this law is characterized by the fact that the average remaining energy of the particles overcome any energy barrier, has the same value as the average energy of the particles at the bottom of the potential well. Use this property to determine the average energy of the particles.

When the particles almost overcome the energy barrier, coming on line λ, the energy of the electromagnetic interaction will be α×m/τ. If we accept that each degree of freedom has the same average energy, the average energy of the particles on the boundary of λ must also be equal to α×m/τ. Further, according to the Boltzmann average energy of the particles at the bottom of the potential well is also equal to α×m/τ.

To escape their potential well completely, breaking the link with the parent physical vacuum entirely, the particles need to overcome

1 + 1/α

units of the average energy.

But, as has been said, not all particles reach such "heights". Most of them return to the primary physical vacuum through the attraction of charges of opposite sign. Only those few particles whose energy exceeds α×ħ/τ by the factor (1 + 1/α) overcome the energy barrier. Therefore the probability p of a particle escaping from an elementary space-time cell is

p = exp{ – (1 + 1/α) } = e⁻¹×exp{ – 1/α }

Here e = 2.718281828 is the base of natural logarithms.

The value of p is 1.12649×10⁻⁶⁰, i.e. the vast majority of primary proto-particles return to the physical vacuum.
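This probability follows from α alone; a one-line check (using 1/α = 137.036 as in the text):

```python
import math

alpha_inv = 137.036  # 1/alpha, the value used in the text

# Escape probability: exp of -(1 + 1/alpha) units of average energy
p = math.exp(-(1 + alpha_inv))
print(p)  # ~1.126e-60
```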

At the exit, the average energy of the particles will be α×ħ/τ = α×ħ×c/λ. This energy corresponds to a particle mass

m = α×ħ/(c×λ)

Of course, the assumption that the Boltzmann distribution is applicable here is likely a rough approximation. But where orders of magnitude differ by many tens, it can be used for rough preliminary estimates.

Incidentally, if the real energy distribution of the primary particles in the elementary space-time cell is ever determined, it will likely prove asymmetric with respect to particles and antiparticles: it must conform to the fact that the world consists mostly of matter, with far less antimatter in it.

Thus, this consideration has yielded a result important for the subsequent estimates: the probability p of elementary particles escaping from an elementary cell of space-time has been determined, along with their total mass.

Determination of the size of the quanta of space and time from the interaction of microcosm and macrocosm

Basic assumptions:

  • the size of the Universe corresponds to the radius R;
  • the random process of formation of matter in the quanta of space-time is dynamic: matter both forms and disappears, and the average intensities of the two processes are equal, because in the elementary cells the negative energy equals the positive;
  • the number of elementary blobs of matter formed during the existence of the Universe equals the number of generated quanta of space-time; but a significant part of this matter disappears, turning back into the physical vacuum.

Consider the formation of matter in a layer of thickness λ, when the destruction of the physical vacuum, with the formation of matter and space-time, occurs at the boundary of a Universe of radius r. In this case the following relation must hold:

1.9535×λ×c²/γ = m×(4×(π/e)×r²/λ²)×exp(–1/α) =

= α×ħ×4×(π/e)×r²/(c×λ³)×exp(–1/α) = (2/e)×h×α×r²/(c×λ³)×exp(–1/α)

whence

λ = 0.7834×exp(–0.25/α)×(α×h×r²×γ/c³)^(1/4)

τ = λ/c = 0.7834×exp(–0.25/α)×(α×h×r²×γ/c⁷)^(1/4)

m = α×ħ/(c×λ) = 0.20316×(α×h)^(3/4)×exp(0.25/α)×(r²×γ×c)^(–1/4)

That is, the values of λ, τ, m turn out to depend not only on α and h, but also on the gravitational constant γ and on r, the distance to the CGF, as well as on the speed of light c, which enters along with these parameters.

In other words, the cosmological principle is violated. But there seems to be nothing to worry about here. As mentioned above, in the present model of the Universe a violation of the cosmological principle is quite natural, because the model allows from the outset a spatial anisotropy associated with the presence of the CGF.

But perhaps some laws of physics really do differ in different parts of the Universe. Then travel between remote regions would be impossible, or at least difficult, not only for technical but also for fundamental reasons: ships entering certain space-time tunnels would, like many science-fiction objects, fall into a different physical reality and could cease to exist.

Let us make a numerical evaluation, substituting the known values of the physical quantities into the relations for λ, τ, m.

At the edge of the Universe, at r = R:

λ = 3.10692×10⁻²⁰ m; λ₀ = α×λ = 2.2672×10⁻²² m

τ = 1.0364×10⁻²⁸ s; m = 0.82621×10⁻²⁵ kg

Thus, the granularity of space λ is approximately 30 to 90 thousand times smaller than the classical radii of the electron and the proton.
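As a consistency check, these values can be reproduced numerically. A minimal sketch, assuming CODATA constants, the relations λ = 0.7834×exp(–0.25/α)×(α×h×r²×γ/c³)^(1/4), τ = λ/c, m = α×ħ/(c×λ), and a radius of the Universe R ≈ 2.6×10²⁶ m backed out from the Hubble-radius estimate given later in the text (R = 0.691×10²⁶ m / 0.266); γ denotes the gravitational constant:

```python
import math

# Constants (CODATA); gamma is the gravitational constant G
h = 6.62607015e-34       # Planck constant, J*s
hbar = h / (2 * math.pi) # reduced Planck constant
c = 2.99792458e8         # speed of light, m/s
gamma = 6.674e-11        # gravitational constant, m^3/(kg*s^2)
alpha = 1 / 137.036      # fine structure constant

# Assumed radius of the Universe, from r = 0.266*R with r = 0.691e26 m
R = 0.691e26 / 0.266

lam = 0.7834 * math.exp(-0.25 / alpha) * (alpha * h * R**2 * gamma / c**3) ** 0.25
tau = lam / c                 # quantum of time
m = alpha * hbar / (c * lam)  # mass of the formed particles

print(lam, tau, m)  # ~3.107e-20 m, ~1.036e-28 s, ~8.26e-26 kg
```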

At r = 0.266×R, which approximately corresponds to the position of the Milky Way (see below):

λ = 1.6024×10⁻²⁰ m; λ₀ = α×λ = 1.1693×10⁻²² m;

τ = 0.5345×10⁻²⁸ s; m = 1.602×10⁻²⁵ kg

The mass of elementary particles is usually measured in energy units: eV, MeV, GeV.

The mass value 1.602×10⁻²⁵ kg corresponds to 90 GeV. This is the average mass of the pairs of elementary particles produced in an elementary space-time cell.
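The kilogram-to-GeV conversion is easy to verify via E = m×c², with 1 eV = 1.602×10⁻¹⁹ J:

```python
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # joules per electronvolt

m = 1.602e-25                    # kg, the mass obtained at r = 0.266*R
E_GeV = m * c**2 / eV / 1e9      # rest energy in GeV
print(E_GeV)  # ~89.9, i.e. about 90 GeV
```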

Table 1 shows the masses of some elementary particles.

Table 1. The masses of some elementary particles

In general, these figures correspond in order of magnitude to the estimates obtained above. So there is a chance that these estimates somehow reflect physical reality.

 The expansion of the Universe

The idea of quanta of space-time allows the physical effects that have received the general name of the "expansion of the Universe" to be interpreted in another way.

Physically, the expansion of the Universe manifests itself in shifts of the emission spectra of remote objects. The presence of a shift indicates a speed v of recession or approach of the objects.

It is believed that the Universe is expanding in accordance with the Hubble law:

v = H×L,

where H = 2.169×10⁻¹⁸ s⁻¹ is the Hubble constant;

L is the distance between the observer and the object.

Within this hypothesis it is also assumed that the recession speed of objects can change over time; the expansion of the Universe would then be either accelerating or decelerating.

But the experimentally observed shifts of spectral lines can be explained differently in the present model of the Universe. They can be caused by different values of the quanta of space-time in regions of the Universe distant from each other, even in the absence of any real change of the distance between the objects.

From a formal point of view this follows from the previously obtained relations:

τ = λ/c = 0.7834×exp(–0.25/α)×(α×h×r²×γ/c⁷)^(1/4)

f = 1/τ = 1.2765×exp(0.25/α)×{α×h×r²×γ/c⁷}^(–1/4) = B×r^(–1/2)

where

B = 1.2765×exp(0.25/α)×{α×γ×h/c⁷}^(–1/4)

For clarity, consider the simplest case, when the objects are located along the same radius. Then from the last expression it is easy to obtain

df/f = – dr/(2×r) = – L/(2×r),

where L is the distance between objects.

If the quanta of time do exist, then we can assume that the frequency of electromagnetic radiation is tied to the values of these quanta as to a natural physical scale.

The change in frequency df/f of the electromagnetic radiation from an object at distance L, recorded at an observation point at radius r, is perceived as a change of the distance L to the object with the relative velocity v/c:

v/c = df/f = – L/(2×r),

whence

v = { c/(2×r) }×L

In its physical meaning, the multiplier c/(2×r) should be equal to the Hubble constant H. In the considered model of the Universe it depends on r, the distance of the observer from the CGF.

That is, the Hubble constant turns out to depend on the position of the observer. But this dependence is weak: suffice it to say that the size of our galaxy, the Milky Way, is only about 5 millionths of the size of the Universe. Therefore it will be very difficult to detect changes in H.

For an observer in our galaxy H = 2.169×10⁻¹⁸ s⁻¹. This value corresponds to the radius r determined from the relation

r = c/(2×H) = 2.99792×10⁸/(2×2.169×10⁻¹⁸) = 0.691×10²⁶ m

I.e. the Milky Way is located at about one quarter of the distance from the CGF to the edge of the Universe, at half the Hubble radius from the CGF. In this place the "expansion" of the Universe will occur at a rate corresponding to the Hubble constant.
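A minimal numerical check of this radius, using the Hubble constant value quoted above:

```python
c = 2.99792458e8   # speed of light, m/s
H = 2.169e-18      # Hubble constant used in the text, 1/s

r = c / (2 * H)    # radius at which the multiplier c/(2r) equals H
print(r)  # ~6.91e25 m
```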

In the present model of the Universe, with its inherent anisotropy of space (since there is a preferred direction, toward the CGF), we should expect a weak dependence of at least some physical parameters on the observer's location. As mentioned above, this manifests itself as a violation of the cosmological principle.

So there may be no expansion at all. But because λ and τ change from one point to another, the rhythms of the oscillations of the electromagnetic field can change, leading to shifts of spectral lines. To the observer this will be perceived as a change in the distance between the source and the receiver of the radiation, i.e. as a rate of change of the distance between the objects.

Moreover, the value of L has a sign: positive in the direction of increasing r and negative in the direction of the CGF. Therefore, if the observer looks toward the center of the global fluctuation, he can see not a redshift but a blueshift of the emission spectrum. Such a blueshift is actually observed [2].

In the present model of the Universe this is quite clear: if the CGF exists, the expansion can only be directed away from the center, not toward it.

But one can also note: since in this model the Milky Way is located at a distance of approximately a quarter of the radius of the Universe, the number of objects identified as receding will exceed the number of "approaching" ones.

Now consider the question of the accelerating/decelerating expansion of the Universe. In view of the preceding paragraph, it too may be only apparent.

Since

d²f/f = (3/4)×dr²/r² = (3/4)×L²/r²,

the frequency change d²f/f of the electromagnetic radiation from an object at distance L, recorded at an observation point at radius r, is perceived as a rate of change of the distance L, i.e. as a deceleration a of the recession of the object:

a/c = 2×(d²f/f)×c/L = – (3/2)×c×L/r²

whence:

a/L = – (3/2)×(c/r)² = – 6×H²

Thus, at the observation point the radiation spectra of distant objects are shifted as though the expansion of the Universe were decelerating. Moreover, we should observe a pattern similar to the Hubble law, but for the deceleration: the proportionality of a to the distance L between the objects. The proportionality coefficient 6×H² depends on the value of r. For our galaxy it equals 2.8227×10⁻³⁵ s⁻².
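The quoted coefficient follows directly from the value of H:

```python
H = 2.169e-18     # Hubble constant used in the text, 1/s

coeff = 6 * H**2  # deceleration coefficient 6*H^2
print(coeff)  # ~2.82e-35 s^-2
```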

On the background electromagnetic radiation of the Universe

The proto-particles ejected from the disintegrating physical vacuum, forming space-time cells and spreading through the created space-time, will undergo a series of transformations, turning into ordinary particles. And since they carry electric charge, the matter thus formed should be accompanied by some kind of background electromagnetic radiation, a by-product of this process.

Let us try to evaluate the properties of this radiation.

Perhaps the background radiation is formed mainly by electrons and positrons. Therefore it is logical to use the characteristics of these particles in the estimates.

In classical physics there is the so-called classical electron radius

re = e²/(4×π×ε₀×m×c²)

But if one carries the classical picture through consistently, it is more logical to use another relation instead:

re = (3/5)×e²/(4×π×ε₀×m×c²)

It differs from the previous one by the additional multiplier 3/5, which defines the energy of a uniformly charged sphere in its own field, analogous to the self-gravitational energy of a homogeneous ball.

According to the latter relation, the calculated value of re equals 1.69×10⁻¹⁵ m.
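This value can be checked directly (a sketch assuming CODATA constants; m here is the electron mass):

```python
import math

e = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
m = 9.1093837015e-31     # electron mass, kg
c = 2.99792458e8         # speed of light, m/s

re_classical = e**2 / (4 * math.pi * eps0 * m * c**2)  # ~2.82e-15 m
re = (3 / 5) * re_classical                            # with the 3/5 multiplier
print(re)  # ~1.69e-15 m
```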

Let us determine the magnitude of the energy fluctuation in a spatial cell of size re. It is through such fluctuations that pairs of electrons and positrons arise, and their annihilation produces electromagnetic radiation.

ħ×c/re = m×c²×ħ×c/{ (3/5)×e²/(4×π×ε₀) } = (5/3)×m×c²/α

whence, for a pair of particles,

2×m×c² = (6/5)×α×(ħ×c/re) = (6/5)×α×(ħ×c/λ)×(λ/re)

But this is only 3/5 of the total energy of the electromagnetic field of a uniformly charged sphere. Therefore the total electromagnetic energy of the pair of particles equals

2×α×(ħ×c/λ)×(λ/re)

The dimensionless quantity 2×(λ/re) equals 3.676×10⁻⁵. Compared with another dimensionless quantity, α² = 5.325×10⁻⁵, it is about 1.5 times smaller. Therefore the total electromagnetic energy of the pair of particles can also be written approximately in the form

(2/3)×α³×(ħ×c/λ)

Dividing this energy by the volume π×λ³/6 of the elementary spatial cell, and taking into account the probability p that a pair is actually produced in a cell, we get the following estimate for the energy density u of the generated electromagnetic background field:

u = (4/π)×α³×p×(ħ×c/λ⁴)

It should be specially noted: so far nothing has been said about the distribution of the electromagnetic energy over frequencies. The number of photons resulting from multiple scattering can become very large, and by the time they get out of the space-time cells their average energy will be very small. For now we are speaking only of the total energy density of the electromagnetic field generated in the unit cell.

Substituting λ = 3.10692×10⁻²⁰ m into the last relation, we obtain u = 1.891×10⁻¹⁴ J/m³, i.e. about 2×10⁻¹⁴ J/m³.
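This estimate can be reproduced numerically, a sketch assuming CODATA constants, the escape probability p = exp(–(1 + 1/α)) and the value of λ at r = R given earlier in the text:

```python
import math

h = 6.62607015e-34        # Planck constant, J*s
hbar = h / (2 * math.pi)  # reduced Planck constant
c = 2.99792458e8          # speed of light, m/s
alpha = 1 / 137.036       # fine structure constant

p = math.exp(-(1 + 1 / alpha))  # escape probability, ~1.126e-60
lam = 3.10692e-20               # quantum of space at r = R, m

# Energy density of the generated background field
u = (4 / math.pi) * alpha**3 * p * hbar * c / lam**4
print(u)  # ~1.89e-14 J/m^3
```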

This generated electromagnetic energy almost instantly leaves the cell. But into each volume element come photons produced in other parts of the Universe. It is they that form the density of the background radiation.

In order to calculate the energy of all incoming photons, one needs to compute a simple integral. But one must take into account that the density u of the generated radiation energy depends on the parameter λ, which in turn depends on r, the radius of the point where the photon was formed.

From the previously obtained relations it follows that λ ~ r^(1/2), i.e. λ⁴ ~ r² and hence u ~ 1/r².

This dependence u(r) is used in calculating the indicated integral.

The value of the integral determining the density U(r) of the energy of the incoming radiation at points located at different radii r from the CGF is shown in Fig. 1. The values are presented in the scale of the density of generated radiation at radius R, i.e. in units of u(R).

 

Fig. 1. Energy density as a function of the distance from the CGF.

Let us draw attention to two features of these results.

First, the high stability of U(r) at r/R < 0.5. Thus, for r/R = 0.266, where the Milky Way is located, a deviation Δ(r/R) = 0.05 changes the energy density by less than 1%.

The deviations Δ(r/R) = ±0.05 correspond to distances of ±1.37 billion light years, almost 14 thousand times the size of our galaxy.

Secondly, as r/R approaches 1 the energy density increases. Recalling the analogy of the Universe with a fire on a dry flat field: as you move away from the center of the burnt area toward the boundary where the dry grass still burns, the temperature also rises sharply.

In order to determine the value of U at r/R = 0.266, one needs to pass from relative units to absolute ones, multiplying the values shown in Fig. 1 by 1.891×10⁻¹⁴ J/m³. This gives U = 1.85×10⁻¹⁴ J/m³.

One can also determine the angular characteristics of this radiation for an observer in the Milky Way, for example the power density per unit solid angle. These data are presented in Fig. 2, also in relative units, but in this case the scale is u×S.

Now a natural question arises: is there any known radiation with similar characteristics in uniformity and with an energy density of the order of 10⁻¹⁴ J/m³?

The answer: indeed there is. It is called the relic radiation and has an energy density of 4×10⁻¹⁴ J/m³. This is the cosmic microwave background radiation that uniformly fills the Universe, having a high degree of isotropy and a spectrum characteristic of a black body with temperature 2.72548 ± 0.00057 K [3-6]. In the Big Bang theory it is believed that the CMB originated in the era of primary hydrogen recombination.

Fig. 2. The angular power density at the distance r = 0.266×R from the CGF.

Most likely, this is exactly what we are looking for: the discrepancy between the measured energy density and the calculated value should not be troubling, given the roughness of the above estimates.

Maps of the background radiation have now been produced (Fig. 3).

Fig. 3. Map of the background radiation

And it turned out that these maps contain interesting details that contradict the Big Bang theory.

For example, in 2004 a mysterious "cold spot" was discovered in the microwave "echo" of the Big Bang (in Fig. 3 it is outlined by a white ellipse). The size of the spot exceeds what the theory allows.

Scientists are still puzzling over the nature of this spot. At first they thought they were dealing with an instrument defect or some kind of statistical error; however, the next similar apparatus, the European telescope Planck, confirmed the existence of the spot.

Currently it is believed that the spot owes its origin to the existence of a giant cosmic "desert" 1.8 billion light years in length and several tens of billions of light years in thickness, where there are almost no galaxies and stars. The nearest boundary of the "desert" is 3 billion light years from Earth.

It is assumed that in the "desert" the microwave background of the Universe is slowed and "stretched" in a special way under the influence of the dark energy that causes the Universe to expand with acceleration. Because of this, the echoes of the birth of the Universe flying through this "desert" on their way to Earth will seem to telescopes more "blue" and "cold" than the CMB rays that were accelerated by large clusters of galaxies.

As astrophysicists acknowledge, the existence of the giant cosmic "desert" in this part of the constellation Eridanus is apparently not accidental, and its appearance has something to do with the processes that are also involved in the origin of the "cold spot".

The other anomaly found by Planck is a global "lopsidedness", an unevenness in the distribution of the fluctuations. On one half of the microwave sky there turned out to be more "cold" spots than on the other.

Scientists say that this may indicate an uneven distribution of matter throughout the Universe, and this calls into question the existing theory of inflation, the super-fast expansion of the Universe in the first moments after the Big Bang, because that theory assumes the Universe expanded uniformly.

Some very daring theorists suggest that the asymmetry is due to a neighboring Universe that "struck" ours during our rapid expansion. However, scientists still do not know even how to check this assumption.

Thus, the results of measurements of the background radiation showed that the Universe is not as isotropic on large scales as expected. Already the first studies of the relic microwave background using space telescopes revealed strong evidence that the northern and southern hemispheres of the sky do not look quite as the isotropic theory of the Universe requires. Anomalous "cold spots" were discovered in the distribution of the background, anomalous not only in temperature but also in shape and overall size.

In other words, despite the successes of the Big Bang theory, which stems from the cosmological principle, there are observational facts that contradict the cosmological principle and, therefore, the theory based on it. Chief among them is the so-called "axis of evil" discovered in 2006, an unexplained weak anisotropy of the CMB.

In the hypothetical model of the Universe presented in this article, the background radiation is not the result of the cooling of primary radiation but a by-product of the ongoing process of creation of matter in the space-time cells of the physical vacuum. And the presence of the "axis of evil" in this model is not a paradox but quite natural: it defines the preferred direction toward the CGF. Incidentally, the estimate of the distance from the Milky Way to the center of the spot roughly corresponds to the distance r = 0.266×R estimated above.

Conclusion

Above, the main outlines of a hypothesis about the origin and development of the Universe, an alternative to the Big Bang hypothesis, have been presented.

The main assumptions of this hypothesis are the following.

Matter in the Universe was created not in a Big Bang that occurred in a negligible volume in a small fraction of a second, but gradually, as a result of the destruction (splitting, cracking) of the physical vacuum. The destruction occurs because of some initial fluctuations, which in this process play not the main role but the auxiliary role of a detonator. Matter is created not by the initial fluctuations themselves but by the subsequent destruction of the vacuum as its deformation propagates.

Together with matter, at a certain stage of its transformation, when the electromagnetic field appears, the cells of space-time begin to form. In fact, ordinary space and time can only be formed as a result of the appearance of the electromagnetic field. Before that, talking about space-time is meaningless: it does not exist physically.

Since the electromagnetic field generating space-time is quantized, the quantization property is transferred to the space-time formed by it. Therefore it is logical to assume the existence of quanta of space and quanta of time.

In these elementary space-time cells, as in a kind of bleeding wounds on the body of the physical vacuum, matter is formed in the form of clusters of primary proto-particles, a very small fraction of which manages to escape from the depths of these cells, losing a large part of their energy in the process. As a result of the subsequent transformations of these proto-particles (possibly taking place in several stages), the known and as yet unknown elementary particles are formed, giving rise to the mass of the Universe. But this process is dynamic: at all stages of the transformation of matter it can go in either direction. Beyond that, the transformation scheme of this matter is similar to the one considered in the Big Bang theory after the inflation stage.

In other words: instead of one giant Big Bang at a point, a multitude of explosions in the elementary space-time cells.

The specified hypothesis estimates the order of magnitude of the observed masses of elementary particles; it gives its own explanation of the effect of the expansion of the Universe, presenting it as apparent, and likewise of the background radiation of the Universe, which in the terminology of the Big Bang hypothesis is called the cosmic microwave background radiation.

It also allows one to establish the possible values of the quanta of space (λ ~ 10⁻²⁰ m) and time (τ ~ 10⁻²⁸ s) from the conditions of matching the macrocosm and the microcosm. Of course, theoretically, in the abstract, one can consider much smaller magnitudes, but whether they are real is the question.

As for the cosmological principle, one of the main principles of modern cosmology, in the framework of the considered hypothesis it is not universal, suitable for any area of the Universe, but applicable only locally and approximately. However, the regions of its approximate applicability are huge, encompassing not only the Solar system but the whole galaxy. That the cosmological principle is not strictly applicable in the framework of the considered model follows from the natural anisotropy of space-time caused by the presence of the CGF. And, it seems, the experimental data on the characteristics of the relic background radiation confirm the presence of a certain preferred direction in the Universe.

In the considered hypothesis a key role is played by the physical parameter α, which is currently very modestly called the "fine structure constant". In fact, it may be that α is not just one of the many dimensionless physical factors in formulas, but a backbone, fundamental parameter of the universe.

It is clear and expected that a hypothesis affecting the scientific interests of many researchers will cause strong, and not always undeserved, criticism. Perhaps the most delicate phrase in the mouths of critics would be the one from the novel "The Master and Margarita": "I am pleased to inform you that your theory is solid and witty. However, all theories are worth one another."

Indeed, there is quite a lot of confusion among the various hypotheses.

Although, given the big problems of the Big Bang theory, and critically evaluating the proposed hypothesis itself, one can recall another phrase from the works of the great Master: "I am beginning to fear that this confusion will go on for a very long time."

List of sources used

  1. Fomin A. N. "About the Big Bang after the end of the world" // ANO Center for Strategic Estimates and Forecasts – M., 2013 – http://csef.ru/ru/nauka-i-obshchestvo/459/o-bolshom-vzryve-posle-koncza-sveta-3902
  2. Popov S., Toporensky A. "The Universe and the Hubble world" – http://galspace.spb.ru/index76.html
  3. "The fruits of "Planck": a cold spot and a bruise from another Universe" – 2013, https://ria.ru/science/20131023/972164404.html
  4. Verkhodanov O. "Anomalies of the cosmic microwave background radiation" – 2014, https://postnauka.ru/video/41193
  5. "Astronomers have uncovered the mysterious nature of the cold spot of the Big Bang" // RIA Novosti – 2015, https://ria.ru/science/20150420/1059686828.html
  6. "Ways of exploring outer space. Telescopes of the microwave range" – http://galspace.spb.ru/index62-7two.html

Alexander Fomin

