On Space Time And The Fabric Of Nature Field Model Intro

Introduction to the "field" concept and the modelling thereof

To the Mathematician, 2 apples multiplied by 6 oranges is 12.
To the Engineer, the multiplication of apples and oranges makes no sense at all.

The above statement is a good illustration of the difference between Mathematics and Physics. In Physics, Mathematics is a fantastic tool. It allows us to make highly accurate predictions and formal descriptions of the processes we observe to take place in Nature. In a way, Mathematics is a very powerful language, because the symbols it uses and defines have an enormous expressive power. Perhaps, above all, it is this expressive power, formalized in a language, which makes it so very useful in Physics.

It is this same expressive power, for example, which makes Python such a powerful programming language. One of the reasons Python is such an expressive programming language is because in a way "everything goes anywhere": types are checked only while a program runs, not up front. You can multiply "strings" by "numbers" in Python, for example, which is a bit like multiplying apples and oranges in other languages. The disadvantage of such a language, however, is that errors sometimes only surface when a program is being run with unexpected inputs.
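As a minimal illustration of this run-time behaviour (plain Python, no libraries assumed):

```python
# Python checks types while the program runs, not up front:
# multiplying a string by a number is defined...
print("ab" * 3)  # prints "ababab"

# ...while adding a string and a number only fails at the moment
# the offending line is actually executed.
try:
    "2 apples" + 6
except TypeError as error:
    print("run-time error:", error)
```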

The same argument can be made about Mathematics. Mathematics doesn't care if you are trying to multiply apples and oranges, even though the obtained result has no meaning in the "real world". So, what makes Mathematics such a powerful tool and language is that, because it defines all kinds of abstract concepts, relationships and calculation methods in a formal symbolic language, it has an enormous expressiveness, which can be used to accurately describe all kinds of processes and systems and to solve all kinds of problems associated with these. However, it does not give a "real world" meaning to the abstract concepts it studies and describes.

So, it is not up to Mathematicians, but up to Physicists and Engineers to make sure that the Mathematics they use actually produces meaningful results. This is illustrated by Albert Einstein, who wrote in "The Evolution of Physics" (1938), co-written with Leopold Infeld:

"Fundamental ideas play the most essential role in forming a physical theory. Books on physics are full of complicated mathematical formulae. But thought and ideas, not formulae, are the beginning of every physical theory. The ideas must later take the mathematical form of a quantitative theory, to make possible the comparison with experiment."

As we saw earlier, Freeman Dyson illustrated how Maxwell's field model, which was founded upon well described and understood Newtonian principles, gradually evolved from a meaningful concept into a purely abstract concept, whereby eventually all connection to the "real world" was lost:

Maxwell's theory becomes simple and intelligible only when you give up thinking in terms of mechanical models. [...] Fields are an abstract concept, far removed from the familiar world of things and forces.

In his paper "A Foundation for the Unification of Physics" (1996), Paul Stowe described this as follows:

Many of the apparent inconsistencies that exist in our current understanding of physics result from a basic lack of understanding of what are called fields. These fields, electric, magnetic, gravitational...etc, have been the nemesis of physicists since the birth of modern science, and continue unresolved by quantum mechanics. A classical example of this is the problem of an electron interacting with its own field. This case results in the equations of quantum mechanics diverging to infinity. To overcome this problem, Bethe introduced the process of ignoring the higher order terms that result from taking these equations to their limit of zero distance, in what is now a common practice called renormalization.
These field problems result in a class of entities called virtual, existing only to balance and explain interactions. These entities can (and do) violate accepted physical laws. This is deemed acceptable since they are assumed to exist temporarily at time intervals shorter than the Heisenberg's uncertainty limit. It has been known for some time that such virtual entities necessitate the existence of energy in this virtual realm (Field), giving rise to the concept of quantum zero point energy.
As a result of this presentation I will propose the elimination of both the need for renormalization and any such virtual fields. This will be accomplished by replacing the virtual field with a real physical media within which we define elemental particles (which more precisely should be called structures) and the resultant forces which act between them.

Currently, the field concept has departed so much from Maxwell's down-to-earth origins that there is little room left to distinguish the modern field concept from pseudo-science, if we are to follow Karl Popper's definition:

"In the mid-20th century, Karl Popper put forth the criterion of falsifiability to distinguish science from nonscience. Falsifiability means a result can be disproved. For example, a statement such as "God created the universe" may be true or false, but no tests can be devised that could prove it either way; it simply lies outside the reach of science. Popper used astrology and psychoanalysis as examples of pseudoscience and Einstein's theory of relativity as an example of science."

With the current definitions of, for example, the electric, magnetic and gravitational fields, there are both units of measurement as well as instruments with which one can measure the strength of these fields. In hindsight, we might argue that these units of measurement are defined somewhat arbitrarily, but they are measurable nonetheless and thus "make possible the comparison with experiment".

But what about "virtual particles", "dark matter", "weak nuclear forces", "strong nuclear forces" and even "10 to 26-dimensional string theories"? Isn't there a strong sceptical argument to be made here that, at the very least, these kinds of unmeasurable concepts are bordering on pseudo-science?

Either way, even though we are well aware of the wave-particle duality principle, from which we should have concluded that there can be only one fundamental force and therefore only one field, a plethora of fields have been defined, none of which has brought us any closer to the secret of the "old one." So, if there can be only one fundamental physical field of force, what is its Nature? How do we take this wonderful Mathematical and abstract concept of a "field" and use it in a physically meaningful way?

Let us simply go back to something that stood the test of time: the original foundation Maxwell's equations were based upon, which is to postulate the existence of a real, physical fluid-like medium wherein the same kinds of flows, waves, and vortex phenomena occur as we observe in, for example, the air and waters all around us. We shall do just that and then work out the math, using nothing but "classic" Newtonian physics, meanwhile making sure that the mathematical concepts we use have a precisely defined physical meaning and produce results with well defined units of measurement.

And since it is the field concept which has taken on a life of its own, let us first consider what we mean by a physical field of force and define its units of measurement, so that we can clearly distinguish a physical field of force from the more general mathematical abstract field concept we use to describe our physical field. That way, we can do all kinds of meaningful calculations, predictions and experimental verifications. And just like 2*6=12 has no meaning in and of itself in Physics, the abstract mathematical field concept has no meaning in and of itself in Physics. To sum this up:

In a way, Physics is the art of using abstract Mathematical concepts in a way that is meaningful for describing and predicting the Physical phenomena we observe in Nature.

In practice, that comes down to a book-keeping exercise. All we really need to do is to keep track of which mathematical concept we use to describe what. For example, if we use the abstract field concept to describe something we call a physical field of force, we should unambiguously associate a unit of measurement with the abstract mathematical concept used. This way, we have clearly defined what the abstract concept means within a certain context. In Software Engineering, this is what's called type checking:

In programming languages, a type system is a collection of rules that assign a property called type to various constructs a computer program consists of, such as variables, expressions, functions or modules. The main purpose of a type system is to reduce possibilities for bugs in computer programs by defining interfaces between different parts of a computer program, and then checking that the parts have been connected in a consistent way.

Just read "unit of measurement" for "type" and we are talking about the exact same concept.
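To illustrate the analogy, here is a minimal sketch of such unit "type checking" in Python; the `Quantity` class is a hypothetical construct made up for this example, not part of any real units library:

```python
# A hypothetical Quantity class: a number that carries its unit of
# measurement along, and refuses to be added to a mismatched unit.

class Quantity:
    def __init__(self, value, unit):
        self.value = value
        self.unit = unit

    def __add__(self, other):
        # Adding apples to oranges is a "type error" in this book-keeping.
        if self.unit != other.unit:
            raise TypeError(f"cannot add [{self.unit}] to [{other.unit}]")
        return Quantity(self.value + other.value, self.unit)

    def __mul__(self, other):
        # Multiplication combines the units, just as in physics.
        return Quantity(self.value * other.value, f"{self.unit}*{other.unit}")

    def __repr__(self):
        return f"{self.value} [{self.unit}]"

print(Quantity(2, "apple") * Quantity(6, "orange"))  # 12 [apple*orange]
# Quantity(2, "apple") + Quantity(6, "orange") would raise a TypeError.
```

A real model would of course also need proper unit algebra (cancellation, powers, and so on); the point here is only that mismatched units can be caught mechanically, just like type errors.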

Introduction to vector calculus

"The partial differential equation came into theoretical physics as a servant, but little by little it took on the role of master." - Albert Einstein (1931)

While this statement by Albert Einstein may or may not ring true to you, many people will ask the question: "What does it mean?" Since it is precisely that question that concerns us when we want to define what we mean by a "physical field of force", let us consider this statement a little further. We will be using partial differential equations to describe our model, although we will make use of the expressiveness vector calculus offers us in order to keep things understandable and to express the concepts we are considering in a meaningful way.

As an illustration of the expressive power of vector calculus, let us consider the definition of "divergence":

Let x, y, z be a system of Cartesian coordinates in 3-dimensional Euclidean space, and let i, j, k be the corresponding basis of unit vectors. The divergence of a continuously differentiable vector field F = Ui + Vj + Wk is defined as the scalar-valued function:

{$$ \operatorname{div}\,\mathbf{F} = \nabla\cdot\mathbf{F} = \left( \frac{\partial}{\partial x}, \frac{\partial}{\partial y}, \frac{\partial}{\partial z} \right) \cdot (U,V,W) = \frac{\partial U}{\partial x} +\frac{\partial V}{\partial y} +\frac{\partial W}{\partial z}. $$}

At the left, we have the notation in words, followed by a notation using the {$\nabla$} operator ({$\nabla$} is the Greek letter "nabla"), while at the right, we have the same concept expressed in partial differential notation. So, when we use this {$\nabla$} operator in equations, we are actually using partial differential equations. However, with the notation we will use, we can concentrate on the physical meaning of the equations rather than distract and confuse ourselves with trivial details.
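As a numerical illustration of this definition (a sketch using NumPy; the test field F = (x, y, z) is chosen so that analytically div F = 3 everywhere):

```python
import numpy as np

# Sample the field F = (x, y, z) on a small 3D grid.
n = 16
ax = np.linspace(-1.0, 1.0, n)
x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
U, V, W = x, y, z
h = ax[1] - ax[0]  # grid spacing

# div F = dU/dx + dV/dy + dW/dz, via central finite differences.
div = (np.gradient(U, h, axis=0)
       + np.gradient(V, h, axis=1)
       + np.gradient(W, h, axis=2))

print(div[n // 2, n // 2, n // 2])  # close to the analytic value 3
```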

So, what Einstein was saying is that these concepts, too, gradually morphed from useful and meaningful tools into something that essentially took all branches of physics hostage. As early as 1931, Einstein already recognised that it was no longer reasoning and fundamental ideas that guided scientific progress, but rather a number of abstract concepts which drifted ever further away from having any physical meaning at all, a destructive process which continues to this very day.

Now let us briefly introduce the main mathematical concepts we will use: derivative, divergence, curl, gradient, the "Laplacian" and some useful vector identities. For a more in-depth introduction, one can follow a free online class on multivariable calculus with exercises and instruction videos on Khan Academy.

derivatives

The vector calculus concepts we will introduce below (divergence, gradient, curl and Laplacian) are what are mathematically called derivatives:

The derivative of a function of a real variable measures the sensitivity to change of a quantity (a function value or dependent variable) which is determined by another quantity (the independent variable). Derivatives are a fundamental tool of calculus. For example, the derivative of the position of a moving object with respect to time is the object's velocity: this measures how quickly the position of the object changes when time is advanced.

Contrary to financial "derivatives", those infamous financial weapons of mass destruction, in mathematics a derivative of a function is, as the name suggests, a function derived from another function. It is not destructive in any way; it simply says something about a function. And what it says is not arbitrary, but follows specific definitions. In 1 dimension, for example, the derivative of a function always gives the slope of that function, as illustrated in this Wikipedia picture:

The graph of a function, drawn in black, and a tangent line to that function, drawn in red. The slope of the tangent line is equal to the derivative of the function at the marked point.

So, the mathematical concept of derivative says something about the "thing" it's the derivative of. In 1 dimension, 1D, this is always the rate of change of the function the derivative is calculated of.

Now this concept can be applied multiple times. For example, just as the derivative of the position of a vehicle gives its speed, the derivative thereof in turn gives you the rate of change of speed, which is called "acceleration". And since it is the derivative of the derivative of position, it is called the second derivative, or the second order derivative:

In calculus, the second derivative, or the second order derivative, of a function f is the derivative of the derivative of f. Roughly speaking, the second derivative measures how the rate of change of a quantity is itself changing; for example, the second derivative of the position of a vehicle with respect to time is the instantaneous acceleration of the vehicle, or the rate at which the velocity of the vehicle is changing with respect to time. In Leibniz notation:

{$$ \mathbf{a} = \frac{d\mathbf{v}}{dt} = \frac{d^2\boldsymbol{x}}{dt^2}, $$}

where the last term is the second derivative expression.
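This chain of derivatives is easy to check numerically (a sketch using NumPy, with an arbitrarily chosen constant acceleration of 2 m/s²):

```python
import numpy as np

# Position of a uniformly accelerating object: x(t) = 0.5 * a * t^2 with a = 2.
t = np.linspace(0.0, 10.0, 1001)
x = 0.5 * 2.0 * t**2

v = np.gradient(x, t)  # first derivative of position: velocity, ~ 2 * t
a = np.gradient(v, t)  # second derivative of position: acceleration, ~ 2

print(round(v[500], 3), round(a[500], 3))  # at t = 5 s: 10.0 and 2.0
```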

In vector calculus, four important derivatives are defined. Three first-order ones (divergence, curl and gradient) and one second order one (Laplacian). Within this context, it is important to note the difference between a scalar and a vector (field):

  • A scalar is a single number, such as for example the air pressure. And since a scalar is described by a single number, it has a magnitude but does not have a direction.
  • A vector contains multiple numbers and therefore has both a direction and a magnitude, such as for example the flow of a fluid down a hill.

Further, within vector calculus, the derivatives we use are spatial derivatives, which express the rate of change with respect to space or position rather than with respect to time:

A spatial derivative is a measure of how a quantity is changing in space. This is in contrast to a temporal derivative, which would be a measure of how a quantity is changing in time.
For instance, if you placed a metal bar with one end in ice water, and the other end in boiling water, you could measure the temperature along the bar. The temperature would be different at each point along the bar. The rate of change of this temperature along the bar is a spatial derivative. (A temporal derivative would be if you took a hot piece of metal and put one end in ice, then measured the temperature at the other end over time, and found the rate at which it cools down.)

This means that we can associate a unit of measurement with these derivatives, namely per meter [{$/m$}] for first order derivatives and per meter squared [{$/m^2$}] for second order derivatives. And since these units of measurement have a specific meaning in physics, it is most important to keep track of these throughout the whole of our model. This way, we can keep our model consistent and meaningful.

It is very important to have an intuitive interpretation of what these derivatives mean in the physical context of fluid dynamics, which is what aether physics is all about.

divergence

In vector calculus, divergence is a vector operator that produces a signed scalar field giving the quantity of a vector field's source at each point. More technically, the divergence represents the volume density of the outward flux of a vector field from an infinitesimal volume around a given point.
As an example, consider air as it is heated or cooled. The velocity of the air at each point defines a vector field. While air is heated in a region, it expands in all directions, and thus the velocity field points outward from that region. The divergence of the velocity field in that region would thus have a positive value. While the air is cooled and thus contracting, the divergence of the velocity has a negative value.

As an intuitive explanation, one can say that the divergence describes something like the rate at which a gas, fluid or solid "thing" is expanding or contracting. When we have expansion, we have an outgoing flow, while with contraction we have an inward flow. As an analogy, consider blowing up a balloon. When you blow it up, it expands and thus we have a positive divergence. When you let air out, it contracts and thus we have a negative divergence.

It can be both denoted as "{$ div $}" and by using the "nabla operator" as "{$ \nabla \cdot $}".

In fluid dynamics, divergence is a measure of compression. Therefore, by definition, for an incompressible medium or vector field the divergence is zero, as is the case, for example, with the magnetic field; this is called Gauss's law for magnetism:

{$$ \nabla \cdot \mathbf{B} = 0 $$}
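This can be checked symbolically for any concrete divergence-free field (a sketch using SymPy; the sample field B = (-y, x, 0) is an arbitrary choice):

```python
import sympy as sp

x, y, z = sp.symbols("x y z")

# A sample solenoidal (divergence-free) field: B = (-y, x, 0).
Bx, By, Bz = -y, x, sp.Integer(0)

div_B = sp.diff(Bx, x) + sp.diff(By, y) + sp.diff(Bz, z)
print(div_B)  # 0, as Gauss's law for magnetism requires
```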

curl

In vector calculus, the curl is a vector operator that describes the infinitesimal rotation of a 3-dimensional vector field. At every point in the field, the curl of that point is represented by a vector. The attributes of this vector (length and direction) characterize the rotation at that point.
The direction of the curl is the axis of rotation, as determined by the right-hand rule, and the magnitude of the curl is the magnitude of rotation. If the vector field represents the flow velocity of a moving fluid, then the curl is the circulation density of the fluid.

[...]

The alternative terminology rotor or rotational and alternative notations {$ rot \, \mathbf{F} $} and {$ \nabla \times \mathbf{F} $} are often used (the former especially in many European countries).

[...]

Intuitive interpretation
Suppose the vector field describes the velocity field of a fluid flow (such as a large tank of liquid or gas) and a small ball is located within the fluid or gas (the centre of the ball being fixed at a certain point). If the ball has a rough surface, the fluid flowing past it will make it rotate. The rotation axis (oriented according to the right hand rule) points in the direction of the curl of the field at the centre of the ball, and the angular speed of the rotation is half the magnitude of the curl at this point.

Let us note that, unlike the divergence, the curl has both a length and a direction, which means that it gives you a vector, while the divergence gives you a single number, which is called a scalar.
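The intuitive rotating-ball picture can be verified symbolically (a sketch using SymPy; the rigid-rotation field v = (-y, x, 0), which rotates about the z-axis with unit angular speed, is an arbitrary choice):

```python
import sympy as sp

x, y, z = sp.symbols("x y z")

# Rigid rotation about the z-axis with unit angular speed: v = (-y, x, 0).
U, V, W = -y, x, sp.Integer(0)

curl = (sp.diff(W, y) - sp.diff(V, z),
        sp.diff(U, z) - sp.diff(W, x),
        sp.diff(V, x) - sp.diff(U, y))

# The curl points along the rotation axis, and its magnitude (2) is
# twice the angular speed, matching the half-magnitude rule quoted above.
print(curl)  # (0, 0, 2)
```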

gradient

In mathematics, the gradient is a generalization of the usual concept of derivative to functions of several variables. [...] Similarly to the usual derivative, the gradient represents the slope of the tangent of the graph of the function. More precisely, the gradient points in the direction of the greatest rate of increase of the function, and its magnitude is the slope of the graph in that direction.

The gradient concept is very similar to that of Grade or slope:

The grade (also called slope, incline, gradient, pitch or rise) of a physical feature, landform or constructed line refers to the tangent of the angle of that surface to the horizontal. It is a special case of the gradient in calculus where zero indicates gravitational level. A larger number indicates higher or steeper degree of "tilt". Often slope is calculated as a ratio of "rise" to "run", or as a fraction ("rise over run") in which run is the horizontal distance and rise is the vertical distance.

Intuitively, the gradient gives you the direction and size of the biggest change of a function. In the mountain analogy, the gradient points straight uphill, opposite to the direction a ball put on the mountain surface would start rolling. The steeper the surface, the bigger the gradient.

Let us note that, unlike the divergence, which takes a vector and gives you a scalar value, the gradient takes a scalar and gives you a vector. So, the gradient and the divergence are complementary to one another.
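Both properties are easy to see in a small symbolic example (a sketch using SymPy; the paraboloid {$ \phi = x^2 + y^2 $}, a "valley" with its lowest point at the origin, is an arbitrary choice):

```python
import sympy as sp

x, y = sp.symbols("x y")
phi = x**2 + y**2  # a scalar field: a paraboloid "valley"

# The gradient turns the scalar field into a vector field.
grad = (sp.diff(phi, x), sp.diff(phi, y))
print(grad)  # (2*x, 2*y): a vector pointing away from the origin, uphill
```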

Laplacian

In mathematics, the Laplace operator or Laplacian is a differential operator given by the divergence of the gradient of a function on Euclidean space. It is usually denoted by the symbols {$\nabla \cdot \nabla$}, {$\nabla^2$}, or {$\Delta$}. The Laplacian {$ \Delta f(p)$} of a function f at a point p, up to a constant depending on the dimension, is the rate at which the average value of f over spheres centered at p deviates from f(p) as the radius of the sphere grows. In a Cartesian coordinate system, the Laplacian is given by the sum of second partial derivatives of the function with respect to each independent variable. In other coordinate systems such as cylindrical and spherical coordinates, the Laplacian also has a useful form.

[...]

The Laplacian occurs in differential equations that describe many physical phenomena, such as electric and gravitational potentials, the diffusion equation for heat and fluid flow, wave propagation, and quantum mechanics. The Laplacian represents the flux density of the gradient flow of a function. For instance, the net rate at which a chemical dissolved in a fluid moves toward or away from some point is proportional to the Laplacian of the chemical concentration at that point; expressed symbolically, the resulting equation is the diffusion equation. For these reasons, it is extensively used in the sciences for modelling all kinds of physical phenomena.
The Laplace operator is named after the French mathematician Pierre-Simon de Laplace (1749–1827), who first applied the operator to the study of celestial mechanics, where the operator gives a constant multiple of the mass density when it is applied to a given gravitational potential. Solutions of the equation ∆f = 0, now called Laplace's equation, are the so-called harmonic functions, and represent the possible gravitational fields in free space.
The Laplace operator is a second order differential operator in the n-dimensional Euclidean space, defined as the divergence (∇·) of the gradient (∇ƒ). Thus if ƒ is a twice-differentiable real-valued function, then the Laplacian of ƒ is defined by

{$$ \Delta f = \nabla^2 f = \nabla \cdot \nabla f $$}

In this definition, the function f is a real-valued function:

In mathematics, a real-valued function or real function is a function whose values are real numbers. In other words, it is a function that assigns a real number to each member of its domain.

This means that such a function defines a scalar field in terms of vector calculus. And therefore, it should be no surprise that the Laplacian of a scalar field {$\psi$} is defined exactly the same as the divergence of the gradient:

{$$ \nabla^2 \psi = \nabla \cdot (\nabla \psi) $$}

And since the divergence gives a scalar result, the Laplacian of a scalar field also gives a scalar result.
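This equality is easy to confirm symbolically (a sketch using SymPy; the scalar field is an arbitrary choice):

```python
import sympy as sp

x, y, z = sp.symbols("x y z")
psi = x**2 * y + sp.sin(z)  # an arbitrary twice-differentiable scalar field

grad = [sp.diff(psi, v) for v in (x, y, z)]
div_grad = sum(sp.diff(g, v) for g, v in zip(grad, (x, y, z)))
laplacian = sum(sp.diff(psi, v, 2) for v in (x, y, z))

print(sp.simplify(div_grad - laplacian))  # 0: div(grad psi) equals the Laplacian
```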

When we equate the Laplacian to 0, we get Laplace's equation:

{$$ \nabla^2 \phi=0, $$}

Laplace's equation and Poisson's equation are the simplest examples of elliptic partial differential equations. The general theory of solutions to Laplace's equation is known as potential theory. The solutions of Laplace's equation are the harmonic functions, which are important in many fields of science, notably the fields of electromagnetism, astronomy, and fluid dynamics, because they can be used to accurately describe the behavior of electric, gravitational, and fluid potentials. In the study of heat conduction, the Laplace equation is the steady-state heat equation.

It is also used in the Wave equation {$$ \nabla^2 \psi=\frac{1}{v^2}\frac{\partial^2\psi}{\partial t^2}, $$}

the Helmholtz equation {$$ \nabla^2 \psi+k^2\psi=0, $$}

and the Schrödinger equation {$$ i\hbar \frac{\partial\Psi(x,y,z,t)}{\partial t}=\left[-\frac{\hbar^2}{2m} \nabla^2+V(x) \right] \Psi(x,y,z,t). $$}

The Laplacian can be generalized from three dimensions to four-dimensional "spacetime", which is known as the d'Alembertian or wave operator.

In the following video, the Laplacian is presented with an intuitive explanation:

http://www.youtube.com/watch?v=EW08rD-GFh0

In Image Processing a Laplacian filter is used for edge detection:

Intuitive explanation

The Laplacian is analogous to the second order derivative in one dimension. At areas where the function has a (local) minimum, the Laplacian is positive, while in areas where the function has a local maximum, the Laplacian is negative. As can be seen in the above pictures, this can be used in image processing to find areas in a picture where there are large changes in brightness, which are called edges.
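A minimal sketch of such an edge-detecting filter (plain NumPy, using the standard 5-point stencil for the discrete Laplacian on a tiny made-up image):

```python
import numpy as np

# A tiny 8x8 "image": black background with one bright square.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0

# Discrete Laplacian (5-point stencil) on the interior points.
lap = np.zeros_like(img)
lap[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1]
                   + img[1:-1, :-2] + img[1:-1, 2:]
                   - 4.0 * img[1:-1, 1:-1])

# Zero in the flat regions, non-zero (here negative, at a local
# maximum in brightness) along the edges of the square.
print(lap[3, 3], lap[2, 2])  # 0.0 -2.0
```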

Vector Laplacian

The (scalar field) Laplacian concept can be generalized into vector form, the Vector Laplacian:

In mathematics and physics, the vector Laplace operator, denoted by {$ \nabla ^{2} $}, named after Pierre-Simon Laplace, is a differential operator defined over a vector field. The vector Laplacian is similar to the scalar Laplacian. Whereas the scalar Laplacian applies to a scalar field and returns a scalar quantity, the vector Laplacian applies to a vector field and returns a vector quantity. When computed in rectangular Cartesian coordinates, the returned vector field is equal to the vector field of the scalar Laplacian applied to the individual elements.
The vector Laplacian of a vector field {$ \mathbf{A} $} is defined as:

{$$ \nabla^2 \mathbf{A} = \nabla(\nabla \cdot \mathbf{A}) - \nabla \times (\nabla \times \mathbf{A}) $$}

From this, we can work out the curl of the curl:

{$$ \nabla \times \left( \nabla \times \mathbf{A} \right) = \nabla(\nabla \cdot \mathbf{A}) - \nabla^{2}\mathbf{A}$$}

and the gradient of the divergence:

{$$ \nabla(\nabla\cdot\mathbf{A})=\nabla^{2}\mathbf{A} + \nabla\times(\nabla\times\mathbf{A}) $$}
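These identities can be verified symbolically, component by component (a sketch using SymPy; the test field A is an arbitrary smooth choice):

```python
import sympy as sp

x, y, z = sp.symbols("x y z")
A = [x**2 * y, y * z**2, sp.sin(x) * z]  # an arbitrary smooth test field

# Gradient of the divergence.
div_A = sum(sp.diff(c, v) for c, v in zip(A, (x, y, z)))
grad_div = [sp.diff(div_A, v) for v in (x, y, z)]

# Curl of the curl.
curl_A = [sp.diff(A[2], y) - sp.diff(A[1], z),
          sp.diff(A[0], z) - sp.diff(A[2], x),
          sp.diff(A[1], x) - sp.diff(A[0], y)]
curl_curl = [sp.diff(curl_A[2], y) - sp.diff(curl_A[1], z),
             sp.diff(curl_A[0], z) - sp.diff(curl_A[2], x),
             sp.diff(curl_A[1], x) - sp.diff(curl_A[0], y)]

# Vector Laplacian: in Cartesian coordinates, the scalar Laplacian
# applied to each component.
vec_lap = [sum(sp.diff(c, v, 2) for v in (x, y, z)) for c in A]

print([sp.simplify(l - (g - c))
       for l, g, c in zip(vec_lap, grad_div, curl_curl)])  # [0, 0, 0]
```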

some vector calculus identities

Identities are equations that always hold: the expression to the left of the '=' sign gives the same result as the one on the right, whatever field they are applied to. So, the equations just above are examples of vector identities. We will also use the following ones:

  • The curl of the gradient of any twice-differentiable scalar field {$ \phi $} is always the zero vector:

{$$\nabla \times ( \nabla \phi ) = \mathbf{0}$$}

  • The divergence of the curl of any vector field A is always zero:

{$$\nabla \cdot ( \nabla \times \mathbf{A} ) = 0 $$}
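Both identities are easily confirmed symbolically (a sketch using SymPy; the fields {$ \phi $} and A are arbitrary smooth choices):

```python
import sympy as sp

x, y, z = sp.symbols("x y z")

def grad(f):
    return [sp.diff(f, v) for v in (x, y, z)]

def div(F):
    return sum(sp.diff(c, v) for c, v in zip(F, (x, y, z)))

def curl(F):
    U, V, W = F
    return [sp.diff(W, y) - sp.diff(V, z),
            sp.diff(U, z) - sp.diff(W, x),
            sp.diff(V, x) - sp.diff(U, y)]

phi = x**2 * sp.sin(y) * z  # an arbitrary scalar field
A = [x * y, y * z, z * x]   # an arbitrary vector field

print([sp.simplify(c) for c in curl(grad(phi))])  # [0, 0, 0]
print(sp.simplify(div(curl(A))))                  # 0
```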

Physical field of force

Now let us consider Newton's second law of motion:

The second law states that the rate of change of momentum of a body is directly proportional to the force applied, and this change in momentum takes place in the direction of the applied force.

{$$ \mathbf{F_N} = \frac{\mathrm{d}\mathbf{p}}{\mathrm{d}t} = \frac{\mathrm{d}(m\mathbf v)}{\mathrm{d}t} $$}

The second law can also be stated in terms of an object's acceleration. Since Newton's second law is only valid for constant-mass systems, the mass can be taken outside the differentiation operator by the constant factor rule in differentiation. Thus,

{$$ \mathbf{F_N} = m\,\frac{\mathrm{d}\mathbf{v}}{\mathrm{d}t} = m\mathbf{a}, $$}

where FN is the net force applied, m is the mass of the body, and a is the body's acceleration. Thus, the net force applied to a body produces a proportional acceleration. In other words, if a body is accelerating, then there is a force on it.
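The equivalence of the two forms is easy to check numerically (a sketch using NumPy; the mass of 2 kg and the acceleration of 3 m/s² are arbitrary choices):

```python
import numpy as np

# A 2 kg body accelerating uniformly at 3 m/s^2 along one axis.
m = 2.0
t = np.linspace(0.0, 5.0, 501)
v = 3.0 * t            # velocity [m/s]
p = m * v              # momentum [kg m/s]

F = np.gradient(p, t)  # F = dp/dt, the rate of change of momentum
print(round(F[250], 6))  # 6.0 [N], i.e. exactly m * a
```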

From this, we can make an intuitive, first explanation for what a physical field of force actually is:

A physical field of force is the 3D version of the 1D concept of acceleration.

In 3D, things are more complicated, but the same principles apply. Note though, that velocity and acceleration are first and second derivatives of the position with respect to time (t).

Stowe's aether model

The basis of Stowe's theory is the definition of a simple model for describing the aether as if it were a compressible, adiabatic and inviscid fluid. Such a fluid can be described with Euler's equations:

The equations represent Cauchy equations of conservation of mass (continuity), and balance of momentum and energy, and can be seen as particular Navier–Stokes equations with zero viscosity and zero thermal conductivity.

In other words, with such an aether model, we can describe the conservation of mass, momentum and energy and if the hypothesis of the existence of such a kind of aether holds, these are the only three quantities that are (fundamentally) conserved.

The definition of his aether model is straightforward and can be found in his "A Foundation for the Unification of Physics" (1996) (*):

We will start by defining a single vector entity (a basic quantum [not a photon, neutrino, graviton]). The fundamental properties of this quantum entity are: it has momentum P, occupies space consisting of volume s, obeys Newton's laws of motion, exerts no force, and no external forces are exerted on it. These quanta therefore move through four dimensional space (x,y,z,t) at velocity V and have an apparent mass m, equal to (P/V).
Next, a population of n of these quanta, having random orientation, occupying volume s', [...] results in a system described by basic kinetic theory (without friction or interacting forces {a superfluid state}). Since each quantum, by definition, has an intrinsic momentum P, the system momentum p_s, becomes simply n[p].

With this definition, all kinds of considerations can be made, for example about the question of whether or not an aether model should be compressible. In a Usenet posting dated 4/26/97 he wrote (*):

A little history of Maxwell's work. Maxwell fully acknowledged that his Treatises were, of necessity, incomplete (or as he phrased it: "in our current state of ignorance"). He takes the classical simplification of assuming an incompressible medium. This is done because it significantly simplifies the resulting derivations, and unless the media departs significantly from its equilibrium density, such compressibility has very little (negligible) impact on the results under consideration. But compressibility does affect the basic properties. Assumption of incompressibility mathematically defines the divergence of field velocity v as:

{$$ div \, \mathbf{v} = 0 $$}

where v is the media's particulate velocity. A direct consequence of this definition is that waves cannot be created or propagated in such a system (wave speed is infinite). But, as we all know, even though we assume incompressibility, no medium (not even liquids and solids) is truly incompressible. The consequence of this is, for field velocity v:

{$$ div \, \mathbf{v} > 0 $$}

Thus the momentum field property (p = mv) is

{$$ div \, \mathbf{p} > 0 $$}

This has measurable physical consequences, and IS A FUNDAMENTAL UNIQUE PROPERTY of the field! Given that divergence is defined as:

{$$ div = \lim_{V \to 0} \oint \frac{\delta A}{\delta V} \qquad \qquad \text{(A is area)}$$}

and has physical units of inverse distance (meters), div v becomes the measure of an oscillation in the velocity field at any point in the continuum. The resulting momentum fluctuation is ... elemental charge, a unique property that is a consequence of the field's compressibility.

This statement illustrates the reasoning which led to Stowe's interpretation of the concept of charge, which he interprets as being a property of the field, or rather the medium. In "A Foundation for the Unification of Physics" (1996) he explained that in order to calculate the value of e, one needs to consider a toroidal topology, whereby both the enclosed volume and the surface area can be expressed in terms of the large toroidal radius, R, and the small poloidal radius, r (*):

(*) Slightly edited for clarity: replaced ASCII formulas with math symbols, added emphasis, etc.
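Stowe's divergence argument can be illustrated numerically. The following sketch is our own construction for illustration (not part of Stowe's derivation): it estimates div v on a grid with central finite differences, for a test field whose divergence is known exactly.

```python
import numpy as np

# Illustrative sketch (not from Stowe's paper): estimate div v on a grid with
# central finite differences, for the test field v = (x, y, z), whose exact
# divergence is 3 everywhere (a compressible flow, since div v != 0).
n = 32
ax = np.linspace(-1.0, 1.0, n)
x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
vx, vy, vz = x, y, z  # example velocity field, chosen for a known answer

h = ax[1] - ax[0]  # grid spacing
div_v = (np.gradient(vx, h, axis=0)
         + np.gradient(vy, h, axis=1)
         + np.gradient(vz, h, axis=2))

print(div_v.mean())  # ≈ 3, exact here since the field is linear
```

For this artificial linear field the estimate is essentially exact; in Stowe's picture, div v of the actual medium would instead fluctuate about its equilibrium value.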



1D: http://www.youtube.com/watch?v=yNlOfDHyaXg

1D/2D: http://www.youtube.com/watch?v=xt5q3UOfG0Y

3D, Haramein's "string theory": http://www.youtube.com/watch?v=Yb1ToYeCVnI

http://www.youtube.com/watch?v=VE520z_ugcU


Decomposing Stowe's vector field

Now that we have seen that the consideration of a toroidal topology yields remarkable results - which can even explain some anomalies - we can apply vector calculus to arrive at a formal derivation and verification of the results acquired by Stowe. We shall do so by decomposing the field proposed by Stowe into two components. We begin by following Stowe and defining a vector field analogous to fluid dynamics, using the continuum hypothesis:

At a microscopic scale, fluid comprises individual molecules and its physical properties (density, velocity, etc.) are violently non-uniform. However, the phenomena studied in fluid dynamics are macroscopic, so we do not usually take this molecular detail into account. Instead, we treat the fluid as a continuum by viewing it at a coarse enough scale that any “small” fluid element actually still contains very many molecules. One can then assign a local bulk flow velocity v(x,t) to the element at point x, by averaging over the much faster, violently fluctuating Brownian molecular velocities. Similarly one defines a locally averaged density ρ(x,t), etc. These locally averaged quantities then vary smoothly with x on the macroscopic scale of the flow.

We define this vector field P as:

{$$ \mathbf{P}(\mathbf{x},t) = \rho(\mathbf{x},t) \mathbf{v}(\mathbf{x},t), $$}

where x is a point in space, $\rho(\mathbf{x},t)$ is the averaged aether density at x and $\mathbf{v}(\mathbf{x},t)$ is the local bulk flow velocity at x. Since in practice, this averaging process is usually implied, we consider the following notations to be roughly equivalent:

{$$ \mathbf{P} = \rho \mathbf{v}, $$} {$$ \mathbf{p} = m \mathbf{v}, $$} {$$ \mathbf{P} = m \mathbf{v}. $$}

We will generally use P to denote the "bulk" field and p to refer to an individual "quantum", though this convention may not be followed everywhere.

Helmholtz decomposition

Let us now introduce the Helmholtz decomposition:

In physics and mathematics, in the area of vector calculus, Helmholtz's theorem, also known as the fundamental theorem of vector calculus, states that any sufficiently smooth, rapidly decaying vector field in three dimensions can be resolved into the sum of an irrotational (curl-free) vector field and a solenoidal (divergence-free) vector field; this is known as the Helmholtz decomposition.

The physical interpretation of this decomposition is that a given vector field can be decomposed into a longitudinal and a transverse field component:

A terminology often used in physics refers to the curl-free component of a vector field as the longitudinal component and the divergence-free component as the transverse component. This terminology comes from the following construction: Compute the three-dimensional Fourier transform of the vector field {$\mathbf{F_v}$}. Then decompose this field, at each point k, into two components, one of which points longitudinally, i.e. parallel to k, the other of which points in the transverse direction, i.e. perpendicular to k.
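The quoted Fourier construction can be sketched directly with numpy's FFT. This is an illustrative example of the construction, not part of the quoted text; the example field and grid size are arbitrary choices.

```python
import numpy as np

# Sketch of the quoted Fourier construction: split a periodic example field
# into a longitudinal part (parallel to k) and a transverse part
# (perpendicular to k).
n = 16
k1d = np.fft.fftfreq(n) * n  # integer wavenumbers
kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
k2 = kx**2 + ky**2 + kz**2
k2[0, 0, 0] = 1.0  # the k = 0 mode has no direction; avoid dividing by zero

rng = np.random.default_rng(0)
F = rng.standard_normal((3, n, n, n))  # arbitrary example vector field
Fh = np.fft.fftn(F, axes=(1, 2, 3))

# Longitudinal part: projection of the transformed field onto k;
# transverse part: the remainder.
kdotF = kx * Fh[0] + ky * Fh[1] + kz * Fh[2]
FLh = np.stack([kx, ky, kz]) * kdotF / k2
FTh = Fh - FLh

# In k-space, div is ik·(·) and curl is ik×(·): the transverse part must be
# divergence-free and the longitudinal part curl-free.
divT = kx * FTh[0] + ky * FTh[1] + kz * FTh[2]
curlL_x = ky * FLh[2] - kz * FLh[1]  # x-component of k × F_L
print(np.abs(divT).max(), np.abs(curlL_x).max())  # both ≈ 0
```

The same projection, transformed back to real space, yields the Helmholtz decomposition for periodic fields.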

It can be shown that performing a decomposition this way, indeed results in the Helmholtz decomposition. Also, a vector field can be uniquely specified by a prescribed divergence and curl:

The term "Helmholtz Theorem" can also refer to the following. Let {$\mathbf{C}$} be a solenoidal vector field and {$ d $} a scalar field on {$ \mathbb{R}^3 $} which are sufficiently smooth and which vanish faster than {$ 1/r^2 $} at infinity. Then there exists a vector field {$\mathbf{F_v}$} such that

{$$ \nabla \cdot \mathbf{F_v} = d \text{ and } \nabla \times \mathbf{F_v} = \mathbf{C} $$}

If, additionally, the vector field {$\mathbf{F_v}$} vanishes as {$r \to \infty $}, then {$\mathbf{F_v}$} is unique.
In other words, a vector field can be constructed with both a specified divergence and a specified curl, and if it also vanishes at infinity, it is uniquely specified by its divergence and curl. This theorem is of great importance in electrostatics, since Maxwell's equations for the electric and magnetic fields in the static case are of exactly this type.

Definition of the One field

So, let us define a vector field {$\mathbf{A}_T$} for the magnetic potential, a scalar field {$\Phi_L$} for the electric potential, a vector field {$\mathbf{B}$} for the magnetic field and a vector field {$\mathbf{E}$} for the electric field by:

{$$ \mathbf{A}_T = \nabla \times \mathbf{v} $$} {$$ \Phi_L = \nabla \cdot \mathbf{v} $$}

{$$ \mathbf{B} = \nabla \times \mathbf{A}_T = \nabla \times (\nabla \times \mathbf{v}) $$} {$$ \mathbf{E} = - \nabla \Phi_L = - \nabla (\nabla \cdot \mathbf{v}) $$}

According to the above theorem, {$ \mathbf{v} $} is uniquely specified by {$\Phi_L$} and {$\mathbf{A}_T$}. And, since the curl of the gradient of any twice-differentiable scalar field {$ \Phi $} is always the zero vector, {$\nabla \times ( \nabla \Phi ) = \mathbf{0}$}, and the divergence of the curl of any vector field {$ \mathbf{v} $} is always zero, {$\nabla \cdot ( \nabla \times \mathbf{v} ) = 0 $}, we can establish that {$\nabla \Phi_L$} is indeed curl-free and {$\mathbf{A}_T$} is indeed divergence-free.
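Both identities can be verified symbolically. The sketch below checks them for concrete example fields, chosen arbitrarily, since any sufficiently smooth fields would do:

```python
import sympy as sp
from sympy.vector import CoordSys3D, curl, divergence, gradient, Vector

# Symbolic check of curl(grad phi) = 0 and div(curl v) = 0 for
# arbitrarily chosen example fields.
N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

phi = x**2 * y + sp.sin(y * z)                            # example scalar field
v = y * z * N.i + x * z**2 * N.j + sp.exp(x * y) * N.k    # example vector field

print(curl(gradient(phi)) == Vector.zero)       # True
print(sp.simplify(divergence(curl(v))) == 0)    # True
```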

For the summation of {$ \mathbf{E} $} and {$ \mathbf{B} $}, we get:

{$$ \mathbf{E} + \mathbf{B} = - \nabla (\nabla \cdot \mathbf{v}) + \nabla \times (\nabla \times \mathbf{v}) = - \nabla^2 \mathbf{v}, $$}

which is the negated vector Laplacian for {$ \mathbf{v} $}.
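This summation can also be checked symbolically. The sketch below verifies, for an arbitrarily chosen example field, that {$ - \nabla (\nabla \cdot \mathbf{v}) + \nabla \times (\nabla \times \mathbf{v}) $} equals the negated component-wise Laplacian in Cartesian coordinates:

```python
import sympy as sp
from sympy.vector import CoordSys3D, curl, divergence, gradient

# Check E + B = -lap(v) for a concrete example field, i.e. the identity
# lap(v) = grad(div v) - curl(curl v), component-wise in Cartesian coordinates.
N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z
v = (x**2 * y) * N.i + (y * z**3) * N.j + sp.sin(x * z) * N.k  # example field

EplusB = -gradient(divergence(v)) + curl(curl(v))

def scalar_lap(f):  # component-wise (scalar) Laplacian
    return sp.diff(f, x, 2) + sp.diff(f, y, 2) + sp.diff(f, z, 2)

lap_v = (scalar_lap(v.dot(N.i)) * N.i + scalar_lap(v.dot(N.j)) * N.j
         + scalar_lap(v.dot(N.k)) * N.k)

residual = EplusB + lap_v  # should be the zero vector
print([sp.simplify(residual.dot(e)) for e in (N.i, N.j, N.k)])  # [0, 0, 0]
```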

Since {$\mathbf{v}$} is uniquely specified by {$\Phi_L$} and {$\mathbf{A}_T$}, and vice versa, we can establish that with this definition, we have eliminated "gauge freedom". This clearly differentiates our definition from the usual definition of the magnetic vector potential, about which it is stated:

[The usual] definition does not define the magnetic vector potential uniquely because, by definition, we can arbitrarily add curl-free components to the magnetic potential without changing the observed magnetic field. Thus, there is a degree of freedom available when choosing A.

With our definition, we cannot add curl-free components to {$ \mathbf{v} $}, not only because {$ \mathbf{v} $} is well defined, but also because such additions would essentially be added to {$ \Phi_L $}, which encompasses the curl-free component of our decomposition.

Notes and cut/paste stuff

While this is a logical conclusion indeed, one must take the limitations of the used mathematics into account. In the case of Continuum Mechanics, these are well known:

Continuum mechanics is a branch of mechanics that deals with the analysis of the kinematics and the mechanical behavior of materials modeled as a continuous mass rather than as discrete particles. The French mathematician Augustin-Louis Cauchy was the first to formulate such models in the 19th century. Research in the area continues today.

[...]

Modelling an object as a continuum assumes that the substance of the object completely fills the space it occupies. Modelling objects in this way ignores the fact that matter is made of atoms, and so is not continuous; however, on length scales much greater than that of inter-atomic distances, such models are highly accurate. Fundamental physical laws such as the conservation of mass, the conservation of momentum, and the conservation of energy may be applied to such models to derive differential equations describing the behaviour of such objects, and some information about the particular material studied is added through constitutive relations.

In other words: when one does not take these limitations into account when using the partial differential equations derived this way, the equations indeed, "little by little", take on "the role of master"...

Conservation laws

https://en.wikipedia.org/wiki/Fluid_dynamics#Conservation_laws

TODO

Laplace

https://en.wikipedia.org/wiki/Laplace%27s_equation

In mathematics, Laplace's equation is a second-order partial differential equation named after Pierre-Simon Laplace who first studied its properties. This is often written as:

{$$ \nabla^{2}\varphi =0 \qquad {\mbox{or}}\qquad \Delta \varphi =0 $$}

where {$ \Delta = \nabla^2 $} is the Laplace operator and {$ \varphi $} is a scalar function.

Laplace's equation and Poisson's equation are the simplest examples of elliptic partial differential equations. The general theory of solutions to Laplace's equation is known as potential theory. The solutions of Laplace's equation are the harmonic functions, which are important in many fields of science, notably the fields of electromagnetism, astronomy, and fluid dynamics, because they can be used to accurately describe the behavior of electric, gravitational, and fluid potentials.

[...]

The Laplace equation is also a special case of the Helmholtz equation.
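As a quick check of the claim that the harmonic functions are the solutions of Laplace's equation, here are two classic examples (chosen purely for illustration) verified with sympy:

```python
import sympy as sp

# Two classic harmonic functions, checked against lap(phi) = 0: the quadratic
# x^2 - y^2 and the 3-D point potential 1/r (away from the origin).
x, y, z = sp.symbols('x y z')

def lap(f):
    return sp.diff(f, x, 2) + sp.diff(f, y, 2) + sp.diff(f, z, 2)

r = sp.sqrt(x**2 + y**2 + z**2)
print(sp.simplify(lap(x**2 - y**2)))  # 0
print(sp.simplify(lap(1 / r)))        # 0
```

The second example is, up to constants, the potential of a point charge or point mass, which is why it appears throughout electrostatics and gravitation.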

https://en.wikipedia.org/wiki/Vector_Laplacian

In mathematics and physics, the vector Laplace operator, denoted by {$ \nabla ^{2} $}, named after Pierre-Simon Laplace, is a differential operator defined over a vector field. The vector Laplacian is similar to the scalar Laplacian. Whereas the scalar Laplacian applies to a scalar field and returns a scalar quantity, the vector Laplacian applies to a vector field and returns a vector quantity. When computed in rectangular Cartesian coordinates, the returned vector field is equal to the vector field of the scalar Laplacian applied to the individual components.

The vector Laplacian of a vector field {$ \mathbf{A} $} is defined as

{$$ \nabla^2 \mathbf{A} = \nabla(\nabla \cdot \mathbf{A}) - \nabla \times (\nabla \times \mathbf{A}). $$}

In Cartesian coordinates, this reduces to the much simpler form:

{$$\nabla^2 \mathbf{A} = (\nabla^2 A_x, \nabla^2 A_y, \nabla^2 A_z), $$}

where {$A_x$}, {$A_y$}, and {$A_z$} are the components of {$\mathbf{A}$}. This can be seen to be a special case of Lagrange's formula; see Vector triple product.

https://en.wikipedia.org/wiki/Helmholtz_equation

In mathematics, the Helmholtz equation, named for Hermann von Helmholtz, is the partial differential equation

{$$ \nabla^2 A + k^2 A = 0 $$}

where {$ \nabla^2 $} is the Laplacian, k is the wavenumber, and A is the amplitude.
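A minimal check of this equation: a plane wave {$ A = \sin(\mathbf{k} \cdot \mathbf{x}) $} satisfies it with {$ k^2 = |\mathbf{k}|^2 $}. A sympy sketch (the symbolic wave vector is arbitrary):

```python
import sympy as sp

# A plane wave A = sin(k·x) solves the Helmholtz equation lap(A) + k^2 A = 0,
# here with an arbitrary symbolic wave vector (k1, k2, k3).
x, y, z = sp.symbols('x y z')
k1, k2, k3 = sp.symbols('k1 k2 k3', real=True)
A = sp.sin(k1 * x + k2 * y + k3 * z)
ksq = k1**2 + k2**2 + k3**2  # k^2 = |k|^2

residual = sp.diff(A, x, 2) + sp.diff(A, y, 2) + sp.diff(A, z, 2) + ksq * A
print(sp.simplify(residual))  # 0
```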

https://en.wikipedia.org/wiki/Gauss%27s_law_for_gravity

The gravitational field g (also called gravitational acceleration) is a vector field – a vector at each point of space (and time). It is defined so that the gravitational force experienced by a particle is equal to the mass of the particle multiplied by the gravitational field at that point.

Gravitational flux is a surface integral of the gravitational field over a closed surface, analogous to how magnetic flux is a surface integral of the magnetic field.

Gauss's law for gravity states:

The gravitational flux through any closed surface is proportional to the enclosed mass.

[...]

The differential form of Gauss's law for gravity states

{$$ \nabla\cdot \mathbf{g} = -4\pi G\rho, $$}

where {$\nabla\cdot$} denotes divergence, G is the universal gravitational constant, and ρ is the mass density at each point.
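The integral form quoted above ("the gravitational flux through any closed surface is proportional to the enclosed mass") can be illustrated for a point mass, for which the flux through any concentric sphere is {$ -4\pi G M $} regardless of the sphere's radius. The numbers below are arbitrary example values:

```python
import math

# Illustration of Gauss's law for gravity for a point mass M: the field
# g = -G M / r^2 (radially inward) gives a flux of -4*pi*G*M through any
# concentric sphere, independent of its radius R.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.0e24      # example enclosed mass, kg

def flux_through_sphere(R):
    g_radial = -G * M / R**2                 # inward radial field on the sphere
    return g_radial * 4.0 * math.pi * R**2   # uniform field times sphere area

expected = -4.0 * math.pi * G * M
for R in (1.0e6, 7.0e6, 4.0e8):
    print(flux_through_sphere(R) / expected)  # ≈ 1.0 for every radius
```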


https://en.wikipedia.org/wiki/Divergence

It can be shown that any stationary flux v(r) that is at least twice continuously differentiable in {$ \mathbb{R}^3 $} and vanishes sufficiently fast for {$ |\mathbf{r}| \to \infty $} can be decomposed into an irrotational part E(r) and a source-free part B(r). Moreover, these parts are explicitly determined by the respective source densities (see above) and circulation densities (see the article Curl):