Phase Transitions, Chaos and Joint Action in the Life Space Foam
Abstract
This paper extends our recently developed Life Space Foam (LSF) model of motivated cognitive dynamics [1]. LSF uses adaptive path integrals to generate Lewinian force–fields on smooth manifolds, in order to characterize the dynamics of individual goal–directed action. According to explanatory theories gaining acceptance in cognitive neuroscience, one of the key properties of this dynamics, capable of linking it to microscopic–level cortical neurodynamics, is its metastability and the resulting phase transitions. Our extended LSF model incorporates the notion of phase transitions and complements it with embedded geometrical chaos. To describe these LSF phase transitions, a general path integral is used, along with the corresponding LSF topology change. As a result, our extended LSF model is able to rigorously represent coaction by two or more actors in the common LSF–manifold. The model yields substantial qualitative differences in geometrical properties between bilateral and multilateral coaction, due to intrinsic chaotic coupling between actors when their number exceeds two.
Keywords: cognitive dynamics, adaptive path integrals, phase transitions, chaos, topology change, human joint action, function approximation
1 Introduction
General stochastic dynamics, developed in a framework of Feynman path integrals, have recently [1] been applied to Lewinian field–theoretic psychodynamics [2, 3, 4], resulting in the development of a new concept of life–space foam (LSF) as a natural medium for motivational (MD) and cognitive (CD) psychodynamics. According to the LSF–formalism, the classic Lewinian life space can be macroscopically represented as a smooth manifold with steady force–fields and behavioral paths, while at the microscopic level it is more realistically represented as a collection of wildly fluctuating force–fields, (loco)motion paths and local geometries (and topologies with holes).
A set of least–action principles is used to model the smoothness of global, macro–level LSF paths, fields and geometry, according to the following prescription. The action A[Φ], with psycho–physical dimensions of Energy × Time and depending on macroscopic paths, fields and geometries (commonly denoted by an abstract field symbol Φ^i), is defined as a temporal integral from the initial time instant t_ini to the final time instant t_fin,

    A[Φ] = ∫_{t_ini}^{t_fin} L[Φ] dt,    (1)

with Lagrangian density given by

    L[Φ] = ∫ dⁿx 𝓛(Φ_i, ∂_x Φ_i, ∂_t Φ_i),

where the integral is taken over all n coordinates x = x(t) of the LSF, and ∂_t Φ_i, ∂_x Φ_i are time and space partial derivatives of the Φ_i–variables over coordinates. The standard least action principle

    δA[Φ] = 0    (2)
gives, in the form of the so–called Euler–Lagrange equations, a shortest (loco)motion path, an extremal force–field, and a life–space geometry of minimal curvature (and without holes). In this way, we effectively derive a unique globally smooth transition map
    F : INTENTION_{t_ini} ⇝ ACTION_{t_fin},    (3)
performed at a macroscopic (global) time–level from some initial time t_ini to the final time t_fin. In this way, we have obtained macro–objects in the global LSF: a single path described by a Newtonian–like equation of motion, a single force–field described by Maxwellian–like field equations, and a single obstacle–free Riemannian geometry (with global topology without holes).
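As a toy numerical illustration of the least–action prescription above (our own sketch, not part of the LSF formalism), the following discretizes the action of a free particle and relaxes the interior points of a perturbed trial path by gradient descent; the extremum is the straight line that the Euler–Lagrange equations predict:

```python
import numpy as np

def discrete_action(x, dt):
    """Discretized action S = sum_i (1/2)*((x[i+1]-x[i])/dt)^2 * dt  (V = 0)."""
    v = np.diff(x) / dt
    return 0.5 * np.sum(v**2) * dt

def relax_path(x0, x1, n=20, steps=4000, lr=0.2):
    """Gradient-descend the discrete action over the interior path points."""
    dt = 1.0 / n
    t = np.linspace(0.0, 1.0, n + 1)
    x = x0 + (x1 - x0) * t + 0.3 * np.sin(np.pi * t)   # perturbed trial path
    for _ in range(steps):
        # dS/dx_i = -(x[i+1] - 2 x[i] + x[i-1]) / dt  for interior points
        grad = -(x[2:] - 2 * x[1:-1] + x[:-2]) / dt
        x[1:-1] -= lr * dt * grad                       # endpoints stay fixed
    return x

path = relax_path(0.0, 1.0)
# the extremal path is (numerically) the straight line x(t) = t,
# and its discrete action approaches the exact value 1/2
```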
To model the corresponding local, micro–level LSF structures of rapidly fluctuating MD & CD, an adaptive path integral is formulated, defining a multi–phase and multi–path (multi–field and multi–geometry) transition amplitude from the state of Intention to the state of Action,

    ⟨Action | Intention⟩ := Σ ∫ D[wΦ] e^{iA[Φ]},    (4)

where the Lebesgue integration is performed over all continuous paths, fields and geometries Φ_con, while summation is performed over all discrete processes and regional topologies Φ_dis. The symbolic differential D[wΦ] in the general path integral (4) represents an adaptive path measure, defined as a weighted product

    D[wΦ] = lim_{N→∞} ∏_{s=1}^{N} w_s dΦ_s.    (5)
The adaptive path integral (4)–(5) represents an ∞–dimensional neural network, with the weights w_s updated by the general rule [1]: new value(t+1) = old value(t) + innovation(t).
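The "new value = old value + innovation" rule can be illustrated with a deliberately simple sketch (our own toy construction, not the paper's exact rule, with made-up names): sampled micro-paths enter a weighted measure, and a single tilting parameter is nudged by the innovation until the weighted (achieved) outcome matches a desired one:

```python
import numpy as np

rng = np.random.default_rng(0)
paths = rng.normal(size=(100, 10))     # 100 sampled discrete micro-paths
outcomes = paths.sum(axis=1)           # scalar outcome of each path

desired, theta = 2.0, 0.0              # target outcome; measure parameter
for _ in range(1000):
    w = np.exp(theta * outcomes)       # weighted path measure, cf. (5)
    w /= w.sum()
    achieved = w @ outcomes            # weighted-average (achieved) outcome
    theta += 0.01 * (desired - achieved)   # new value = old value + innovation
```

After the loop, the weighted average over paths sits at the desired outcome; the adaptation acts on the measure, not on the sampled paths themselves.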
On the other hand, it is well–known that phase transitions (PTs) are phenomena which bring about qualitative physical changes at the macroscopic level in the presence of the same microscopic forces acting among the constituents of a system. Their mathematical description requires translating these qualitative changes into quantitative terms. The standard way of doing this is to consider how the values of thermodynamic observables, obtained in laboratory experiments, vary with temperature, volume, or an external field, and then to associate the experimentally observed discontinuities at a PT with the appearance of some kind of singularity entailing a loss of analyticity [22]. Despite the smoothness of the statistical measures, after the Yang–Lee theorem [23] we know that in the thermodynamic limit non–analytic behaviors of thermodynamic functions are possible whenever the analyticity radius in the complex fugacity plane shrinks to zero, because this entails the loss of uniform convergence in N (the number of degrees of freedom) of any sequence of real–valued thermodynamic functions; all this depends on the distribution of the zeros of the grand canonical partition function. The other developments of the rigorous theory of PTs (see, e.g., [24, 25]) likewise identify PTs with the loss of analyticity.
Similarly, experimental findings and theoretical insights of modern neuroscience converge on interpreting brain physiology within the conceptual framework of nonlinear dynamics, operating at the brink of criticality, which is achieved and maintained by self–organization [5]. In this approach, dynamical patterning of both brain activity and corresponding behaviors is examined in order to develop models of how brain and behavioral events are coordinated. Growing evidence supports the assumption that the linkages between events at the microscopic level of neuronal assemblies and those at macroscopic behavioral levels are better explained as based on shared dynamics – not on any ontological priority [6, 7]. This dynamics is characterized by metastability and phase transitions in self–organized criticality [8], at the level of neuronal assemblies [9, 5], of the functional connectivity of the human brain [10], and of the corresponding behavior patterns [6, 11]. A key feature of this approach is that phenomenological laws at the behavioral level can be connected to a field–theoretical description of cortical dynamics [6]. Dynamic Field Theory (DFT) [12] extends this approach by developing field–theoretic representations of both behavior and its environment [13], thus building on a long–established, albeit metaphorical, tradition of behavioral force–field analysis [2, 3, 4]. Our LSF–formalism can be seen as a further extension of DFT.
Regarding brain modelling: classical physics has provided a strong foundation for understanding brain function through measuring brain activity, modelling the functional connectivity of networks of neurons with algebraic matrices, and modelling the dynamics of neurons and neural populations with sets of coupled differential equations [14, 15]. Various tools from classical physics enabled recognition and documentation of aspects of the physical states of the brain: the structures and dynamics of neurons; the operations of membranes and organelles that generate and channel electric currents; and the molecular and ionic carriers that implement the neural machineries of electrogenesis and learning. They support description of brain functions at several levels of complexity through measuring neural activity in the brains of animal and human subjects engaged in behavioral exchanges with their environments. One of the key properties of brain dynamics is the coordinated oscillations of populations of neurons that change rapidly in concert with changes in the environment [16].
Also, most experimental neurobiologists and neural theorists have focused on sensorimotor functions and their adaptations through various forms of learning and memory. Reliance has been placed on measuring the rates and intervals of trains of action potentials of small numbers of neurons that are tuned to perceptual invariances, and on modelling neural interactions with discrete networks of simulated neurons. These and related studies have given a vivid picture of the cortex as a mosaic of modules, each of which performs a sensory or motor function; they have not given a picture of comparable clarity of the integration of modules (see [16] and references therein).
The EEG analysis performed on rabbits and cats trained to discriminate conditioned stimuli in the various modalities (with EEG recordings collected from high–density electrode arrays fixed on the epidural surfaces of primary sensory and limbic areas) has shown that cortical activity does not change continuously with time, but rather through sequences of multiple spatial patterns during each perceptual action, resembling cinematographic frames on multiple screens [16]. The carrier waves of the patterned activity in frames have come in at least two ranges, identified with beta (12–30 Hz) and gamma (30–80 Hz) oscillations. The abrupt change in dynamical state with each new frame, proposed to be formed by a phase transition [17], has not been describable either with classic integro–differential equations, or with the algebras of neural networks. The initiation and maintenance of shared oscillations by this phase transition requires rapid communication among neurons. Several alternative mechanisms have been proposed as the agency for widespread synchrony. These are based in the dendritic loop current as the chief agent for intracellular communication and the axonal action potential as the chief agent for intercellular communication [16].
According to [16], many–body quantum field theory appears to be the only existing theoretical tool capable of explaining the dynamic origin of long–range correlations, their rapid and efficient formation and dissolution, their interim stability in ground states, the multiplicity of coexisting and possibly non–interfering ground states, their degree of ordering, and their rich textures relating to sensory and motor facets of behaviors. It is a historical fact that many–body quantum field theory was devised and constructed in past decades exactly to understand features like ordered pattern formation and phase transitions in condensed matter physics that could not be understood in classical physics, similar to those in the brain.
Communication by propagating action potentials imposes distance–dependent delays in the onset of resynchronization during a phase transition over an area of cortex. The delays are measurable as brief but distance–dependent phase lags at the various frequencies of oscillation [17]. However, the length of most axons in cortex is a small fraction of observed distances of long–range correlation, with the requirement for synaptic renewal at each successive relay. These long–range correlations are maintained despite continuous variations in transmission frequencies that are apparent in aperiodic ‘chaotic’ oscillations.
Some researchers have sought to explain zero–lag correlations with processes other than axodendritic synaptic transmission, stating that both electric fields and magnetic fields accompany neural loop currents. However, the electric potential gradients of the EEG have been shown by [18] to be inadequate in vivo to account for the long range of the observed coherent activity, largely owing to the shunting action of glia that reduce the fraction of extracellular dendritic current penetrating adjacent neurons and minimize ephaptic crosstalk among cortical neurons.
In this paper, to describe the LSF phase transitions with embedded chaos, we use our adaptive path integral (4) along the corresponding LSF topology change.
This paper extends the earlier establishment of the LSF model by introducing the study of chaos and phase transitions within this framework. This development is motivated – as was the original LSF model – by the potential for an improved theoretical basis for studying brain physiology, and thereby also for overcoming shortfalls in current artificial neural network function representation and approximation technologies.
2 Geometrical Chaos and Topological Phase Transitions
In this section we extend the LSF–formalism to incorporate geometrical chaos and associated topological phase transitions.
It is well–known that, on the basis of the ergodic hypothesis, statistical mechanics describes the physics of systems with many degrees of freedom by replacing time averages of the relevant observables with ensemble averages. Therefore, instead of using statistical ensembles, we can investigate the Hamiltonian (microscopic) dynamics of a system undergoing a phase transition. The reason for tackling dynamics is twofold. First, there are observables, like Lyapunov exponents, that are intrinsically dynamical. Second, the geometrization of Hamiltonian dynamics in terms of Riemannian geometry provides new observables and, in general, an interesting framework in which to investigate the phenomenon of phase transitions [21, 35]. The geometrical formulation of the dynamics of conservative systems [26] was first used by [27] in his studies on the dynamical foundations of statistical mechanics and subsequently became a standard tool to study abstract systems in ergodic theory.
The simplest, mechanical–like LSF–action in the individual’s LSF–manifold Σ has a Riemannian locomotion form [1]

    A[q] = ½ ∫_{t_ini}^{t_fin} g_ij q̇^i q̇^j dt,    (6)

where g_ij is the ‘material’ metric tensor that generates the total ‘kinetic energy’ of cognitive (loco)motions defined by their configuration coordinates q^i and velocities q̇^i = dq^i/dt, with the motivational potential energy V(q) and the standard Hamiltonian

    H(p, q) = ½ g^{ij} p_i p_j + V(q),    (7)

where p_i = g_ij q̇^j are the canonical (loco)motion momenta.
Dynamics of mechanical–like systems with N degrees of freedom, with action (6) and Hamiltonian (7), are commonly given by the set of geodesic equations [32, 33]

    q̈^i + Γ^i_jk q̇^j q̇^k = 0,    (8)

where Γ^i_jk are the Christoffel symbols of the affine Levi–Civita connection of the Riemannian LSF–manifold Σ.
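To make the geodesic equations (8) concrete, here is a minimal numerical sketch (a generic example, not LSF–specific) that integrates them on the unit sphere with metric diag(1, sin²θ); a trajectory launched along the equator stays on it, as a great circle must:

```python
import numpy as np

def geodesic_step(state, h):
    """One RK4 step of the geodesic equations q''^i = -Gamma^i_{jk} q'^j q'^k
    on the unit sphere, coordinates q = (theta, phi), metric diag(1, sin^2 theta)."""
    def rhs(s):
        th, ph, vth, vph = s
        ath = np.sin(th) * np.cos(th) * vph**2          # -Gamma^th_{ph ph} term
        aph = -2.0 * (np.cos(th) / np.sin(th)) * vth * vph  # -2 Gamma^ph_{th ph}
        return np.array([vth, vph, ath, aph])
    k1 = rhs(state); k2 = rhs(state + h/2*k1)
    k3 = rhs(state + h/2*k2); k4 = rhs(state + h*k3)
    return state + h/6*(k1 + 2*k2 + 2*k3 + k4)

state = np.array([np.pi/2, 0.0, 0.0, 1.0])   # start on the equator, moving in phi
for _ in range(1000):
    state = geodesic_step(state, 0.01)
# after t = 10 the geodesic is still on the equator (theta = pi/2), phi ~ 10
```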
Alternatively, a description of the extrema of the Hamilton’s action (6) can be obtained using the Eisenhart metric [28] on an enlarged LSF space–time manifold (given by {q⁰ ≡ t, q¹, …, q^N} plus one real coordinate q^{N+1}), whose arc–length is

    ds² = −2V(q) (dq⁰)² + g_ij dq^i dq^j + 2 dq⁰ dq^{N+1}.    (9)

The manifold has a Lorentzian structure [35] and the dynamical trajectories are those geodesics satisfying the condition ds² = C dt², where C is a positive constant. In this geometrical framework, the instability of the trajectories is the instability of the geodesics, and it is completely determined by the curvature properties of the LSF–manifold according to the Jacobi equation of geodesic deviation [32, 33]

    D²J^i/ds² + R^i_jkm (dq^j/ds) J^k (dq^m/ds) = 0,    (10)

whose solution J, usually called the Jacobi variation field, locally measures the distance between nearby geodesics; D/ds stands for the covariant derivative along a geodesic and R^i_jkm are the components of the Riemann curvature tensor of the LSF–manifold.
Using the Eisenhart metric (9), the relevant part of the Jacobi equation (10) is given by the tangent dynamics equation [29, 21]

    d²J^i/dt² + (∂²V/∂q^i ∂q^j) J^j = 0,   (i = 1, …, N),    (11)

where the only nonvanishing components of the curvature tensor of the LSF–manifold are

    R_0i0j = ∂²V/∂q^i ∂q^j.
The tangent dynamics equation (11) is commonly used to define Lyapunov exponents in dynamical systems given by the Riemannian action (6) and Hamiltonian (7), using the formula [30]

    λ₁ = lim_{t→∞} (1/2t) log [ Σ_{i=1}^{N} (J_i²(t) + J̇_i²(t)) / Σ_{i=1}^{N} (J_i²(0) + J̇_i²(0)) ].    (12)

Lyapunov exponents measure the strength of dynamical chaos.
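A minimal sketch of formula (12) using the tangent dynamics equation (11), for a single degree of freedom (our own toy curvatures, not LSF quantities): near a stable minimum (∂²V = +1) the exponent vanishes, while near a saddle (∂²V = −1, one unstable direction) it tends to 1:

```python
import numpy as np

def lyapunov(hessV, T=20.0, h=1e-3):
    """Integrate the tangent dynamics (11), J'' = -hessV * J, then apply (12)."""
    def rhs(s):
        return np.array([s[1], -hessV * s[0]])
    s = np.array([1.0, 0.0])             # (J(0), Jdot(0)), initial norm 1
    for _ in range(int(T / h)):          # RK4 integration
        k1 = rhs(s); k2 = rhs(s + h/2*k1)
        k3 = rhs(s + h/2*k2); k4 = rhs(s + h*k3)
        s = s + h/6*(k1 + 2*k2 + 2*k3 + k4)
    return np.log(s[0]**2 + s[1]**2) / (2 * T)

lam_stable = lyapunov(hessV=+1.0)   # harmonic well:   lambda ~ 0
lam_saddle = lyapunov(hessV=-1.0)   # unstable saddle: lambda ~ 1
```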
Now, to relate these results to topological phase transitions within the LSF–manifold, recall that any two high–dimensional manifolds have the same topology if they can be continuously and differentiably deformed into one another, that is, if they are diffeomorphic. Thus by topology change the ‘loss of diffeomorphicity’ is meant [35]. In this respect, the so–called topological theorem [22] says that non–analyticity is the ‘shadow’ of a more fundamental phenomenon occurring in the system’s configuration manifold (in our case the LSF–manifold): a topology change within the family of equipotential hypersurfaces

    Σ_v := {(q¹, …, q^N) ∈ ℝ^N : V(q¹, …, q^N) = v},

where V and q^i are the microscopic interaction potential and coordinates, respectively. This topological approach to PTs stems from the numerical study of the dynamical counterpart of phase transitions, and precisely from the observation of discontinuous or cuspy patterns displayed by the largest Lyapunov exponent λ₁ at the transition energy [30]. Lyapunov exponents cannot be measured in laboratory experiments, at variance with thermodynamic observables; thus, being genuine dynamical observables, they can only be estimated in numerical simulations of the microscopic dynamics. If there are critical points of V in configuration space, that is, points q_c such that ∇V(q)|_{q=q_c} = 0, then according to the Morse Lemma [31], in the neighborhood of any critical point q_c there always exists a coordinate system in which
    V(q) = V(q_c) − q₁² − … − q_k² + q_{k+1}² + … + q_N²,    (13)

where k is the index of the critical point, i.e., the number of negative eigenvalues of the Hessian of the potential energy V. In the neighborhood of a critical point of the LSF–manifold, (13) yields

    ∂²V/∂q_i ∂q_j = ± δ_ij,

which gives k unstable directions which contribute to the exponential growth of the norm of the tangent vector J [30].
This means that the strength of dynamical chaos within the individual’s LSF–manifold, measured by the largest Lyapunov exponent λ₁ given by (12), is affected by the existence of critical points q_c of the potential energy V(q). However, as V(q) is bounded below, it is a good Morse function, with no vanishing eigenvalues of its Hessian matrix. According to Morse theory [31], the existence of critical points of V is associated with topology changes of the hypersurfaces {Σ_v}_{v∈ℝ}.
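The Morse index k in (13) is simply the count of negative Hessian eigenvalues at a critical point; a short check on two toy potentials (our own examples, not LSF potentials):

```python
import numpy as np

def morse_index(hessian):
    """Morse index = number of negative eigenvalues of the Hessian
    of the potential at a critical point (cf. eq. (13))."""
    return int(np.sum(np.linalg.eigvalsh(hessian) < 0))

# V(x, y) = x^2 + y^2 has a minimum at the origin (index 0);
# V(x, y) = x^2 - y^2 has a saddle there (index 1: one unstable direction)
idx_min    = morse_index(np.diag([2.0, 2.0]))
idx_saddle = morse_index(np.diag([2.0, -2.0]))
```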
More precisely, let V_N(q¹, …, q^N) : ℝ^N → ℝ be a smooth, bounded from below, finite–range and confining potential.¹ (¹ These requirements for V are fulfilled by standard interatomic and intermolecular interaction potentials, as well as by classical spin potentials.) Denote by Σ_v := V_N⁻¹(v), v ∈ ℝ, its level sets, or equipotential hypersurfaces, in the LSF–manifold. Then let v̄ = v/N be the potential energy per degree of freedom. If there exists N₀, and if for any pair of values v̄ and v̄′ belonging to a given interval I_v̄ = [v̄₀, v̄₁] and for any N > N₀ the hypersurfaces Σ_{Nv̄} and Σ_{Nv̄′} are diffeomorphic, then the sequence of the Helmholtz free energies {F_N(β)}_{N∈ℕ} – where β = 1/T (T is the temperature) and β ∈ I_β = (β(v̄₀), β(v̄₁)) – is uniformly convergent at least in C²(I_β) [the space of twice differentiable functions in the interval I_β], so that lim_{N→∞} F_N ∈ C²(I_β) and neither first– nor second–order phase transitions can occur in the (inverse) temperature interval (β(v̄₀), β(v̄₁)), where the inverse temperature is defined as [22, 35]

    β(v̄) = ∂S_N^(−)(v̄)/∂v̄,   with   S_N^(−)(v̄) = N⁻¹ log ∫_{V(q) ≤ Nv̄} d^N q,

which is one of the possible definitions of the microcanonical configurational entropy. The intensive variable v̄ has been introduced to ease the comparison between quantities computed at different N values.
This theorem means that a topology change of the Σ_v at some v_c is a necessary condition for a phase transition to take place at the corresponding energy value. The topology changes implied here are those described within the framework of Morse theory through ‘attachment of handles’ [31] to the LSF–manifold.
In the LSF path–integral language [1], we can say that suitable topology changes of equipotential submanifolds of the individual’s LSF–manifold can entail thermodynamic–like phase transitions [37, 38, 39].
The statistical behavior of the LSF–(loco)motion system (6) with the standard Hamiltonian (7) is encompassed, in the canonical ensemble, by its partition function, given by the phase–space path integral [33]

    Z_N(β) = ∫ D[p] D[q] e^{−βH(p,q)},    (14)

where we have used the shorthand notation

    ∫ D[p] D[q] ≡ ∫ ∏_{i=1}^{N} dp_i dq^i.

The phase–space path integral (14) can be calculated as the partition function [36],

    Z_N(β) = (π/β)^{N/2} ∫ ∏_{i=1}^{N} dq^i e^{−βV(q)} = (π/β)^{N/2} ∫₀^∞ dv e^{−βv} ∫_{Σ_v} dσ/‖∇V‖,    (15)

where the last term is written using the so–called co–area formula [20], and v labels the equipotential hypersurfaces Σ_v of the LSF–manifold,

    Σ_v := {(q¹, …, q^N) ∈ ℝ^N : V(q¹, …, q^N) = v}.

Equation (15) shows that the relevant statistical information is contained in the canonical configurational partition function

    Z_N^C = ∫ ∏_{i=1}^{N} dq^i e^{−βV(q)}.

Note that Z_N^C is decomposed, in the last term of (15), into an infinite summation of geometric integrals,

    ∫_{Σ_v} dσ/‖∇V‖,

defined on the {Σ_v}_{v∈ℝ}. Once the microscopic interaction potential V(q) is given, the configuration space of the system is automatically foliated into the family {Σ_v}_{v∈ℝ} of these equipotential hypersurfaces. Now, from standard statistical mechanical arguments we know that, at any given value of the inverse temperature β, the larger the number N, the closer to Σ_v ≡ Σ_{u_β} are the microstates that significantly contribute to the averages, computed through Z_N(β), of thermodynamic observables. The hypersurface Σ_{u_β} is the one associated with
the average potential energy, u_β = ⟨V⟩_β, computed at a given β. Thus, at any β, if N is very large the effective support of the canonical measure shrinks very close to a single Σ_v = Σ_{u_β}. Hence, the basic origin of a phase transition lies in a suitable topology change of the Σ_v, occurring at some v_c [36]. This topology change induces the singular behavior of the thermodynamic observables at a phase transition. Since it is conjectured that the counterpart of a phase transition is a breaking of diffeomorphicity among the surfaces Σ_v, it is appropriate to choose a diffeomorphism invariant to probe if and how the topology of the Σ_v changes as a function of v. Fortunately, such a topological invariant exists: the Euler characteristic χ(Σ_v) of the LSF–manifold, defined by [32, 33]
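The shrinking of the canonical measure onto a single Σ_{u_β} as N grows can be seen in a few lines of Monte Carlo (our toy choice V(q) = Σ q_i²/2 at β = 1, picked only because it can be sampled exactly):

```python
import numpy as np

rng = np.random.default_rng(1)

def potential_per_dof(N, samples=4000):
    """Sample V/N under the canonical measure e^{-V(q)} for V = sum q_i^2 / 2
    (at beta = 1 this measure is an exact standard Gaussian in each q_i)."""
    q = rng.normal(size=(samples, N))
    v = 0.5 * np.sum(q**2, axis=1) / N
    return v.mean(), v.std()

mean10, spread10 = potential_per_dof(10)
mean1000, spread1000 = potential_per_dof(1000)
# the mean energy per dof stays near 0.5, while its spread shrinks ~ 1/sqrt(N):
# the measure concentrates on an ever-thinner shell of equipotential surfaces
```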
    χ(Σ_v) = Σ_{k=0}^{dim Σ_v} (−1)^k b_k(Σ_v),    (16)

where the Betti numbers b_k(Σ_v) are diffeomorphism invariants.² (² The Betti numbers b_k are the dimensions of the de Rham cohomology vector spaces; therefore the b_k are integers.) This homological formula can be simplified by the use of the Gauss–Bonnet–Hopf theorem, which relates χ with the total Gauss–Kronecker curvature K_G of the LSF–manifold,

    χ(Σ_v) = γ ∫_{Σ_v} K_G dσ,   with   γ = 2 / Vol(Sⁿ),    (17)

where

    dσ = √(det g) dx¹ dx² ⋯ dxⁿ

is the invariant volume measure of the LSF–manifold and g is the determinant of the LSF metric tensor g_ij. For technical details of this topological approach, see [34].
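Formula (17) can be sanity–checked numerically on two familiar surfaces (n = 2, where γ = 1/(2π)); the sphere gives χ = 2 and the torus χ = 0, independently of the radii chosen here for illustration:

```python
import numpy as np

def gauss_bonnet(K, dA, th_max, n=2000):
    """chi = (1/2pi) * \int K dA for a surface of revolution parameterized by
    (theta, phi); the trivial phi integral contributes a 2*pi factor that
    cancels the 1/(2*pi) prefactor, leaving a 1D midpoint-rule sum in theta."""
    th = (np.arange(n) + 0.5) * (th_max / n)
    return np.sum(K(th) * dA(th)) * (th_max / n)

# Unit sphere: K = 1, area density (per d_theta d_phi) = sin(theta)
chi_sphere = gauss_bonnet(lambda t: np.ones_like(t), np.sin, np.pi)

# Torus (radii R = 2, r = 1): K = cos(t)/(r(R + r cos t)), dA = r(R + r cos t)
R, r = 2.0, 1.0
chi_torus = gauss_bonnet(lambda t: np.cos(t) / (r * (R + r * np.cos(t))),
                         lambda t: r * (R + r * np.cos(t)), 2 * np.pi)
```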
The domain of validity of the ‘quantum’ is not restricted to the microscopic world [19]. There are macroscopic features of classically behaving systems, which cannot be explained without recourse to the quantum dynamics. This field theoretic model leads to the view of the phase transition as a condensation that is comparable to the formation of fog and rain drops from water vapor, and that might serve to model both the gamma and beta phase transitions. According to such a model, the production of activity with long–range correlation in the brain takes place through the mechanism of spontaneous breakdown of symmetry (SBS), which has for decades been shown to describe long–range correlation in condensed matter physics. The adoption of such a field theoretic approach enables modelling of the whole cerebral hemisphere and its hierarchy of components down to the atomic level as a fully integrated macroscopic quantum system, namely as a macroscopic system which is a quantum system not in the trivial sense that it is made, like all existing matter, by quantum components such as atoms and molecules, but in the sense that some of its macroscopic properties can best be described with recourse to quantum dynamics (see [16] and references therein).
Phase transitions can also be associated with autonomous robot competence levels, as informal specifications of desired classes of behaviors for robots over all environments they will encounter, as described by Brooks’ subsumption architecture approach [45, 46, 47]. The distributed network of augmented finite–state machines can exist in different phases or modalities of their state–space variables, which determine the system’s intrinsic behavior. The phase transition represented by this approach is triggered by either internal (a set–point) or external (a command) control stimuli, such as a command to transition from a sleep mode to an awake mode, or from walking to running.
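A phase transition of this kind can be sketched as a mode switch in a finite–state machine (our minimal toy, with made-up mode and stimulus names):

```python
# allowed mode transitions, triggered by internal set-points or external commands
TRANSITIONS = {
    ("sleep",   "wake"): "awake",
    ("awake",   "walk"): "walking",
    ("walking", "run"):  "running",
    ("running", "tire"): "walking",   # internal set-point: fatigue
    ("walking", "rest"): "awake",
}

def step(mode, stimulus):
    """Return the new behavioral phase, or stay put if no transition fires."""
    return TRANSITIONS.get((mode, stimulus), mode)

mode = "sleep"
for stimulus in ["wake", "walk", "run", "tire"]:
    mode = step(mode, stimulus)
# mode is now "walking"
```

Note that an unreachable command (e.g., "run" while asleep) leaves the phase unchanged, mirroring the fact that transitions fire only from compatible states.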
3 Modelling Human Joint Action
Cognitive neuroscience investigations, including fMRI studies of human coaction, suggest that cognitive and neural processes supporting coaction include joint attention, action observation, task sharing, and action coordination [40, 41, 42, 43]. For example, when two actors are given a joint control task (e.g., tracking a moving target on screen) and potentially conflicting controls (e.g., one person in charge of acceleration, the other – deceleration), their joint performance depends on how well they can anticipate each other’s actions. In particular, better coordination is achieved when individuals receive real–time feedback about the timing of each other’s actions [43].
To model the dynamics of the two–actor joint action, we propose to associate each of the actors with an n–dimensional (nD) Riemannian LSF–manifold – M_α and M_β, say – that is, a set of their own time–dependent trajectories, q_α(t_α) ∈ M_α and q_β(t_β) ∈ M_β, respectively. Their associated tangent bundles contain their individual nD (loco)motion velocities, q̇_α(t_α) and q̇_β(t_β). Further, following the general formalism of [1], outlined in the introduction, we use the modelling machinery consisting of: (i) Adaptive joint action at the top–master level, describing the externally–appearing deterministic, continuous and smooth dynamics, and (ii) Corresponding adaptive path integral (22) at the bottom–slave level, describing a wildly fluctuating dynamics including both continuous trajectories and Markov chains. This lower–level joint dynamics can be further discretized into a partition function of the corresponding statistical dynamics.
In particular, by extending and adapting classical Wheeler–Feynman action–at–a–distance electrodynamics [44] and applying it to human co–action, we propose a two–term joint action:
(18) 
The first term in (18) represents the potential energy of the cognitive/motivational interaction between the two agents α and β.³ (³ Although, formally, this term contains cognitive velocities, it still represents ‘potential energy’ from the physical point of view.) It is a double integral over a delta function of the square of the interval between two points on the paths in their Life–Spaces; thus, interaction occurs only when this interval, representing the motivational cognitive distance between the two agents, vanishes. Note that the cognitive (loco)motions of the two agents, q_α(t_α) and q_β(t_β), generally occur at different times t_α and t_β, unless t_α = t_β, when cognitive synchronization occurs.
The second term in (18) represents the kinetic energy of the physical interaction. Namely, when the cognitive synchronization in the first term takes place, the second term of physical kinetic energy is activated in the common manifold, which is one of the agents’ Life Spaces, say M_α.
Conversely, if we have a need to represent the coaction of three actors, say α, β and γ (e.g., α in charge of acceleration, β of deceleration and γ of steering), we can associate each of them with an nD Riemannian Life–Space manifold, M_α, M_β and M_γ, respectively, with the corresponding tangent bundles containing their individual (loco)motion velocities, q̇_α(t_α), q̇_β(t_β) and q̇_γ(t_γ). Then, instead of (18) we have
(19)  
Due to an intrinsic chaotic coupling, the three–actor (or, more generally, n–actor with n ≥ 3) joint action (19) has a considerably more complicated geometrical structure than the bilateral co–action (18).⁴ (⁴ Recall that the necessary condition for chaos in continuous temporal or spatio–temporal systems is to have three variables with nonlinear couplings between them.) It actually happens in the common Finsler manifold M, parameterized by the local joint coordinates dependent on the common time t. Geometry of the joint manifold M is defined by the Finsler metric function F, defined by
(20) 
and the Finsler tensor g_ij(x, ẋ), defined by (see [32, 33])

    g_ij(x, ẋ) = ½ ∂²F²(x, ẋ) / ∂ẋ^i ∂ẋ^j.    (21)

From the Finsler definitions (20)–(21), it follows that the partial, two–actor interaction manifolds have Riemannian structures with the corresponding interaction kinetic energies,
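Definition (21) can be checked numerically: in the Riemannian special case F(x, ẋ) = √(g_ij ẋ^i ẋ^j), half the velocity–Hessian of F² returns g_ij itself (the diagonal g below is an arbitrary choice for the check):

```python
import numpy as np

def finsler_tensor(F, xdot, eps=1e-5):
    """g_ij = (1/2) d^2 F^2 / d(xdot^i) d(xdot^j), via central differences."""
    n = len(xdot)
    F2 = lambda v: F(v) ** 2
    g = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei, ej = np.eye(n)[i] * eps, np.eye(n)[j] * eps
            g[i, j] = (F2(xdot + ei + ej) - F2(xdot + ei - ej)
                       - F2(xdot - ei + ej) + F2(xdot - ei - ej)) / (8 * eps**2)
    return g

g_true = np.diag([2.0, 3.0])
F = lambda v: np.sqrt(v @ g_true @ v)       # Riemannian metric function
g_num = finsler_tensor(F, np.array([1.0, 0.5]))
```

In a genuinely Finslerian case the result would depend on the velocity ẋ, which is exactly what distinguishes the n ≥ 3 joint action from the bilateral, Riemannian one.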
At the slave level, the adaptive path integral (see [1]), representing an ∞–dimensional neural network, corresponding to the adaptive bilateral joint action (18), reads
(22) 
where the Lebesgue integration is performed over all continuous paths of the two agents, while summation is performed over all associated discrete Markov fluctuations and jumps. The symbolic differential in the path integral (22) represents an adaptive path measure, defined as a weighted product
(23) 
Similarly, in case of the triple joint action, the adaptive path integral reads,
(24) 
with the adaptive path measure defined by
(25) 
4 Discussion
This paper has developed an adaptive path integral approach to modelling topological phase transitions, chaos and joint action in the LSF–manifold. Traditional neural network approaches are limited in the classes of functions they can represent.⁵ (⁵ Here we are talking about functions in an extensional rather than merely intensional sense; that is, a function can be read as input/output behavior [54, 55, 56, 57].) This limitation has been attributed to their low dimensionality (the largest neural networks are limited in the number of dimensions they can handle [53]). The proposed path integral approach represents a new family of function–representation methods, which potentially offers a basis for a fundamentally more expansive solution.
This new family of function–representation methods is now capable of representing the input/output behavior of more than one actor. However, as we add the second and subsequent actors to the model, the requirements for the rigorous geometrical representation of their respective LSFs become nontrivial. For a single actor, or for a two–actor co–action, Riemannian geometry was sufficient, but it becomes insufficient for modelling the n–actor (with n ≥ 3) joint action, due to an intrinsic chaotic coupling between the individual actors’ LSFs. To model an n–actor joint LSF, we have to use Finsler geometry, which is a generalization of the Riemannian one. This progression may seem trivial, both from the standard psychological point of view and from the computational point of view, but it is not trivial from the geometrical perspective.
The robustness of biological motor control systems in handling excess degrees of freedom has been attributed to a combination of tight hierarchical central planning and multiple levels of sensory feedback self–regulation that are relatively autonomous in their operation [50]. These two processes are connected through a top–down process of action script delegation and bottom–up emergency escalation mechanisms. There is a complex interplay between the continuous sensory feedback and motion/action planning to achieve effective operation in uncertain environments, such as movement on uneven terrain cluttered with obstacles.
Complementing Bernstein’s motor control principles is Brooks’ concept of computational subsumption architectures [45, 47], which provides a method for structuring reactive systems from the bottom up using layered sets of behaviors. Each layer implements a particular goal of the agent, which subsumes that of the underlying layers. Similar architectures have been proposed to account for the mechanism of cognitive and motor working memory, sequence learning and performance (see, e.g.,[11]). According to [45, 47], a robot’s lowest layer could be “avoid an object”, on top of it would be the layer “wander around”, which in turn lies under “explore the world”. The top layer in such a case could represent the ultimate goal of “creating a map”. In this configuration, the lowest layers can work as reflexive mechanisms, while the higher layers contain control logic implementing more abstract goals.
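The layering just described can be sketched as simple priority arbitration (our toy rendering, with hypothetical percept keys): the reflexive bottom layer seizes control when triggered, otherwise the most abstract applicable layer drives:

```python
def subsumption_step(percept):
    """Pick the active behavior. The bottom layer is a reflex that overrides
    everything; when it is quiet, higher layers subsume the ones below."""
    if percept.get("obstacle_near"):         # layer 0: avoid an object
        return "turn_away"
    if percept.get("map_incomplete"):        # top layer: create a map
        return "record_and_plan"
    if percept.get("frontier_visible"):      # layer 2: explore the world
        return "head_to_frontier"
    return "wander"                          # layer 1: wander around

actions = [
    subsumption_step({"obstacle_near": True, "map_incomplete": True}),
    subsumption_step({"frontier_visible": True}),
    subsumption_step({}),
]
# -> obstacle avoidance wins even while mapping; exploration and wandering
#    take over only when nothing more urgent or more abstract applies
```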
The substrate for this architecture comprises a network of finite state machines augmented with timing elements. A subsumption compiler compiles augmented finite state machine descriptions into a special–purpose scheduler to simulate parallelism and a set of finite state machine simulation routines; the resulting network realizes the desired overall behavior function.
The Bernstein weights, or Brooks nodes, in (23) are updated by the Bernstein loop during the joint transition process, according to one of the two standard neural learning schemes, in which the micro–time level is traversed in discrete steps:

A self–organized, unsupervised (e.g., Hebbian–like [51]) learning rule:
(26) where σ and η denote signal and noise, respectively, while the superscripts d and a denote desired and achieved micro–states, respectively; or

A certain form of a supervised gradient descent learning:
(27) where the step size, or learning rate, is a small constant, and the gradient of the ‘performance hyper–surface’ is evaluated at the current iteration.
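The two update schemes above can be illustrated numerically (a generic sketch: the weight matrix w, input x, learning rate eta and squared–error performance surface are our own illustrative choices, not the paper's notation):

```python
import numpy as np

def hebbian_update(w, pre, post, decay=0.01):
    """Unsupervised, Hebbian-like rule: weights grow with the correlation
    between pre- and post-synaptic activity, minus a small decay term."""
    return w + np.outer(post, pre) - decay * w

def gradient_descent_update(w, x, target, eta=0.1):
    """Supervised rule: step against the gradient of the squared-error
    'performance hyper-surface' J(w) = 0.5 * ||w @ x - target||^2."""
    error = w @ x - target          # achieved minus desired output
    return w - eta * np.outer(error, x)

rng = np.random.default_rng(0)
x = np.array([1.0, 0.5, -0.5])      # input (pre-synaptic) activity
target = np.array([1.0, 0.0])       # desired (post-synaptic) output

w = rng.normal(size=(2, 3))
for _ in range(200):                # repeated descent drives the error to zero
    w = gradient_descent_update(w, x, target)
print(np.allclose(w @ x, target))   # True: the desired micro-state is reached

w_h = hebbian_update(np.zeros((2, 3)), pre=x, post=target)
print(w_h[0])                       # first row stores the x-target correlation
```

Note the structural difference: the Hebbian rule needs only locally available activity, while the gradient rule needs an explicit desired/achieved error signal.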
Both Hebbian and supervised learning schemes can be applied here. (Note that we could also use a reward–based, reinforcement learning rule [52], in which the system learns its optimal policy.)
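The reward–based alternative mentioned in the footnote can be sketched as tabular Q–learning in the sense of [52]; the four–state chain, its reward and the hyper–parameters below are our own toy assumptions:

```python
import numpy as np

# Toy chain of four states; action 0 = left, action 1 = right.
# Reaching the rightmost state yields reward 1, all other steps yield 0.
n_states, n_actions = 4, 2
alpha, gamma, eps = 0.5, 0.9, 0.1   # learning rate, discount, exploration
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(1)

def env_step(s, a):
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    return s2, float(s2 == n_states - 1)

for _ in range(500):                          # training episodes
    s = 0
    while s != n_states - 1:
        # eps-greedy action selection, breaking exact ties randomly
        if rng.random() < eps or Q[s, 0] == Q[s, 1]:
            a = int(rng.integers(n_actions))
        else:
            a = int(np.argmax(Q[s]))
        s2, r = env_step(s, a)
        # Q-learning: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

print(Q.argmax(axis=1)[:3])   # learned policy: move right toward the reward
```

The system thus learns its optimal policy purely from delayed reward, without any desired micro–state being supplied.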
Some specific problems that Brooks poses in [47] include: (i) how to combine many (e.g., more than a dozen) behavior generating modules in a way which lets them be productive and cooperative; (ii) how to automate the building of interaction interfaces between behavior generating modules, so that larger (and hence more competent) systems can be built; and (iii) how to automate the construction of individual behavior generating modules, or even to automate their modification. An aggregation of phase–transition–related order parameters within individual actors needs to be triggered (either by an internal or an external control mechanism) into collective alignment for joint action to be performed. The ‘guiding’ forces help to align, or fine–tune, the individual LSFs so that useful co–action outcomes can emerge. The combined LSF–geometry approach proposed in this paper should provide some useful answers to the above questions.
Here we remark that collective phase transitions, which involve more coupled degrees of freedom than individual ones, necessitate more stringent constraints to avoid or control higher–dimensional chaos. Sophisticated chaos–control techniques, including constraining contextual boundaries – choosing a target subspace of the joint LSF–manifold – and guiding force–fields, need to be defined to allow desired collective behaviors to emerge from both chaotic and non–chaotic sets of possible initial evolution alternatives.
5 Conclusion
Extending the LSF model to incorporate the notions of metastability, phase transitions and embedded geometrical chaos has enabled the representation of increased complexity in goal–directed action, including the capability to represent joint action by two or more co–actors. This capability remains consistent with modern neuroscience theorizing linking macro–behavioral metastability and phase transitions to microscopic–level cortical neurodynamics.
There is a degree of correspondence between phase transition mechanisms for cognitive performance in humans and transitions between stable behavior states and competency levels in autonomous robots. The approach developed in this paper offers a theoretical framework to integrate observations and models of both individual and collective robot behaviors and competencies, capable of coping with the increased complexity of the real world. This framework can both guide future substantive empirical work into collective robot behaviors and be validated by it.
The new model developed in this paper offers substantial improvements in the geometrical properties of multiple–actor systems, due to the chaotic coupling between the actors. We have also discussed how the proposed path integral represents a new family of function representation techniques that may expand the range of function types currently afforded by standard neural–network models. Specifically, we are interested here in biologically plausible function representation as a means for characterizing the input/output behavior of multi–actor systems, and thus regard function representation in an extensional sense. The full realisation of these possibilities in practical applications is a subject of ongoing research.
References
 [1] V. Ivancevic, E. Aidman, Life–space foam: A medium for motivational and cognitive dynamics. Physica A 382, 616–630, (2007)
 [2] K. Lewin, Field Theory in Social Science. Univ. Chicago Press, Chicago, (1951)
 [3] K. Lewin, Resolving Social Conflicts, and, Field Theory in Social Science. Am. Psych. Assoc., Washington, (1997)
 [4] M. Gold, A Kurt Lewin Reader, the Complete Social Scientist, Am. Psych. Assoc., Washington, (1999)
 [5] Werner, G., Metastability, criticality and phase transitions in brain and its models. Biosyst. 90(2), 496–508, (2007)
 [6] Kelso, J.A.S., Bressler, S.L., Buchanan, S., DeGuzman, G.C., Ding, M., Fuchs, A., Holroyd, T., Phase transition in brain and human behavior. Phys. Lett. A 169, 134–144, (1992)
 [7] W. Erlhagen, E. Bicho, The dynamic neural field approach to cognitive robotics. J. Neu. Eng. 3, R36–R54, (2006)
 [8] Bak, P., How Nature Works: The Science of Self–Organized Criticality. Copernicus, New York, (1996)
 [9] Freeman, W.J., A field–theoretic approach to understanding scale–free neocortical dynamics. Biol. Cybern. 92, 350–359, (2005)
 [10] Chialvo, D.R., Critical brain networks. Physica A 340, 756–765, (2004)
 [11] Grossberg, S., Pearson, L.R., Laminar Cortical Dynamics of Cognitive and Motor Working Memory, Sequence Learning and Performance: Toward a Unified Theory of How the Cerebral Cortex Works. Tech. Rep. CAS/CNS-TR-08-002, Boston Univ.; Psych. Review, in press, (2008)
 [12] Amari, S., Dynamics of pattern formation in lateral–inhibition type neural fields. Biol. Cybern. 27, 77–87, (1977)
 [13] Schöner, G., Dynamical Systems Approaches to Cognition. In: Cambridge Handbook of Computational Cognitive Modeling. Cambridge University Press. R. Sun (ed), (2007)
 [14] Freeman, W.J., Mass Action in the Nervous System. Academic Press, New York, (1975/2004)
 [15] Freeman, W.J., Neurodynamics: An Exploration of Mesoscopic Brain Dynamics. Springer–Verlag, London, (2000)
 [16] Freeman, W.J., Vitiello, G., Nonlinear brain dynamics as macroscopic manifestation of underlying many–body field dynamics. Phys. Life Rev. 3(2), 93–118, (2006)
 [17] Freeman, W.J., Origin, structure, and role of background EEG activity. Part 2. Amplitude. Clin. Neurophysiol. 115, 2089–2107, (2004)
 [18] Freeman, W.J., Baird, B., Effects of applied electric current fields on cortical neural activity. In: Schwartz, E. (ed.) Computational Neuroscience. Plenum, New York, 274–287, (1989)
 [19] Umezawa, H., Advanced field theory: micro, macro and thermal concepts. Am. Inst. Phys. New York, (1993)
 [20] H. Federer, Geometric Measure Theory. Springer, New York, (1969)
 [21] L. Caiani, L. Casetti, C. Clementi, M. Pettini, Geometry of Dynamics, Lyapunov Exponents, and Phase Transitions. Phys. Rev. Lett. 79, 4361–4364, (1997)
 [22] R. Franzosi, M. Pettini, Theorem on the origin of Phase Transitions. Phys. Rev. Lett. 92(6), 060601, (2004)
 [23] C.N. Yang, T.D. Lee, Statistical Theory of Equations of State and Phase Transitions. I. Theory of Condensation. Phys. Rev. 87, 404, (1952)
 [24] H.O. Georgii, Gibbs Measures and Phase Transitions. Walter de Gruyter, Berlin, (1988)
 [25] D. Ruelle, Thermodynamic Formalism. Encyclopaedia of Mathematics and its Applications, Addison–Wesley, New York, (1978)
 [26] R. Abraham, J.E. Marsden, Foundations of Mechanics. Addison–Wesley, Redwood City, (1987)
 [27] N.S. Krylov, Works on the foundations of statistical mechanics. Princeton Univ. Press, Princeton, (1979)
 [28] L.P. Eisenhart, Dynamical trajectories and geodesics. Math. Ann. 30, 591–606, (1929)
 [29] L. Casetti, C. Clementi, M. Pettini, Riemannian theory of Hamiltonian chaos and Lyapunov exponents. Phys. Rev. E 54, 5969 (1996)
 [30] L. Casetti, M. Pettini, E.G.D. Cohen, Geometric Approach to Hamiltonian Dynamics and Statistical Mechanics. Phys. Rep. 337, 237–341, (2000)
 [31] M.W. Hirsch, Differential Topology. Springer, New York, (1976)
 [32] Ivancevic, V., Ivancevic, T., Geometrical Dynamics of Complex Systems. Springer, Series: Microprocessor–Based and Intelligent Systems Engineering, Vol. 31, (2006)
 [33] V. Ivancevic, T. Ivancevic, Applied Differential Geometry: A Modern Introduction. World Scientific, Series: Mathematics, (2007)
 [34] J.A. Thorpe, Elementary Topics in Differential Geometry. (SpringerVerlag, New York, 1979).
 [35] M. Pettini, Geometry and Topology in Hamiltonian Dynamics and Statistical Mechanics. Springer, New York, (2007)
 [36] R. Franzosi, M. Pettini, L. Spinelli, Topology and phase transitions: a paradigmatic evidence. Phys. Rev. Lett. 84, 2774–2777, (2000)
 [37] Haken, H., Synergetics: An Introduction (3rd ed) Springer, Berlin, (1983)
 [38] Haken, H., Advanced Synergetics: Instability Hierarchies of Self–Organizing Systems and Devices (3rd ed.) Springer, Berlin, (1993)
 [39] Haken, H., Principles of Brain Functioning: A Synergetic Approach to Brain Activity, Behavior and Cognition, Springer, Berlin, (1996)
 [40] L. Fogassi, P.F. Ferrari, B. Gesierich, S. Rozzi, F. Chersi, G. Rizzolatti, Parietal lobe: From action organization to intention understanding. Science 308, 662–667, (2005)
 [41] G. Knoblich, S. Jordan, Action coordination in individuals and groups: Learning anticipatory control. J. Exp. Psych.: Learning, Memory & Cognition 29, 1006–1016, (2003)
 [42] R.D. Newman-Norlund, M.L. Noordzij, R.G.J. Meulenbroek, H. Bekkering, Exploring the brain basis of joint action: Coordination of actions, goals and intentions. Soc. Neurosci. 2(1), 48–65, (2007)
 [43] N. Sebanz, H. Bekkering, G. Knoblich, Joint action: bodies and minds moving together. Tr. Cog. Sci. 10(2), 70–76, (2006)
 [44] J.A. Wheeler, R.P. Feynman, Classical Electrodynamics in Terms of Direct Interparticle Action. Rev. Mod. Phys. 21, 425–433, (1949)
 [45] R.A. Brooks, A Robust Layered Control System for a Mobile Robot. IEEE Trans. Rob. Aut. 2(1), 14–23, (1986)
 [46] R.A. Brooks, A robot that walks: Emergent behavior from a carefully evolved network. Neural Computation 1(2), 253–262, (1989)
 [47] R.A. Brooks, Elephants Don’t Play Chess. Rob. Aut. Sys. 6, 3–15, (1990)
 [48] N.A. Bernstein, The Coordination and Regulation of Movements. Pergamon, London, (1967)
 [49] N.A. Bernstein, Some emergent problems of the regulation of motor acts. In: H.T.A.Whiting (Ed.) Human Motor Actions: Bernstein Reassessed, 343–358. North Holland, Amsterdam, (1982)
 [50] N.A. Bernstein, M.L. Latash, M.T. Turvey (Eds), Dexterity and its development. Hillsdale, NJ, England: Lawrence Erlbaum Associates, (1996)
 [51] D.O. Hebb, The Organization of Behavior, Wiley, New York, (1949)
 [52] R.S. Sutton, A.G. Barto, Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, (1998)
 [53] Izhikevich, E.M., Edelman, G.M., Large–Scale Model of Mammalian Thalamocortical Systems. PNAS 105, 3593–3598, (2008)
 [54] Barendregt, H., The Lambda Calculus: Its syntax and semantics. Studies in Logic and the Foundations of Mathematics, North Holland, Amsterdam, (1984)
 [55] van Benthem, J., Reflections on epistemic logic. Logique & Analyse 133–134, 5–14, (1991)
 [56] Forster, T., Logic, Induction and the Theory of Sets. London Mathematical Society Student Texts 56, Cambridge Univ. Press, (2003)
 [57] Hankin, C., An introduction to Lambda Calculi for Computer Scientists, College Pub. (2004)