A new Horizon Europe project: CoreSense

We have signed a grant agreement with the European Commission to coordinate a new research project on intelligent robotics. The CoreSense project will develop a new hybrid cognitive architecture to make robots capable of understanding and being aware of what is going on around them. The project will start on October 1, 2022, will span four years (2022-2026), and brings together six partners across Europe in an effort to push forward the limits of robotic cognition.

Cognitive robots are gaining autonomy, enabling their deployment in increasingly open-ended environments. This offers enormous possibilities for improvements in the economy and in human wellbeing. However, it also poses serious risks that are difficult for humans to assess and control. The trend towards increased autonomy brings with it growing problems of reliability, resilience, and trust for autonomous robots in open worlds. The essence of the problem can be traced to robots suffering from a lack of understanding of what is going on and a lack of awareness of their role in these situations. This is a problem that artificial intelligence approaches based on machine learning are not addressing well. Autonomous robots do not fully understand their open environments, their complex missions, their intricate realizations, and the unexpected events that affect their performance. The capability of autonomous robots to understand needs to be improved.

The CoreSense project tries to provide a solution to this need in the form of an AI theory of understanding, a theory of robot awareness, and engineering-grade reusable software assets to apply these theories in real robots. The project will build three demonstrations of their capability: augmenting the resilience of drone teams, the flexibility of manufacturing robots, and the human alignment of social robots.

In summary, CoreSense will develop a cognitive architecture for autonomous robots based on a formal concept of understanding, supporting value-oriented situation understanding and self-awareness to improve robot flexibility, resilience and explainability.

There are six project partners:

Universidad Politécnica de Madrid – ES – Coordinator
Delft University of Technology – NL
Fraunhofer IPA – DE
Universidad Rey Juan Carlos – ES
PAL Robotics – ES
Irish Manufacturing Research – IE

Principal Investigator: Ricardo Sanz

Questions on Mind Theory

With Jaime Gómez

In the recent and not so recent past, several formal and abstract models have attempted to shed light on the core topics of Mind and Brain, Robotics and Artificial Intelligence. This has created a vast proliferation of published information, which currently lacks any single dominant model for understanding mental processes.

The Journal of Mind Theory that we are trying to create is an attempt to tackle these problems.

The JMT journal’s aim is to consolidate and explore these formal and abstract tools for modeling cognitive phenomena, creating a more cohesive and concrete formal approach to understanding the mind/brain, striving for precision and clarity in this topic of interest.

What follows is a list of questions posed by Jaime Gómez (JG) and answers from Ricardo Sanz (RS) on these issues.

The questions

JG: First off, to put things in perspective, there seems to be some skepticism about the usefulness of formal approaches. Is formal logic the best mode of thinking about mental processes? Are the grounds of validity of the laws of logic to be found in language, in conceptual structures, in the nature of representation, in the world, or where?

RS: Formal logic is an abstract framework and as such, the grounds of its validity are to be found in its own structure. The programs of Frege and Hilbert established this thread, and the axiomatisations of Russell & Whitehead or Peano reflected it into logic and set theory; a program that was partially broken by Gödel’s results. The question is whether the formal can bear any strong relation to the real. My belief is that the answer is yes. The reason for believing this is a question of plain evidence: the laws of physics seem to be in strong correlation with reality (cf. the bewilderment shown by Wigner). And it is not just a question of approximation —we can always approximate any data set with an arbitrarily complex function— it is a question of how simple yet precise the laws are on the formal side and on the real side.
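To make the approximation point concrete, here is a minimal sketch (an illustration of mine, not from the interview): any ten data points can be fitted almost exactly by a degree-9 polynomial, yet only the simple underlying law keeps its predictive power outside the sampled range.

```python
import numpy as np

# Ten noisy samples of a simple "law": y = 2*x.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 10)
y = 2.0 * x + rng.normal(scale=0.05, size=x.size)

simple_law = np.polyfit(x, y, 1)   # two parameters: simple and precise
overfit = np.polyfit(x, y, 9)      # passes (almost) exactly through every point

# Both describe the data, but only the simple law predicts sensibly
# beyond the sampled range.
print("simple law at x=2:  ", np.polyval(simple_law, 2.0))  # close to 4
print("degree-9 fit at x=2:", np.polyval(overfit, 2.0))     # wildly off
```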

As for the question of whether formal logic is the best mode of thinking about mental processes, I think the answer is both yes and no. Yes, because mental processes are informational processes and as such, logic has a crucial role to play there. No, because there are other, more powerful mathematical formalisms that better fit the descriptions we are looking for. The grounds of validity of the laws of logic, or better, of mathematical physics, are to be found in the existence of isomorphic, shared structures between formalisms and reality.

JG: Are embodied and situated approaches more relevant than the use of formal tools for the modeling of biological phenomena, in particular mental processes?

RS: I don’t really understand what “embodied” and “situated” mean. In a very precise sense, all real systems are embodied and situated. They all have a body and are placed somewhere. For some authors, “embodied” does not mean just “having a body” but having an operation driven by a “mind” that is scattered throughout the body. I cannot but agree with this spread-mind idea, on the understanding that minds are informational-control processes distributed throughout the body. The problem is then not the question of “embodiment” but the very possibility of the existence of “non-embodied” minds. There is no way of having a real system that is purely abstract. Abstractions are necessarily reified if they exist. Therefore, the embodied mind vs. formal mind is a false dichotomy. AI systems based on inference engines are as embodied as any robot in a strong ontological sense. Formal tools are used to think about systems and are then mapped into embodied and situated realizations.

A different consideration places the distinctions embodied vs. non-embodied and situated vs. non-situated on the side of the thinker, scientist or engineer, and not in the target system itself. This means that there is a way of thinking about and building robots that may be labeled “embodied and situated robotics”. The same can be said for the analysis of living systems or for theorizing. The problem here is very simple indeed. What can we say of a model of a system that puts the mind in the brain and not throughout the body? What we can say about this kind of model will depend on the system being modeled: if in this system the information and control processes happen in the brain, the model may be good; if there are information and control processes beyond the brain, the model is certainly bad.

So, there is no such thing as “embodied modeling”; there are just good models and bad models, and what the “embodied and situated” approach has discovered is a blatant aspect of systems: dynamical phenomena -especially those driven by information- can happen in all subsystems. This is nothing new but common understanding in all science and engineering, and a central topic of control theory: controllers -minds- must necessarily take into account the dynamics of the body to properly control it. Thinking that a controller can move a robot arm to perform any task in the absence of bodily considerations is not just “non-embodied robotics”; it is simply bad engineering. Something that lurks here is ignorance of a simple fact: given a concrete system, not all behaviors are dynamically possible, and hence a working mind cannot actually decide upon the proper actions in the absence of knowledge of the dynamics. This can be read as “minds cannot be separated from bodies”, as the embodiment camp does, or simply as “controllers must take dynamics into account”. Nevertheless, this is not a new insight, but common trade in cybernetics and control engineering.
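A minimal sketch of this control-theoretic point, under assumed illustrative parameters (a hypothetical one-joint arm, not any system from the text): the same PD controller settles on its target when it models the body’s gravity torque, and misses it when it does not.

```python
import math

# Hypothetical one-joint robot arm (pendulum-like body), illustrative values.
M, L, G, DT = 1.0, 0.5, 9.81, 0.001   # mass (kg), length (m), gravity, time step (s)
KP, KD = 30.0, 8.0                    # PD controller gains
I = M * L * L                         # moment of inertia about the joint

def simulate(model_the_body, target=math.pi / 4, steps=5000):
    """Run a PD controller; optionally compensate the body's gravity torque."""
    theta, omega = 0.0, 0.0
    for _ in range(steps):
        torque = KP * (target - theta) - KD * omega
        if model_the_body:
            # The controller -the 'mind'- uses a model of the body's dynamics.
            torque += M * G * L * math.sin(theta)
        # Body dynamics: gravity acts whether the controller models it or not.
        alpha = (torque - M * G * L * math.sin(theta)) / I
        omega += alpha * DT
        theta += omega * DT
    return theta

print(f"target:            {math.pi / 4:.4f} rad")
print(f"modeling the body: {simulate(True):.4f} rad")   # reaches the target
print(f"ignoring the body: {simulate(False):.4f} rad")  # steady-state error
```

The controller that ignores the body is not “non-embodied”; it is just a worse controller, which is exactly the point above.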

So the question of embodiment is just whether or not bodily dynamics is taken into account when acting. And the question of situatedness is just whether or not environmental dynamics is taken into account when acting, the analysis being similar to that of embodiment.

This is so for both the artificial and the natural. The case of the biological phenomenon is a particular case of this more general phenomenon of bodily/world dynamics being of relevance for bodily/world behavior. The “embodied and situated” thinking about biological phenomena or robot construction is hence just trying to avoid the naïve approaches of the illiterate.

JG: The “two cultures” conflict that C.P. Snow pointed out looks far from resolved; rather, “the cultural panorama” seems to be more and more atomized within each of Snow’s cultures. The practitioners of science and engineering in the cognitive sciences seem to diverge, questioning each other’s methods and results rather than reaching any joint consensus.

If the Sciences seek to understand the physical world and Engineering seeks to build better systems, is it justifiable to build artificial systems designed to perform tasks that are already easily accomplished by human beings? (We are referring here to whether building a humanoid robot that pours a cup of tea properly or walks straight sheds light on the sensorimotor mechanisms that humans use to carry out such tasks.)

RS: The case of the two cultures is a case of misunderstanding what the cultures are. They are not separated for educational reasons, but because they are incommensurable: they have very different purposes. In a rough analysis, the people in both worlds -the scientists and the humanists- try to make their living in a particular niche. In a sharper analysis, the “scientific” culture has as its single objective making life easier (this is then decomposed into sub-objectives like understanding how the world works or moving water to our homes). The “humanities” culture has as its objective the personal promotion of each author in a cultural context (in a sense, it is mostly show business).

In another reading, the humanities could be understood as the engineering of cultural assets of experiential value; however, the lack of solid theories has driven them to create self-perpetuating myths like the many arts, religions, or political regimes. The arguable attempt of the humanities to build an understanding of the human -a scientific endeavor indeed- is devastated by the lack of objective decision-making processes for choosing between theories. The proper way of understanding man is cybernetics (in McCulloch’s words, “understanding human understanding”), but the humanities tend in general to neglect the role that scientific knowledge about humans has to play in their business.

This is a difficult gap to fill because the problem of scientific theorizing about human thoughts and phenomenologies is daunting. Scientists are not willing to risk or waste their careers on a task with such minimal hope of success, and humanists have the interest but lack the competences for the necessary work.

The question of whether it is justifiable to build artificial systems designed to perform tasks that are already easily accomplished by human beings can be answered in this Snowean gap context. The question of whether it makes sense to build a machine to better understand humans has a simple answer: yes. Our mathematical incompetence to solve -formally- some human-systems problems makes construction and experimentation necessary to explore -physically- the enormous range of design alternatives.

The theories of “the human” that we may have are of three kinds:

  • Rigorous mathematical theories -as those of physics- that we cannot solve analytically except in their simplest forms, far from the complexities of a full-fledged human mind.
  • Literary theories from the humanities lacking the necessary intersubjectivity and positive character.
  • Executable models, which are reifications -usually in simplified form- of mathematical theories, to be used in the performance of experiments.

Obviously, the best to have are the rigorous mathematical theories -for the purposes of science, not for the self-promotional purposes of the humanities- because they would be universally predictive. However, the possibility of analytically solving billions of simultaneous Izhikevich neuron equations is well beyond reach. On the other side, executable models only make particular predictions -of no universal value- but at least give us something of objective value.
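As an illustration of the executable-model kind, here is a minimal sketch of a single Izhikevich neuron integrated with plain Euler steps. The (a, b, c, d) values are the standard regular-spiking parameters of Izhikevich’s model; the constant input current is an arbitrary choice for the example.

```python
# Izhikevich neuron: v' = 0.04*v**2 + 5*v + 140 - u + I,  u' = a*(b*v - u),
# with the reset v <- c, u <- u + d whenever v >= 30 mV.
A, B, C, D = 0.02, 0.2, -65.0, 8.0   # regular-spiking parameters
DT = 0.25                            # Euler integration step (ms)
I_EXT = 10.0                         # constant input current (illustrative)

v, u = C, B * C                      # start at rest
spike_times = []
for step in range(int(1000 / DT)):   # simulate 1000 ms
    if v >= 30.0:                    # spike: record it and reset the state
        spike_times.append(step * DT)
        v, u = C, u + D
    v += DT * (0.04 * v * v + 5.0 * v + 140.0 - u + I_EXT)
    u += DT * A * (B * v - u)

print(f"{len(spike_times)} spikes in 1 s; first ones at (ms):",
      [round(t, 2) for t in spike_times[:5]])
```

A single neuron executes in microseconds; it is the analytical solution of billions of coupled copies of these equations that is beyond reach.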

But the value of the executable models can only be such if their results are fed back into the very theory that is the original backdrop of the executable model. Only in this way will the construction of humanoid robots prove of any scientific value. However, the lack of rigorous specifications of this theoretical backdrop and of the model’s simplifying assumptions turns most of the work on humanoid robotics into just a media show trying to profit from human empathy for humans. These “researchers” are much worse than the humanists working on their self-promotion because they pretend to be doing real science -placing them somewhere between Frankfurt’s bullshitters and plain liars.

JG: The Hard Problem of consciousness -how we explain a state of consciousness in terms of its neurological basis- is a very controversial topic for which philosophers and scientists have different approaches and answers. Philosophers like Ned Block argue that the claim that a phenomenal property is a neural property seems just as mysterious -maybe even more mysterious- than the claim that a phenomenal property has a certain neural basis. Do you think the Hard Problem of consciousness is a problem, a philosophical dilemma, or a scientific challenge at all?

RS: The problem of consciousness is no more and no less than a scientific problem. There are some observed regularities and we still lack a positive universal law that captures all of them. This problem is said to be hard because of the apparent difference between first- and third-person experiences. But there is no such thing as a third-person experience. The experience of dropping a bottle of wine from the top of the tower of Pisa is as first-personal as the experience of drinking the wine. The only issue at stake in first-/third-person science is the abstract repeatability of experiences in controlled settings. By abstract repeatability, I mean the experience described at a level of abstraction that gets rid of unnecessary details. In the case of the drop, we can abstract from the concrete tint of the sunlight, the position of the earth in its orbit, or the concrete number and nationalities of the other tourists in the tower. If we describe the experience at a certain abstraction level -e.g. the number of milliseconds a clock ticked- we can expect to obtain some laws (obviously, if the world behaves in such a way).

The separation of what is relevant and what is not to achieve the required level of abstraction is hence the cornerstone of shareable experiences -i.e. the very nature of science and engineering. This is a problem for consciousness research because the human brain is very complex and not easily accessible. The time will come when a deep understanding of brain structure will be ready to be used in the systematic analysis of massive data coming from real-time brain observation. Then we will be able to separate the wheat from the chaff and to establish rigorous correlations —i.e. scientific laws— between stimuli and qualia. The laws of redness will come, as will the laws of love and selfhood. There is no mystery here, just ignorance and limited experimental capability.

JG: Another problem usually referred to in the philosophy of mind literature is the Problem of Access Consciousness: how can we find out whether there is a conscious experience without the cognitive accessibility required for reporting that conscious experience, since any evidence would have to derive from reports that themselves derive from that cognitive access? Do you think the Problem of Access Consciousness is a problem, a philosophical dilemma, or a scientific challenge at all?

RS: This argument is permeated with two fallacies: the homunculus fallacy and the first-person fallacy. The latter refers to the false argumentation about an intrinsic difference between first-person experience and third-person experience. In the same sense that we can observe and measure someone digesting, we will be able to observe and measure someone feeling. The former comes from thinking that there is a part of our brain/mind that is “me” and the rest is something that I own and/or use. Obviously, language cannot tell us about everything that is going on in the brain (or the body as a whole). When we use language to “talk with a person”, what we are doing is indeed “talking with a fragment of the person”, but identifying that fragment with the person itself is a mistake. This means that, in general, reports on consciousness are necessarily fragmentary, because the information available to the reporting mechanism is partial.

With the development of a general scientific theory of consciousness and advances in experimental resources (see the previous question), we will be able to tell scientifically -in third-person lingo- when someone is having a particular experience. The path to follow will be similar to our present capability of telling when someone is suffering an epileptic seizure or a heart attack. Experimental signals will tell us whether some phenomenon is happening and to what degree. Verbal reports will then no longer be necessary.

JG: In the celebrated The Structure of Scientific Revolutions, T.S. Kuhn argued that science does not progress via a linear accumulation of new knowledge, but undergoes periodic revolutions or paradigm shifts. Three main stages can be distinguished in science. First comes prescience, which lacks a central paradigm. This is followed by normal science, when scientists attempt to validate observed facts within the paradigm. In the final stage, the failure of a result to conform to the paradigm is seen not as refuting the paradigm, but as the mistake of the researcher. Thus, as anomalous results continue to be produced, science reaches a crisis, at which point revolutionary science leads to a new paradigm, which subsumes the old good results along with the anomalous results into one framework. Is the Kuhnian paradigm an inappropriate metaphor for the working of the human mind in itself? Do you think that Kuhn’s account of the development of scientific paradigms provides significant insights into the current state of the cognitive sciences? Which of the three phases are we in now?

RS: The search for a theory of mind is indeed the very elucidation of the nature of science: the correlation between knowledge and reality.

The prescience phase could be equated to Pinker’s blank slate, but I don’t think our minds start from scratch. Our genetic material takes us directly into a “normal-science” phase of the mind that corresponds to the normal operation of a brain using established knowledge and fine-tuning it to the concrete environment of the agent. Brain revolutions happen continuously in the mind/brain, driven by mismatches between the real and the expected. They are true revolutions in the catastrophic sense of Thom and take the form of non-linear attractors, as Freeman has shown us.

Concerning the application of Kuhn’s model to cognitive science, I think everybody agrees on the common paradigm given by theoretical neuroscience. In this sense, we are in the phase of normal science, but we have a big problem with the character of the anomalous results. For some researchers there are plenty of anomalous results, or even whole topics -e.g. the qualia issue- that the established paradigm does not address -cf. Chalmers’ “hard problem”.

For other researchers -including myself- we are still lacking some pieces of the global picture of the theory, but there are no such anomalous results. What we have is a critical incompetence in applying the theory and deriving predictions from it when addressing problems of real scale. There are no anomalous results because there are no complete predictions concerning experiments with real systems. Projects like Blue Brain try to test this hypothesis by applying high-performance computation to the simulation of large portions of the brain.

JG: Undeniably, the Galilean distinction between primary and secondary properties represented a great advance in science, because it allowed scientists to work on physical phenomena without scholastic disquisitions and without being distracted by issues that the church authorities perceived as eminently human and therefore divine features. Do you think this Galilean distinction between quantitative and qualitative properties is still valid? How close are we to explaining the qualia in a quantitative manner?

RS: I don’t think this distinction is valid or useful any longer. It is clear that in the past it helped focus on certain aspects of nature that were at the same time easier and more politically correct. However, this is not the case now that we focus on very complex properties -like consciousness- and there are no religious issues at stake.

All properties -whether primary or secondary- are measured in a process of interaction between an observed object and a measurement device. In a sense, the simpler the interaction process, the less effect can be attributed to the measurement device, and hence the closer the measurement is to being a measurement of an intrinsic property of the object (with the necessary provisions for Heisenberg’s uncertainty). In this very sense, qualia are abstractions -higher-level measurements- derived from interactions between the sensed object and the sensing agent -a grown-up device, indeed.

The explanation of human qualia in a quantitative manner is certainly coming, but two previous steps are necessary:

  1. The formulation of a theoretical model of qualia of universal character -i.e. not chauvinistically anthropomorphic or animalmorphic. This is underway in the theoretical consciousness community and will coalesce in a few years.
  2. The development of measurement devices for neural activity with higher spatial and temporal resolution, able to observe concrete individual neuronal assemblies in vivo. This is a very difficult problem and may not be solved for many years. However, this second step may not be necessary if the theory of qualia is solid enough to give precise accounts of all extant phenomena and to be accepted as a satisfactory explanation by the scientific community. The step may still be needed to deal with the irreducible believers in the special nature of consciousness -mysterians- who may only be convinced after the prediction and confirmation of suitable ad-hoc experimental tests.

JG: Aristotle claimed that definitions presuppose the existence of some primitive concepts that cannot themselves be defined; otherwise, we would never finish defining things. Do you find this approach “usable” in the current scientific paradigm? How does this claim relate to our discussion here?

RS: The end of the apparent infinite regression of definitions may be a set of closed laws that bind magnitudes together and have predictive power. This may be read as a self-sustaining network of definitions (as is the case in physics: f = m·a). My impression, however, is that the coming definitions in mind theory will be in terms of extant physics and information theory, in a totally reductionistic sense.

JG: A real epistemological understanding requires attention not only to the propositions known or believed, but also to knowing subjects and their interactions with the world and with each other. All serious empirical inquirers -historians, literary scholars, journalists, artists, etc., as well as scientists- use something like the “hypothetico-deductive method”. How does someone’s seeing or hearing contribute to the warrant of a claim when key terms are learned by association with these observable circumstances?

RS: The theory of mind we are looking for will represent the convergence and resolution of ontology and epistemology into one single theory. The theory of mind will indeed explain and predict how a knowing subject interacts with the world in a meaningful sense. The key here will be the provision of a theory of mind that is indeed a theory of science: how it is possible for knowledge of something to be correlated with the reality of that very something.

The way this epistemological-ontological consilience is going to happen can be captured in a simple vision: nature is organized, and what actually happens rigorously follows laws. There are no surprises or miracles. The question of the exact nature of these laws -e.g. whether they are probabilistic or not- is irrelevant for our theoretical purpose. The only requirements are that the laws be predictive -so they can be used as anticipatory tools- and knowable -i.e. capable of being captured in an information-control infrastructure. In this sense, we can trust what someone has learnt -by building associations among observable circumstances- if we are able to discount from the learned laws the concrete, particularity-laden distortions coming from the individual processes of perception and action.

JG: How can works of imaginative literature or art convey truths they do not state? Could incorporating this non-formal, more abstract trajectory possibly be useful? How does the precision sought by a logician differ from that sought by a novelist or poet?

RS: In the end, the problem of conveying truths is a problem of conveying a particular abstract form or structure. The vehicle can be directly abstract and tightly correlated with the aspects and complexities of the truth at hand (cf. Wigner’s comments on the effectiveness of mathematics in physics). But the vehicle can also be less abstract, more concrete and experiential, and still convey the form that constitutes the truth to be transmitted.

The discovery of truth in the arts will hence try to get rid of the details of the medium and even of the concrete message -remember McLuhan’s analysis- and focus on the abstractions reified in the message. This implies a voyage from the minute details of physicality into the transcendental forms of the hierarchical abstraction. The main difference between the logician and the artist is not that they try to convey different truths, but that they have different strategies for packing them. Logicians strive for the truth as it is; artists want the truth too, but they enjoy more the process of unpacking it from the medium.

Suggested Readings

Harry Frankfurt: On Bullshit
Eugene Izhikevich: Dynamical Systems in Neuroscience
Warren McCulloch: Embodiments of Mind
Marshall McLuhan: Understanding Media: The Extensions of Man
René Thom: Structural Stability and Morphogenesis
Walter J. Freeman: Neurodynamics
Eugene P. Wigner: The Unreasonable Effectiveness of Mathematics in the Natural Sciences
Steven Pinker: The Blank Slate