IS4SI 2021

The 2021 Summit of the International Society for the Study of Information

September 12-19, 2021

IS4SI Summit

General Program of Plenary Sessions


There are two Plenary Sessions (called here prime-time blocks) each day:

Block 1: 4:00-7:00 UTC and Block 2: 13:00-16:00 UTC.


In these two blocks of time each day, the organizers of Contributing Conferences were requested NOT TO SCHEDULE any activities, except for the prime-time block allocated to their conference. All Contributing Conferences have been allocated time within prime-time, typically one three-hour block.


Contributing Conferences can schedule sessions that do not belong to this plenary schedule on any day (September 12-19) and at any time between the opening and closing of the Summit, outside the two plenary prime-time blocks.


The schedule below includes the abstracts of Keynote and Invited Speakers invited for plenary contributions by the organizers of Contributing Conferences, or by the organizers of the Summit without affiliation to any particular conference. The schedules of presentations, discussions, etc. not allocated to Plenary Sessions are planned and implemented by the organizers of the Contributing Conferences. These schedules are not presented here, but they will be displayed on the IS4SI website.


The Schedule of Plenary Sessions presented here is intended to be final. However, in the case of an unexpected and unavoidable need for revisions, changes will be announced in the Updates on the website of the Summit.


Several presentations, lectures, and discussions in this plenary schedule are contributions to the Summit from Contributing Conferences. They are listed here and, at the same time, in the schedules of their respective conferences. The majority of presentations belong to the schedules of the Contributing Conferences, which frequently run in parallel sessions. These presentations are not listed in the plenary program, but only in the programs of the conferences. All participants are invited to join the audiences of several Contributing Conferences.


All Plenary Sessions (and ONLY these sessions) will be Zoom meetings hosted by Marcin Schroeder. The invitation link will be provided later. Online parallel sessions will be hosted on the platforms chosen by the organizers of the conferences (some on alternative platforms, not necessarily Zoom), but the links to these sessions will be displayed on the website of the Summit.


This document is about the plenary events of the 2021 IS4SI Summit. Information about the contributing conferences can be found on the website of the Summit (https://summit-2021.is4si.org).


Some contributing conferences have their own web pages:

Dighum web page: https://gsis.at/2021/08/10/is4si-2021-digital-humanism-workshop-programmed/

TFPI web page: Theoretical and Foundational Problems (TFP) in Information Studies (tfpis.com)

IWNC web page: https://www.natural-computing.com/#iwnc-13

SIS web page: Schedule for the Conference “Symmetry, Structure and Information” in the IS4SI 2021 Summit – The International Society for the Interdisciplinary Study of Symmetry (symmetry-us.com)

Web page courtesy of Aaron Sloman: IS4SI-IWMNC-MORCOM-DIGHUM-TFPI (bham.ac.uk)

MORCOM page with schedule: IS4SI_MORCOM_SCHEDULE.pdf - Google Drive


The abbreviations for the names of contributing conferences used here are as follows:

TFPI – Theoretical and Foundational Problems in Information Studies

BICA – Information in Biologically Inspired Computing Architectures

Dighum – Digital Humanism

SIS – Symmetry, Structure, and Information

MORCOM – Morphological Computing of Cognition and Intelligence

H&R – Habits & Rituals

IWNC – 13th International Workshop on Natural Computing

APC – Philosophy and Computing

ICPI – The 5th International Conference on Philosophy of Information

GFAI – Global Forum for Artificial Intelligence



Book of Abstracts for Plenary Presentations and Introductions to Discussions of the 2021 IS4SI Summit

(edited by Marcin J. Schroeder)

The list shows the titles of abstracts and names of authors in the chronological order of their presentation at plenary sessions. The list is followed by the abstracts in the same order.

1. Introduction to IS4SI Panel Discussion “What is the SI in IS4SI?” moderated by Marcin J. Schroeder

2. Autopoietic machines: Going beyond the half-brained AI and Church-Turing Thesis presented by Rao Mikkilineni

3. Research in the area of Neosentience, Biomimetics, and Insight Engine 2.0 by Bill Seaman

4. Mind, Nature, and Artificial Magic by Rossella Lupacchini

5. Non-Diophantine arithmetics as a tool for formalizing information about nature and technology by Michele Caprio, Andrea Aveni and Sayan Mukherjee

6. Ontological information - information as a physical phenomenon by Roman Krzanowski

7. Materialization and Idealization of Information by Mark Burgin

8. Paradigm Shift, an Urgent Issue for the Studies of Information Discipline by Yixin Zhong

9. Structural Analysis of Information: Search for Methodology, by Marcin J. Schroeder

10. Quality of information by Krassimir Markov

11. A QFT Approach to Data Streaming in Natural and Artificial Neural Networks by Gianfranco Basti and Giuseppe Vitiello

12. Arithmetic loophole in Bell's theorem: Overlooked threat to entangled-state quantum cryptography by Marek Czachor

13. Advanced NLP procedures as premises for the reconstruction of the idea of knowledge by Rafal Maciag

14. Toward a Unified Model of Cognitive Functions by Pei Wang

15. A Nested Hierarchy of Analyses: From Understanding Computing as a Great Scientific Domain, through Mapping AI and Cognitive Modeling & Architectures, to Developing a Common Model of Cognition by Paul Rosenbloom

16. The Development and role of Symmetry in Ancient Scripts by Peter Z. Revesz

17. Symmetry and Information: An odd couple (?) by Dénes Nagy

18. Antinomies of Symmetry and Information by Marcin J. Schroeder

19. Introduction to SIS Conference Panel Discussion by Dénes Nagy & Marcin J. Schroeder

20. Digital Humanism by Julian Nida-Rümelin

21. Humanism Revisited by Rainer E. Zimmermann

22. The Indeterminacy of Computation: Slutz, Shagrir, and the mind by B. Jack Copeland

23. Falling Up: The Paradox of Biological Complexity by Terrence W. Deacon

24. Almost disjoint union of Boolean algebras appeared in Punch Line by Yukio Pegio Gunji

25. Why don't hatching alligator eggs ever produce chicks? by Aaron Sloman

26. Morphogenesis as a model for computation and basal cognition by Michael Levin

27. Cross-Embodied Cognitive Morphologies: Decentralizing Cognitive Computation Across Variable-Exchangeable, Distributed, or Updated Morphologies by Jordi Vallverdú

28. Designing Physical Reservoir Computers by Susan Stepney

29. The Aims of AI: Artificial and Intelligent by Vincent C. Müller

30. Cognition through Organic Computerized Bodies. The Eco-Cognitive Perspective by Lorenzo Magnani

31. Digital Consciousness and the Business of Sensing, Modeling, Analyzing, Predicting, and Taking Action by Rao Mikkilineni

32. On Leveraging Topological Features of Memristor Networks for Maximum Computing Capacity by Ignacio Del Amo and Zoran Konkoli

33. Habits and Rituals as Stabilized Affordances and Pregnances: A Semiophysical Perspective by Lorenzo Magnani

34. A neurocomputational model of relative value processing: Habit modulation through differential outcome expectations by Robert Lowe

35. Capability and habit by Matthias Kramm

36. Collective Intentionality and the Transformation of Meaning During the Contemporary Rituals of Birth by Anna M. Hennessey

37. Habitual Behavior: from I-intentionality to We- intentionality by Raffaela Giovagnoli

38. Machines computing and learning? by Genaro J. Martínez

39. Computing with slime mould, plants, liquid marbles and fungi by Andy Adamatzky

40. Introduction to IWNC Panel Discussion by Marcin J. Schroeder

41. Exploring open-ended intelligence using patternist philosophy by Ben Goertzel

42. The Artificial Sentience Behind Artificial Inventors by Stephen Thaler

43. Potential Impacts of Various Inventorship Requirements by Kate Gaudry

44. Panel Commentary by Peter Boltuc

45. On Two Different Kinds of Computational Indeterminacy by Oron Shagrir, Philippos Papayannopoulos, and Nir Fresco

46. Cognitive neurorobotic self in the shared world by Jun Tani

47. The Future of Anthroposociogenesis – Panhumanism, Anthroporelational Humanism and Digital Humanism by Wolfgang Hofkirchner

48. The Philosophy – Science Interaction in Innovative Studies by Yixin Zhong

49. Information and the Ontic-Epistemic Cut by Joseph Brenner

50. A Chase for God in the Human Exploration of Knowledge by Kun Wu, Kaiyan Da, Tianqi Wu

51. The Second Quantum Revolution and its Philosophical Meaning by Hongfang L.

52. Information and Disinformation with their Boundaries and Interfaces by Gordana Dodig-Crnkovic

53. A Quantum Manifestation of Information by Tian’en Wang

54. Computation and Eco-Cognitive Openness-Locked Strategies, Unlocked Strategies, and the Dissipative Brain by Lorenzo Magnani

55. In what sense should we talk about the perception of other minds? by Duoyi Fei

56. An a Priori Theory of Meaning by Marcus Abundis

57. Some Problems of Quantum Hermeneutics by Guolin Wu

58. The fast-changing paradigm of war calls for great wisdom of peace by Lanbo Kang

59. Technologies, ICTs and Ambiguity by Tomáš Sigmund

60. The Data Turn of Scientific Cognition and the Research Program of Philosophy of Data by Xinrong Huang

61. Testimony and Social Evidence in the Covid Era by Raffaela Giovagnoli

62. Developments of research on the Nature of Life from the Information Theory of Individuality by Dongping Fan, Wangjun Zhang

63. On Information Interaction between the Hierarchy of the Material System by Zhikang Wang

64. Informational Aesthetics and the Digital Exploration of Renaissance Art by John Douglas Holgate

65. Practice, Challenges and Countermeasures of Accelerating the Development of new Generation of Artificial Intelligence in Xinjiang by Hong Chen

66. A Basic Problem in the Philosophy of Information Science: Redundant Modal Possible World Semantics by Xiaolong Wan

67. Paradigm Revolution Creates the General Theory of AI by Yixin Zhong

68. Intelligence Science Drives Innovation by Zhongzhi Shi

69. On the Essential Difference Between the Intelligence Body and the Program Body by He Huacan & He Zhitao

70. Human body networks mechanisms of the Covid-19 symptoms by Pin SUN, Rong LIU, Shui GUAN, Jun-Xiu GAO, and Chang-Kai SUN

71. The Development and Characterization of A New Generic Wearable Single-Channel Ear-EEG Recording Platform by Rong Liu

72. Brain Imitating Method for Social Computing - Illumination of Brain Information Processing System by Liqun Han

73. Research and Prospects of Artificial Intelligence in Traditional Chinese Medicine by Zixin Shu, Ting Jia, Haoyu Tian, Dengying Yan, Yuxia Yang, and Xuezhong Zhou

74. A Framework of "Quantitative ⨁ Fixed Image ⇒ Qualitative " induced by contradiction generation and Meta Synthetic Wisdom Engineering by Jiali Feng

75. Paradox, Logic and Property of Infinity by Jincheng Zhang

76. A Call for Paradigm Shift in Information Discipline by Zhong Yixin

IS4SI Summit General Program of Plenary Sessions

SEPTEMBER 12-19

SUNDAY, SEPTEMBER 12


Block 1:

4:00-7:00 UTC

Sun 12th Sep

IS4SI

Marcin Schroeder

1) Opening with the Presidential Welcome and Short (5-10 minute) presentations of all conferences by organizers (60-90 minutes)

2) Discussion "What is SI in IS4SI?" (Moderator: Marcin J. Schroeder) (ca. 1 hour)

PANEL DISCUSSION

5:00-6:00 UTC

Sun 12th Sep

PANEL DISCUSSION (following short presentations of all Contributing Conferences)

1. “What is the SI in IS4SI?”

Moderated by Marcin J. Schroeder

Confirmed Panelists: Joseph Brenner, Mark Burgin, José Maria Diaz-Nafria, Gordana Dodig-Crnkovic, Wolfgang Hofkirchner, Pedro C. Marijuán, Yixin Zhong

(Moderator’s Introduction)

The question about the Study of Information (spoiler alert: yes, this is the SI in IS4SI) is highly non-trivial, and at the same time there is an urgent need for a discussion addressing the misconceptions surrounding information and its inquiries. The goal of such a discussion is not to close SI into a compartment of the classification of human intellectual or practical activities by providing a formal definition, but rather to set it free from the limitations coming from habits of thinking and from the use of the word “information” in the narrow contexts of specialized disciplines.

To set SI free does not mean to give up the high standards of intellectual discipline or to object to the development of coordinated programs of inquiry. There is a need for a continuing discussion of the ways information can be defined and related to other concepts, in particular knowledge, communication, uncertainty, truth, complexity, etc. The fact that the concept of information is defined in many different ways, with no consensus in sight for the predictable future, should not be a reason for despair. It is just the best evidence of its fundamental importance. Was there ever a non-trivial concept in science or philosophy with a universally accepted and permanent definition?

The diversity of conceptualizations of information, and its presence in virtually all domains of inquiry from the mathematical and physical sciences to the social studies and the humanities, are challenging; but at the same time they give a special position to SI as a way to achieve, or at least to reasonably pursue, a great synthesis of human knowledge. Information as understood in physics, in computer science and the study of computation, and in semiology has characteristics so similar to information as understood in biology and the study of life, of other complex systems at the organismic or population level, or of human organizations, that there is very little risk that the customary use of the same term “information” in all these contexts is accidental. So the danger of searching in vain for a synthesized description of reality with the concept of information as the main tool is negligible. The actual danger is rather that this search becomes dominated by the methods and habits of thinking derived from specific disciplines of a higher level of specialization and advancement, while alternative perspectives are neglected.

I would like to ask the panelists to share their view of SI as it is, or as it should be. These views may differ from those presented above, they may provide alternative perspectives, or they may amplify the importance of the few characteristics of SI already presented here. This is especially important when we interpret the question in a normative way: “What should be the SI in IS4SI?” For instance, someone could object to the emphasis on the synthesis of knowledge and defend the view that the present existential threats to humanity, due to climate change, destruction of the ecosystem, and the misuse of technology (especially information technology) for gaining political or economic power, make it necessary to prioritize the direction of SI. There are many different ways in which priorities may be set.

There are some follow-up questions which may be addressed instead of the one in the title of our discussion.

- How would you encourage young scholars to choose SI as the theme for their study or their future academic or professional career?

- How urgent or important is reaching a consensus on the definition of information or at least developing mutual understanding between different conceptualizations of information?

- How can we prevent the perpetual misattribution of the term “information science” to narrow sub-disciplines of SI, such as computer science or communication engineering, which, due to their high level of specialization and limited interest in the concept of information, should not be considered representative of SI?

- How should SI inform governmental policies, in particular educational policies? Some governmental agencies promote or enforce the naive idea that mandatory classes in computer programming in the K-12 curriculum will create an information-competent society. Is this a first step in the right direction, or rather a waste of time and resources?

Plenary Sessions Contributed by Theoretical and Foundational Problems in Information Studies (TFPI) Conference

Block 2:

13:00-16:00 UTC

Sun 12th Sep

TFPI

Mark Burgin

13:00-13:35 UTC

Sun 12th Sep

2. Autopoietic machines: Going beyond the half-brained AI and Church-Turing Thesis

Rao Mikkilineni

Ageno School of Business, Golden Gate University, San Francisco, CA 94105, USA

Introduction

All living organisms are autopoietic and cognitive. Autopoiesis refers to a system with a well-defined identity that is capable of reproducing and maintaining itself. Cognition, on the other hand, is the ability to process information, apply knowledge, and change the circumstances. Autopoietic and cognitive behaviors are executed using information processing structures that exploit physical, chemical, and biological processes in the framework of matter and energy. These systems transform their physical and kinetic states to establish a dynamic equilibrium between themselves and their environment using the principle of entropy minimization. Through evolutionary learning, biological systems have discovered a way to encode these processes and execute them in the form of genes, neurons, the nervous system, the body, the brain, etc. The genome, the complete set of genes or genetic material present in a cell, defines the blueprint that includes instructions on how to organize resources to create the functional components, how to organize the structure, and the rules to evolve the structure while interacting with the environment using the encoded cognitive processes. Placed in the right environment, the cell containing the genome executes the processes that manage and maintain the self-organizing and self-managing structure, adapting to fluctuations. The mammalian neocortex and the reptilian cortical columns provide the information processing structures to assess risk and execute strategies to mitigate it. The genome and the networks of genes and neuronal structures are organized to function as a system whose components have local autonomy but share information to maintain global stability, with a high degree of resiliency and efficiency in managing resources.

The General Theory of Information (GTI) tells us that information is represented, processed, and communicated using physical structures. The physical universe, as we know it, is made up of structures that deal with matter and energy. As Mark Burgin points out, “Information is related to knowledge as energy is related to matter.” A genome, in the language of GTI [2-4], encapsulates “knowledge structures” coded in the form of DNA and executed using the “structural machines” in the form of genes and neurons. It is possible to model the autopoietic and cognitive behaviors using the “structural machines” described in GTI.

In addition, GTI allows us to design and build digital autopoietic machines with cognitive behaviors, building upon current-generation information processing structures that use both symbolic computing and neural networks. The autopoietic and cognitive behavior of artificial systems functions on three levels of information processing and is based on triadic automata [4-7]. Efficient autopoietic and cognitive behaviors employ the structural machines.

The following four papers, presented at the Theoretical and Foundational Problems (TFP) in Information Studies (IS) conference, provide a framework for designing and building this new class of autopoietic and cognitive machines:

  1. Mikkilineni, Rao; The Science of Information Processing Structures and the Design of a New Class of Distributed Computing Structures

  2. Mikkilineni, Rao; and Burgin, Mark; Designing a New Class of Digital Autopoietic Machines

  3. Renard, Didier; Fitness in a change of era: Complex Adaptive Systems, Neocortex and a New Class of Information Processing Machines

  4. Morana, Giovanni; Implementing a Risk Predictor using an Autopoietic Machine


The Theory and Practice of Information Processing Structures

Structural machines supersede Turing machines through their representations of knowledge and the operations that process information [2, 3]. Triadic structural machines with multiple general and mission-oriented processors enable autopoietic behaviors.

1. From Turing Machines to Structural Machines [2, 3]:

Structural machines process knowledge structures, which incorporate domain knowledge in the form of entities, their relationships, and their process evolution behaviors, as a network of networks in which each node defines functional behaviors and each link defines the information exchange (or communication). The operations on the knowledge structure schema define creation, deletion, connection, and reconfiguration operations based on control knowledge structures. These operations are agnostic to the functions of the nodes and to the information exchanged between them, which provides the composability of knowledge structures across domains. In contrast, Turing machines process data structures, which incorporate domain knowledge in the form of entities and relationships only; their process evolution behaviors are encapsulated in algorithms (programs) that operate on the data structures. Therefore, Turing machine operations are specific to domain knowledge, lack composability across domains, and increase the complexity of processing information and its evolution.
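The domain-agnostic schema operations described above can be illustrated with a minimal sketch. This is not code from the cited papers [2, 3]; all class and operation names are hypothetical, and the sketch only shows why operations that never inspect node payloads compose across domains.

```python
# Minimal illustrative sketch (hypothetical names): a "knowledge structure"
# as a network of named nodes carrying opaque payloads, with directed links
# for information exchange. The schema operations (create, delete, connect,
# reconfigure) never look inside a payload, which is what makes them
# reusable for any domain.
class KnowledgeStructure:
    def __init__(self):
        self.nodes = {}     # name -> opaque payload (behavior, data, sub-network)
        self.links = set()  # (source, target) information-exchange channels

    def create(self, name, payload=None):
        self.nodes[name] = payload

    def delete(self, name):
        self.nodes.pop(name, None)
        self.links = {(s, t) for (s, t) in self.links if name not in (s, t)}

    def connect(self, source, target):
        if source in self.nodes and target in self.nodes:
            self.links.add((source, target))

    def reconfigure(self, old_target, new_target):
        # Redirect every link pointing at old_target toward new_target,
        # without touching what either node computes.
        self.links = {(s, new_target if t == old_target else t)
                      for (s, t) in self.links}

ks = KnowledgeStructure()
ks.create("sensor"); ks.create("model_v1"); ks.create("model_v2")
ks.connect("sensor", "model_v1")
ks.reconfigure("model_v1", "model_v2")  # swap a component purely structurally
```

Note that `reconfigure` rewires the topology without any knowledge of what "sensor" or "model_v2" actually do; a program operating on a conventional data structure would need that domain knowledge.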


2. Changing system’s behaviors using functional communication [3, 4]:

The behavioral changes are embedded in the knowledge structures; therefore, functional communication or information exchange induces the behavioral changes in the various entities of the knowledge structures. Changes are propagated through the knowledge structures when events produce changes in arbitrary attributes of the system entities. This enables self-regulation of the system. In contrast, under external control the behavioral changes are caused by rules embedded in algorithms outside the data structures, and to execute the behavioral changes, the programs have to understand the domain knowledge of the data structures in order to perform operations on them.
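The contrast between self-regulation and external control can be sketched as follows. This is an illustrative observer-style sketch under my own assumptions, not the authors' implementation; every name is hypothetical. The point is that the reaction rule lives inside the node itself, so an information exchange, rather than an external program, triggers the behavioral change.

```python
# Illustrative sketch (hypothetical names): each node carries its own
# attributes plus an embedded reaction rule; an event changing one node's
# attribute propagates along information-exchange links, triggering the
# embedded rules (self-regulation). Assumes acyclic links in this sketch.
class Node:
    def __init__(self, name, react=None):
        self.name = name
        self.attrs = {}
        self.react = react or (lambda attrs, key, value: None)

class SelfRegulatingStructure:
    def __init__(self):
        self.nodes = {}
        self.links = {}                      # name -> set of downstream names

    def add(self, node):
        self.nodes[node.name] = node
        self.links.setdefault(node.name, set())

    def link(self, src, dst):
        self.links[src].add(dst)

    def event(self, name, key, value):
        node = self.nodes[name]
        node.attrs[key] = value
        node.react(node.attrs, key, value)   # the node's own embedded rule
        for downstream in self.links[name]:  # propagate along the links
            self.event(downstream, key, value)

def throttle(attrs, key, value):
    # Embedded self-regulation rule: degrade gracefully under high load.
    if key == "load" and value > 0.8:
        attrs["mode"] = "degraded"

s = SelfRegulatingStructure()
s.add(Node("edge", react=throttle)); s.add(Node("core", react=throttle))
s.link("edge", "core")
s.event("edge", "load", 0.95)  # one exchange changes behavior at both nodes
```

No code outside the nodes needed to know what "load" or "degraded" mean, which is the self-regulation property the paragraph describes.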


3. Triadic structural automata and autopoietic behavior [3, 4]:

A triadic structural machine with hierarchical control processors provides the theoretical means for the design of autopoietic automata, allowing transformation and regulation of all three dimensions of information processing and system behavior: the physical, the mental, and the structural. The control processors operate on the downstream information processing structures, where a transaction can span multiple distributed components, by reconfiguring their nodes, links, and topologies based on well-defined pre-condition and post-condition transaction rules to address fluctuations, for example in resource availability or demand.
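The pre-condition/post-condition transaction rule mentioned above can be sketched generically. This is a hedged illustration of the general guarded-transaction idea, not the machinery from [3, 4]; the function and the scaling scenario are my own hypothetical example.

```python
# Hypothetical sketch: a control processor applies a reconfiguration
# "transaction" to downstream state only when a precondition holds, and
# discards the change unless the postcondition holds afterwards.
def transaction(state, pre, apply, post):
    if not pre(state):
        return state                   # precondition failed: no change
    candidate = apply(dict(state))     # work on a copy, not the live state
    return candidate if post(candidate) else state

# Example fluctuation: demand exceeds capacity, so scale out one replica.
state = {"replicas": 2, "demand": 190, "capacity_per_replica": 80}
state = transaction(
    state,
    pre=lambda s: s["demand"] > s["replicas"] * s["capacity_per_replica"],
    apply=lambda s: {**s, "replicas": s["replicas"] + 1},
    post=lambda s: s["demand"] <= s["replicas"] * s["capacity_per_replica"],
)
```

Working on a copy and committing only on a satisfied postcondition is what lets such a transaction span multiple distributed components safely.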


4. Providing global optimization using shared knowledge and predictive reasoning to deal with large fluctuations [5]:

The hierarchical control process overlay in the design of the structural machine allows implementing 4E (embedded, embodied, enactive, and extended) cognitive processes, with downstream autonomous components interacting with each other and with their environment using system-wide knowledge sharing. This allows global regulation to optimize the stability of the system as a whole, based on memory and historical experience-based reasoning. Downstream components provide sensory observations and control using both neural network and symbolic computing structures.

We present the utilization of this theory for building a self-managing federated edge cloud network deploying autopoietic federated AI applications to connect people, things, and businesses, enabling global communication, collaboration, and commerce with high reliability, performance, security, and regulatory compliance.


References

[1] Conference on Theoretical and Foundational Problems (TFP) in Information Studies (IS), September 12-19, 2021 (online), as a part of the IS4SI Summit 2021 (is4si.org). Theoretical and Foundational Problems (TFP) in Information Studies (tfpis.com)

[2] Burgin, M. Theory of Information: Fundamentality, Diversity and Unification, World Scientific: Singapore, 2010.

[3] Burgin, M. and Mikkilineni, R. (2021) On the Autopoietic and Cognitive Behavior. EasyChair Preprint no. 6261. https://easychair.org/publications/preprint/tkjk

[4] Burgin, M. Triadic Automata and Machines as Information Transformers, Information, v. 11, No. 2, 2020, 102; doi:10.3390/info11020102

[5] Burgin, M., Mikkilineni, R. and Phalke, V. Autopoietic Computing Systems and Triadic Automata: The Theory and Practice, Advances in Computer and Communications, v. 1, No. 1, 2020, pp. 16-35.

[6] Burgin, M. and Mikkilineni, R. From Data Processing to Knowledge Processing: Working with Operational Schemas by Autopoietic Machines, Big Data Cogn. Comput. 2021, v. 5, 13. https://doi.org/10.3390/bdcc5010013

[7] Mikkilineni, R. Information Processing, Information Networking, Cognitive Apparatuses and Sentient Software Systems. Proceedings 2020, 47, 27. https://doi.org/10.3390/proceedings2020047027


13:35-14:20 UTC

Sun 12th Sep

3. Research in the area of Neosentience, Biomimetics, and Insight Engine 2.0

Bill Seaman

Professor, Computational Media, Arts and Cultures;

Co-dir. Emergence Lab, Durham, NC. Duke. USA

Abstract

• Neosentience

The goal is to arrive at a model for an intelligent autonomous learning robotic system via transdisciplinary information processes and information exchanges. The long-term goal of this model is to potentially enable Neosentience to arise via the system’s functionality. Research related to this goal is accomplished through the use of an intelligent transdisciplinary database, search engine, a natural language API, a dynamic set of visualization modes, and a series of independent AI collaborators (what we call Micropeers) — The Insight Engine 2.0 (I_E).

Pragmatic benchmarks (as opposed to the Turing Test) are used to define Neosentient robotic entities: the system can exhibit well-defined functionalities. It learns (the enactive approach and others, like conversation theory); it intelligently navigates; it interacts via natural language; it generates simulations of behavior; it metaphorically “thinks” about potential behaviors before acting in physical space; it is creative in some manner; it comes to have a deep situated knowledge of context through multimodal sensing (the embodied, embedded approach); and it displays mirror competence. Seaman and Rössler have entitled this robotic entity The Benevolence Engine. They state that the inter-functionality of such a system is complex enough to operationally mimic human sentience. Benevolence can, in principle, arise in the interaction of two such systems. Synthetic emotions would also become operative within the system. The system would be benevolent in nature. The concept of Neosentience (coined by Seaman) was first articulated in the book Neosentience / The Benevolence Engine by Seaman and Rössler.¹

• The 4 Es of Cognition

The goal is to enfold the Embodied, Embedded, Enactive and Extended approaches to understanding cognition in the human, and then seek to articulate the entailment structures that enable this set of dynamic interrelations to function. Because there are many different biological as well as machinic information systems involved in mapping and articulating such processes, this necessitates a new transdisciplinary holistic approach to biological study and its abstraction via biomimetics, to enable entailment structures to be re-applied in defining a model for a Neosentient system (a new branch of AI). The idea is to define a transdisciplinary holistic approach which seeks to examine dynamic, time-based Mind/Brain/Body/Sensing/ Environment relationalities.

• Information Processing Structures, Sentience, Resilience and Intelligence

The initial goal is to make the Insight Engine function in such a way as to “point” to potential new research data across disciplinary boundaries by using advanced information processing, computational linguistics, a natural language API, and additional forms of AI acting as Micropeers (AI collaborators), to enable intelligent bridging of research questions and the development of new information paradigms through bisociation (after Arthur Koestler) and poly-association (Seaman). These I_E information systems support researchers, empowering them to access relevant transdisciplinary information from the database and to contribute, over time, to the higher-order goal of articulating a functional Neosentient Model. Such a model is informed by many intellectual perspectives and transdisciplinary conversations facilitated by the I_E system, a listserv, and future information-oriented conferences. The Insight Engine embodies a series of intelligent processing structures, visualization systems, and the mapping of relationalities related to the corpus of papers, books, media objects, keywords, abstracts, diagrams, etc. (initially textually structured, with pattern-recognition visual and sonic systems integrated later to help build and navigate the database), and helps outline the articulation of a very new variety of bio-algorithm informed by the human body.¹ Bio-informatic processing structures are to be abstracted and then re-articulated in a biomimetic manner in the Neosentient model. This dynamic combinatoric, self-organizing system seeks to be resilient and interactive with the environment, building new knowledge up somewhat as humans do, through pattern-flows of multi-modal sense perturbations, as well as incorporating a layering of other potential learning systems. Meta-levels of self-observation, and the development of language to articulate such contextual learning, are central to the embodiment of the system.

• A New Combinatoric

N-dimensional bio-algorithm cognitive behavior is approached through a series of information-oriented processes. Central is to define all of the entailment structures that inform the emergent arising of sentience in the human (new, incomplete territory), and to seek to abstract those into an autonomous robotic system. The system will bring together a series of technologies from the research of diverse scientists and cyberneticists, and from the study of complex systems, to help map this time-based set of relationalities that bridges mind / brain / body, multi-modal sensing systems, and environment. The notion here is to devise, by studying the body, a self-organising bio-algorithm of combinatoric algorithms that will be derived from mind / brain / body / environment relationalities and from the sentience/consciousness that arises out of the body's interoperative functionality. This would necessitate moving back to exploring the biomimetic, as opposed to the purely functional, aspects of AI production. No single discipline of science, the humanities, and/or the arts can tackle such a difficult information-related problem set. A special transdisciplinary team of teams would need to arise out of the use of I_E. This overarching research team (or set of teams) would potentially consist of groups of specialists from a series of fields who would also learn enough about the other member fields to be able to talk across disciplines. Conversation would be central to the ongoing development of this variable bio-algorithmic network. Perhaps an earlier version of this kind of thinking was witnessed in the Biological Computer Lab headed by Heinz von Foerster, 1958-1976. Historical items related to the topic areas would also be included in the database. Perhaps one first must define a set of Boundary Objects. This approach is articulated in Susan Leigh Star's 'The Structure of Ill-Structured Solutions: Boundary Objects and Heterogeneous Distributed Problem Solving', in M. Huhns and L. Gasser (eds), Readings in Distributed Artificial Intelligence (Menlo Park, CA: Morgan Kaufmann, 1989).

• Research areas for the Insight Engine 2.0

Each research area will have a Micro-peer; these include the following (although new research areas will be added as needed): Neosentience; N-dimensional Combinatoric Bio-algorithm development; Bodily entailment structures; Mindful Awareness – self-observation; 2nd-order Cybernetics; Neuroscience; Neuroscience and the arts; AI and the arts – Computational Creativity; Biomimetics; The Connectome; AI; AI and Ethics; EI; The Biological Computer Lab (Cybernetics and 2nd-order Cybernetics); Science Fiction; The History of AI; Bridge Building between disciplines; Transdisciplinarity – A Multi-perspective Approach to Knowledge Production; Information – new approaches; Approaches to Learning – Conversation Theory, etc.; Robotics and situated knowledge; Computational Intuition; Android Linguistics (Donahue); related new forms of mathematics; synthetic emotions; embodied computation.

The research team consists of Professor Bill Seaman, PhD, Computational Media, Arts and Cultures, Duke University; John Herr, Duke Office of Information Technology; Dev Seth, Computer Science student, Duke University; Ashley Kwon, Computer Science student, Duke University; Quran Karriem, PhD student, CMAC, Duke University; and Mingyong Chen, PhD student, UC San Diego.


1 Rössler, O., Seaman W. (2011) Neosentience / The Benevolence Engine, Intellect Press, London.

14:20-14:45 UTC

Sun 12th Sep

4. Mind, Nature, and Artificial Magic

Rossella Lupacchini

Department of Philosophy and Communication Studies, University of Bologna, Italy

Abstract

The ambition to invent a machine for the ‘perfect imitation’ of mind appears to flow as a logical consequence from the ambition to invent a device for the ‘perfect imitation’ of nature. From perspective drawing to photography, the Western science of art has taken advantage of mechanical means, such as lenses and mirrors, to replicate our visual experience of nature. Its main concern has always been to capture the ‘magic’ of nature in the ‘synthetic instant’ of a picture. Accordingly, the main achievement of ‘visual art’ might be described as sight-enhancing. In a similar way, the science of logic has taken outstanding advantage of computing machines to simulate our thinking experience. For the ‘art of reasoning’, however, the main goal appears to be nothing less than to capture the ‘nature’ of mind in artificial magic. Does it make sense to pursue this goal? To what extent can the cognitive experience due to artificial magic be regarded as life-enhancing?

1. Mimesis: the demiurge's invention

2. Seeing, knowing, and creating

3. Imitation game: from Leonardo to Turing

De-constructing the mind of nature

Encoding the power of imagination

Ways of intelligence: living, mechanical

4. Light, matter, and will to form

Existence as a quantum phenomenon

Knowledge as a mind-nature entanglement

Information as a quantization of the meaning field

14:45-15:10 UTC

Sun 12th Sep

5. Non-Diophantine arithmetics as a tool for formalizing information about nature and technology

Michele Caprio, Andrea Aveni and Sayan Mukherjee

Duke University, Durham, NC, USA

Abstract

The theory of non-Diophantine arithmetics is based on a more general structure called an abstract prearithmetic. A generic abstract prearithmetic A is defined as A = (A, +_A, ×_A, ≤_A), where A ⊆ ℝ₊ is the carrier of A (that is, the set of the elements of A), ≤_A is a partial order on A, and +_A and ×_A are two binary operations defined on the elements of A. We conventionally call them addition and multiplication, but they can be any generic operations. Abstract prearithmetic A is called weakly projective with respect to a second abstract prearithmetic B = (B, +_B, ×_B, ≤_B) if there exist two functions g: A → B and h: B → A such that, for all a, b ∈ A, a +_A b = h(g(a) +_B g(b)) and a ×_A b = h(g(a) ×_B g(b)). Function g is called the projector and function h the coprojector for the pair (A, B). The weak projection of the sum a +_B b of two elements of B onto A is defined as h(a +_B b), while the weak projection of the product a ×_B b of two elements of B onto A is defined as h(a ×_B b). Abstract prearithmetic A is called projective with respect to abstract prearithmetic B if it is weakly projective with respect to B with projector f⁻¹ and coprojector f. We call f, which has to be bijective, the generator of the projector and coprojector. Weakly projective prearithmetics depend on two functional parameters, g and h (one, f, if they are projective) and recover Diophantine arithmetic (the conventional arithmetic, called Diophantine after Diophantus, the Greek mathematician who first approached this branch of mathematics) when these functions are the identity. To this extent, we can consider non-Diophantine arithmetics as a generalization of the Diophantine one. A complete account of non-Diophantine arithmetics can be found in the recent book by Burgin and Czachor [4]. In this work, we consider three classes of abstract prearithmetics, {A_M}_{M≥1}, {A_{-M,M}}_{M≥1}, and {B_M}_{M≥0}.
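The projective construction can be sketched in a few lines of Python (our own illustration, not code from the paper): given a bijective generator f with inverse f⁻¹, the operations are a ⊕ b = f(f⁻¹(a) + f⁻¹(b)) and a ⊗ b = f(f⁻¹(a) × f⁻¹(b)), and the identity generator recovers ordinary arithmetic.

```python
import math

def projective_prearithmetic(f, f_inv):
    """Operations of a prearithmetic projective with respect to ordinary
    arithmetic: projector f_inv, coprojector f, so that
    a (+) b = f(f_inv(a) + f_inv(b)) and a (x) b = f(f_inv(a) * f_inv(b))."""
    add = lambda a, b: f(f_inv(a) + f_inv(b))
    mul = lambda a, b: f(f_inv(a) * f_inv(b))
    return add, mul

# The identity generator recovers conventional (Diophantine) arithmetic.
add_id, mul_id = projective_prearithmetic(lambda x: x, lambda x: x)
assert add_id(2, 3) == 5 and mul_id(2, 3) == 6

# A non-identity generator, f(x) = x**2 on the non-negative reals, yields
# a non-Diophantine addition in which 1 (+) 1 = (sqrt(1) + sqrt(1))**2 = 4,
# while multiplication stays numerically ordinary: (sqrt(a)*sqrt(b))**2 = a*b.
add_sq, mul_sq = projective_prearithmetic(lambda x: x ** 2, math.sqrt)
print(add_sq(1, 1))  # 4.0
```

The specific classes {A_M}, {A_{-M,M}}, and {B_M} studied in the abstract use particular generators; the quadratic generator above is only a toy choice to make the mechanism visible.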
These classes of prearithmetics are useful for describing some natural and computer-science-related phenomena for which the conventional Diophantine arithmetic fails: for example, the fact that adding one raindrop to another gives one raindrop, or that putting a lion and a rabbit in a cage, one will not find two animals in the cage later on (cf. [2] and [5]). They also allow avoiding the introduction of inconsistent Diophantine arithmetics, that is, arithmetics for which one or more Peano axioms are at the same time true and false. For example, in [1] Rosinger points out that electronic digital computers, when operating on the integers, act according to the usual Peano axioms for ℕ plus an extra ad hoc axiom, called the machine infinity axiom. The machine infinity axiom states that there exists M ∈ ℕ far greater than 1 such that M + 1 = M. Clearly, the Peano axioms and the machine infinity axiom together give rise to an inconsistency, which can be easily avoided by working with the prearithmetics we introduce. In addition, {A_M}_{M≥1} and {A_{-M,M}}_{M≥1} allow one to overcome the version of the paradox of the heap (or sorites paradox) stated in [3, Section 2]. The setting of this variant of the sorites paradox is adding one grain of sand to a heap of sand, and the question is, once a grain is added, whether the heap is still a heap. We show that every element A_{M'} of {A_M}_{M≥1} is a complete totally ordered semiring, and it is weakly projective with respect to R₊, the conventional Diophantine arithmetic of positive real numbers. Furthermore, we prove that the weak projection of any series Σ_n a_n of elements of ℝ₊ = [0, ∞) is convergent in each A_{M'}. This is an exciting result because it allows a scholar who needs a particular series to converge in their analysis to reach that result by performing a weak projection of the series onto A_{M'}, and then continuing the analysis in A_{M'}.
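Rosinger's machine infinity axiom concerns integer machine arithmetic, but an analogous effect is easy to exhibit with IEEE-754 doubles (our own illustration, not an example from [1]): above 2^53 the gap between consecutive representable doubles exceeds 1, so there is an M with M + 1 = M.

```python
# Above 2**53, consecutive IEEE-754 doubles are more than 1 apart, so
# adding 1 leaves M unchanged -- a "machine infinity" in the sense that
# M + 1 = M, contradicting the Peano axioms for the naturals.
M = 2.0 ** 53
print(M + 1 == M)   # True: M + 1 rounds back to M
print(M + 2 == M)   # False: M + 2 is exactly representable and larger
```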
The second class, {A_{-M,M}}_{M≥1}, allows one to overcome the paradox of the heap and is such that every element A_{-M',M'} is weakly projective with respect to the conventional real Diophantine arithmetic R = (ℝ, +, ×, ≤). The weak projection of any non-oscillating series Σ_n a_n of terms in ℝ is convergent in A_{-M',M'}, for all M' ≥ 1. The drawback of working with this class is that its elements are not semirings, because the addition operation is not associative. The last class, {B_M}_{M≥0}, is such that every element B_{M'} is a semiring and is projective with respect to the conventional real Diophantine arithmetic R = (ℝ, +, ×, ≤). The weak projection of any non-oscillating series Σ_n a_n of terms in ℝ is convergent in B_{M'}, for all M' ≥ 0. The drawback of working with this class is that its elements do not overcome the paradox of the heap.


References

[1] Elemer E. Rosinger. On the Safe Use of Inconsistent Mathematics. Available at arXiv:0811.2405, 2008.

[2] Hermann von Helmholtz. Zählen und Messen, in Philosophische Aufsätze. Fues's Verlag, Leipzig, pages 17–52, 1887.

[3] Mark Burgin and Gunter Meissner. 1 + 1 = 3: Synergy Arithmetic in Economics. Applied Mathematics, 08(02):133–144, 2017.

[4] Mark Burgin and Marek Czachor. Non-Diophantine Arithmetics in Mathematics, Physics and Psychology. World Scientific, Singapore, 2020.

[5] Morris Kline. Mathematics: The Loss of Certainty. Oxford University Press, New York, 1980.


15:10-15:35 UTC

Sun 12th Sep

6. Ontological information - information as a physical phenomenon

Roman Krzanowski

The Pontifical University of John Paul II, Krakow, Poland

Abstract:

Ontological information is information conceived as a natural phenomenon, i.e., as an element of the physical world. We denote such information with the predicate “ontological”, as in “ontological information”, as well as by the symbol “IO” and the indexed term “informationO”. The properties attributed to ontological information in (Krzanowski, 2020) reflect its physical nature. We claim that ontological information is characterized by epistemic neutrality (EN), physical embodiment (PE), and formative nature (FN). The property of epistemic neutrality1 (EN) means that informationO has no meaning by itself. From specific ontological information, an agent may derive something (some value) that has significance for that agent’s existence or functioning. The same ontological information may result in different meanings for different agents. Likewise, this information may have no meaning at all for some agents. An agent can in principle be any system, whether organic or artificial, if it senses ontological information, or the organization of natural phenomena. Natural agents (i.e., biological systems) have been shaped by nature to perceive nature’s properties, including organizational properties; artificial agents are of our own making, of course, so in a sense they too have biological origins. We are creations of nature, not separated from it. We are built to interpret nature, not to falsify it, and evolution assumes this, because organisms that fail to correctly perceive the environment are unlikely to survive. The same general idea guides the building of our artificial agents. The property of physical embodiment (PE) means that informationO is a physical phenomenon. It may thus be conceptualized within a matter–energy–information complex2 (one that indirectly implies Aristotelian hylemorphism), and it is fundamental to nature (i.e., whatever exists physically contains information).
The claim that “ontological information is a physical phenomenon” means several things. Ontological information is not an abstract concept in the way that mathematical objects, ideas, or thoughts are abstract. Ontological information does not belong to the Platonic realm of Forms in either the classic or neo-Platonic sense. Ontological information is real, observable, and measurable. Thus, we can claim that information exists much like other physical phenomena exist, because it exhibits the same class of properties (quantifiability, operational properties) as physical phenomena do. Furthermore, it seems that whatever exists in a physical sense contains information, so there is no physical phenomenon without information.

1 A concept is “epistemically neutral” when it does not have intrinsic epistemic import, or in other words, it does not mean anything by itself.

2 The matter-energy-information complex has the status of a conjecture, not of a theory.

Finally, the property of formative nature (FN) means that information is responsible for the organization of the physical world: information is expressed through structures/forms and the organization of things3, but information is not a structure itself. Organization is a fairly broad concept that may be, and is, interpreted as structure, order, form, shape, or rationality (if perceived by a cognitive entity). We do not posit that information is structure, although this has been claimed several times. The problem with such a statement is that we do not know precisely what a structure is, what kinds of structures we would associate with information, or how this association would be achieved. Information is certainly not the visible structure or shape of an object, but we concede that the shape or structure of an object is how information discloses itself or how we sense its presence. Thus, the shape of a tea cup is not information, but information is expressed in the shape of a tea cup. A more thorough discussion of informationO is provided in (Krzanowski, 2020; 2020a; 2020b). An interpretation of ontological information (in particular its causal potentiality) in the context of the general theory of information (GTI) developed in (Burgin, 2010) is provided in (Burgin and Krzanowski, 2021).

References

Burgin, M. (2010). Theory of Information. New York: World Scientific Publishing.

Burgin, M. and Krzanowski, R. (2021). Levels of Ontological Information. Proceedings, IS4SI 2021.

Krzanowski, R. (2020). Ontological Information. Investigation into the Properties of Ontological Information. Ph.D. thesis. UPJP2. Available at http://bc.upjp2.edu.pl/dlibra/docmetadata?id=5024&from=&dirids=1&ver_id=&lp=2&QI=

Krzanowski, R. (2020a). What Is Physical Information? Philosophies, 5, 10.3390/philosophies5020010.

Krzanowski, R. (2020b). Why Can Information Not Be Defined as Being Purely Epistemic? Philosophical Problems in Science (Zagadnienia Filozoficzne w Nauce), 68, pp. 37-62.

3 The synonymity of the terms “structure”, “form”, “organization”, and “information” should not be accepted a priori despite the fact that these terms are often used synonymously.

15:35-15:50 UTC

Sun 12th Sep

7. Materialization and Idealization of Information

Mark Burgin

University of California, Los Angeles, CA, USA

Abstract:

Information is an important phenomenon in nature, society, and technology. This situation has brought some researchers to the conclusion that information is physical (cf., for example, (Landauer, 2002)). At the same time, according to the general theory of information (GTI), information belongs to the ideal World of Structures, which is the scientific incarnation of Plato's World of Ideas or Forms (Burgin, 2011; 2017). This place of information looks contradictory both to the assumption that information is physical and to the fact of the incessant presence of information in nature, society, and technology. The goal of this work is to resolve this paradox by explaining the connections between the ideal and the material and by further developing the approach to materialization introduced in (Burgin and Markov, 1991).

We begin with the global structure of the world. It is described by the Existential Triad of the World, which consists of three components: the Physical (Material) World, the Mental World, and the World of Structures (Burgin, 2010). The Physical (Material) World represents the physical reality studied by natural and technological sciences, the Mental World encompasses different forms and levels of mentality, and the World of Structures consists of various kinds and types of ideal structures.

While the Physical and Mental Worlds are accessible to human senses, the World of Structures can be reached only by the intellect, as Plato predicted (Burgin, 2017). To better understand the World of Structures, it is helpful to perceive its necessity for completing and elucidating the interplay between the two sensible Worlds. As science grows in sophistication and the phenomena it studies grow in complexity, the world of ideal structures becomes indispensable for a correct understanding of the Physical and Mental Worlds. Starting with physicists, who understood the key role of abstract mathematics in physics, people will begin to comprehend the necessity and expediency of the structural reality.

According to the Ontological Principle O2 of the GTI and its additional forms (Burgin, 2010), information plays the same role in the World of Structures as energy plays in the Physical (Material) World.

However, according to the Ontological Representability Principle of the GTI, for any portion of information I, there is always a representation Q of this portion of information for a system R. Often this representation is material, and as a result, being materially represented, information becomes physical. Consequently, a physical representation of information can be treated as the materialization of this information.

Moreover, according to the Ontological Embodiment Principle of the GTI, for any portion of information I, there is always a carrier C of this portion of information for a system R. This carrier is, as a rule, material, which makes information physical to an even greater degree. A physical carrier of information can also be treated as the materialization of this information, or more exactly, a materialization of the second level.

Now we can see that the paradox of an ideal essence such as information having impact in physical reality is caused by the very popular confusion of information per se with its representations and its carriers.

The difference between a portion of information, its representation, and its carrier is demonstrated by the following example. Let us consider a letter/text written/printed on a piece of paper. Then the text is a representation of information in this text while the piece of paper is a carrier of this information. Note that the text is not information because the same information can be represented by another text.

In this context, the materialization of information has two meanings. First, materialization of information is the process of representing this information by a material object/system. Second, it is a material/physical representation of this information, that is, a result of the materialization process.

Note that material/physical representations of information can be natural or artificial. For instance, DNA is a natural representation and carrier of information, while a computer memory is an artificial carrier of information and the state of a computer memory is an artificial representation of information.

There is also the process of information idealization, which goes in the opposite direction and is reciprocal, but not always inverse, to the materialization of information. Both of these processes are formally represented as named sets and chains of named sets. This allows the utilization of named set theory as a tool for the exploration of information materialization and idealization.

References

Burgin, M. Theory of Information: Fundamentality, Diversity and Unification, World Scientific, New York/London/Singapore, 2010

Burgin, M. (2011) Information in the Structure of the World, Information: Theories & Applications, v.18, No. 1, pp. 16 - 32

Burgin, M. (2017) Ideas of Plato in the context of contemporary science and mathematics, Athens Journal of Humanities and Arts, v. 4, No. 3, pp. 161 – 182

Burgin, M. and Markov, K. A formal definition of materialization, in Mathematics and Education in Mathematics, Sofia, 1991, pp. 175-179

Landauer, R. (2002) Information is Inevitably Physical, in Feynman and Computation: Exploring the limits of computers, Westview Press, Oxford, pp. 76-92

15:50-16:00 UTC

Sun 12th Sep

General discussion


MONDAY, SEPTEMBER 13

Theoretical and Foundational Problems in Information TFPI 2021

Block 1:

4:00-7:00 UTC

Mon 13th Sep

TFPI

Mark Burgin

4:00-4:25 UTC

Mon 13th Sep

8. Paradigm Shift, an Urgent Issue for the Studies of Information Discipline

Yixin Zhong

Beijing University of Posts and Telecommunications, Beijing 100876, China

Abstract:

1. The Definition of the Paradigm for a Scientific Discipline

The paradigm of a scientific discipline is defined as the integrity of the scientific view and the methodology of that discipline, in which the scientific view defines what the essence of the discipline is, while the related methodology defines how to determine the scientific approach to the studies of the discipline. Thus, the paradigm of a scientific discipline delimits the norm that studies in that discipline should follow. As a result, studies within a given category of scientific discipline should employ that category's own paradigm: the studies of a physical discipline should employ the paradigm for physical disciplines, whereas the studies of an information discipline should employ the paradigm for information disciplines.

2. The Role The Paradigm Plays in Scientific Studies

The paradigm as defined above leads the studies of the related scientific discipline. As a matter of fact, whether the studies of a discipline succeed or fail in practice depends on whether the paradigm employed for the discipline is correct. If the proper paradigm for the information discipline is employed, the studies of the information discipline will succeed no matter how difficult the information discipline is. Otherwise, the studies of the information discipline will encounter a series of misunderstandings and setbacks.

3. The Real Situation Concerning The Paradigm in Information Discipline

A very surprising discovery of an in-depth investigation is that the paradigm employed so far for the study of the information discipline has been the one for a physical discipline (see Table 1), not the one for an information discipline (see Table 2 below).


Table 1 Major Features for the Paradigm of Physical Discipline

  1. Scientific View:

    1. Object for study: Physical with no subjective factor

    2. Focus of study: The structure of physical system

    3. Property of the object: Deterministic in nature

  2. Methodology

    1. General approach: Divide and conquer

    2. Means for description/analysis: Purely formal methods

    3. Means for decision-making: Form matching


Table 2 Major Features for the Paradigm of Information Discipline

  1. Scientific View:

    1. Object for study: Info process within subject-object interaction

    2. Focus of study: To achieve the goal of a double win (subject-object)

    3. Property of the object: Non-deterministic in nature

  2. Methodology

    1. General approach: Methodology of Information Ecology

    2. Means for description/analysis: Form-utility-meaning trinity

    3. Means for decision-making: Understanding-based


The use of the paradigm of the physical discipline in the study of the information discipline is surely the root of all the problems related to the study of the information discipline. The major problems existing in the studies of the information discipline include at least the following: (1) diversity without unity in theory, separation among the studies of information in various sectors, and separation between the studies of information and the studies of intelligence, all due to the physical methodology of “divide and conquer”; (2) merely formal analysis in the studies of information, knowledge, and intelligence, without considering the high importance of subject factors, also due to the physical methodology of “purely formal analysis”.

4. Conclusion

The paper presents an appeal that the paradigm practically employed so far in the studies of the information discipline worldwide should be shifted as soon as possible.


4:25-4:50 UTC

Mon 13th Sep

9. Structural Analysis of Information: Search for Methodology

Marcin J. Schroeder

Global Learning Center, Tohoku University, Sendai, 980-8576, Japan

Abstract

The apparent terminological simplicity is the most deceiving disguise of complex concepts and questions whose real depth is obscured by our habits of thinking. There are many words which we use every day, believing that we know their meaning well, until someone asks us to explain them. This applies to the question about the meaning of the concept of information. The term “information” is among the most frequently used, in a myriad of contexts, but the concept it represents escapes all efforts to provide a commonly acceptable definition. There is nothing unusual about information being elusive. There have been many fundamental concepts generating never-ending discussions. Maybe, as was suggested already by C. E. Shannon, the most celebrated pioneer of the study of information, we need more than one concept of information. Another possibility is that the study of information, understood in multiple and not always precisely defined ways and in very diverse contexts, should continue, and the future will show which definition provides the most adequate and inclusive description of information, or how to integrate the multiple definitions into one acceptable to everyone.

However, if we want to maintain the identity and integrity of the study of information in its further development in the absence of a uniform definition, we have to establish methodological tools, not necessarily identical for all forms of inquiry into informational phenomena, but at least consistent and preferably allowing comparisons of the results of inquiries. Thus, the methodological unity of the study of information, even if it may not be complete, should serve as a guiding ideal for its inquiries.

This work is not intended as a study of universal methodological tools for all possible forms of inquiry in diverse disciplines. Its main objective is to search for methodological tools for the study of information with a sufficient level of universality to relate studies of information within different disciplines. However, even with this much more restricted objective, it is necessary to clarify some misunderstandings present in the methodological analyses of practically all scientific disciplines and in all contexts.

The title of this contribution refers to the structural analysis of information as a distinctive methodological tool for two reasons. The first is that this form of inquiry is clearly underrepresented and inadequately developed in the study of information. The second, closely related reason is that there are many misconceptions about the distinction between different forms of inquiry, with surprisingly little attention paid to the role of structural analysis, not only in the study of information but in the majority of scientific and intellectual domains.

The latter, more general issue, present not only in the study of information, can be identified in the representative example of the relationship between quantitative, qualitative, and structural methodologies. The popular conviction of an apparently complementary, dichotomic opposition between the first two methodologies is based on misconceptions about the role of mathematics in general, and of numbers in particular, which are perpetuated in virtually all scientific inquiries. This mistaken view of the two methodologies and of their exclusive and universal role in all inquiries obscures the fact that both are just instances of structural analysis, in which mathematics can offer methodological tools going well beyond the present toolkit.

The fallacy of the opposition and complementarity of the quantitative and qualitative methodologies has its source in hidden assumptions that are very rarely recognized in scientific practice. Another source is the overextension of mathematical concepts, which have very specific and restricted meanings in mathematics, to applications in science where the conditions of their definitions are not satisfied or not considered.

An outstanding example of this type of confusion is the use of the concept of measure, which in scientific applications is frequently understood as an arbitrary assignment of real numbers to some set of not always clearly defined objects. This use of the term measure is very far from the understanding of the concept of a measure in mathematics. It would have been just a terminological inconsistency with mathematics, not an error, if this non-mathematical concept of measure were not mixed up with the mathematical concept when drawing conclusions from the results of inquiry. Very often the meaning of the term measure is simply not clarified. Sometimes the intention behind the use of the term is consistent with measure theory, but nothing is said about the related concepts of the theory whose absence makes the central concept meaningless. The reference to a measure without any clarification of how it is defined has as its consequence the hidden import of the structure on which it has to be defined to retain its mathematical meaning (a sigma ortho-algebra of measurable subsets of the measure space). Thus, there is usually a hidden structure associated with the subject of each study which serves as a tool for inquiry, but which is excluded from overt considerations.

If we decide to disregard the conditions in the mathematical concept of a measure and consider it simply as a real-valued function on some set S, then we define on S just an equivalence relation given by the partition of S into subsets of elements with equal values of the function. However, in this case we have a pure case of the qualitative methodology based on partitions of a set into equivalence classes, which can be identified with qualities or properties of the elements of S, but which can equally well be identified with numerical values. This shows that the distinction between the quantitative and qualitative methodologies is fuzzy. In both methodologies we assume, overtly or most frequently covertly, essentially the same structure of an equivalence relation imposed on the universe of our study. More importantly, in both cases, by imposing hidden mathematical structures on the subjects of our study, we actually carry out structural analysis involving equivalence relations. As long as the concept of a measure is not the one from measure theory, and a measure is simply an assignment of a numerical value, the distinction between the two methodologies is rather conventional and is based on the way equivalence relations are presented. The engagement of the mathematical concept of a measure adds to the consideration the additional structure of a non-trivial ortho-lattice of measurable subsets.
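The point that quantitative and qualitative labels covertly impose the same equivalence-relation structure can be made concrete with a small sketch (our own illustration, not from the contribution): whether the labelling function returns numbers or qualities, all it induces on S is a partition into classes of equal value.

```python
from collections import defaultdict

def partition_by_value(S, label):
    """Partition S by the equivalence a ~ b iff label(a) == label(b) --
    the only structure an arbitrary assignment of values imposes on S."""
    classes = defaultdict(set)
    for x in S:
        classes[label(x)].add(x)
    return set(map(frozenset, classes.values()))

words = {"apple", "ant", "bee", "cat"}
# A 'quantitative' label (word length) and a 'qualitative' label (first
# letter) induce structures of exactly the same kind: partitions of the set.
by_length = partition_by_value(words, len)              # {apple} vs {ant, bee, cat}
by_letter = partition_by_value(words, lambda w: w[0])   # {apple, ant}, {bee}, {cat}
```

Only when the full measure-theoretic apparatus (a sigma-algebra of measurable subsets) is engaged does the quantitative case acquire structure genuinely beyond such a partition.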

This work goes beyond a critical review of the hidden but omnipresent elements of structural methodology in the study of information. There is a legitimate question about the positive, creative aspect of recognizing the role of structural analysis. The source of the conceptual tools necessary for further development of the structural methodology of information can be identified in invariance with respect to transformations, the main methodological strategy of physics and several other natural sciences. Surprisingly, this idea was completely missing in the work of Shannon, but it was already present in the 1928 paper by R.V.L. Hartley, cited by Shannon in the footnote on the first page. Hartley did not refer directly to structural analysis, but used invariance as a tool to derive and interpret his formula for the amount of information, simpler than Shannon's.

4:50-5:15 UTC

Mon 13th Sep

10. Quality of information

Krassimir Markov

ITHEA®, Sofia, Bulgaria

Abstract:

Introduction. This paper presents the concept of “quality of information” in the frame of the General Information Theory (GIT). The development of GIT started in the period 1977-1980. The first publication on GIT appeared in 1984 [Markov, 1984]. Further publications on GIT are listed in [Markov et al, 2007].

Entity. In our examination, we consider the real world as a space of entities. The entities are built of other entities, connected by relationships. The entities and the relationships between them form the internal structure of the entity they build.

Interaction. Building a relationship between entities is a result of contact among them. During the contact, one entity impacts the other entity and vice versa. In some cases the opposite impact may not exist but, in general, the contact may be considered as two mutually opposite impacts occurring at the same time. The set of contacts between entities forms their interaction.

Reflection. When contact is established, the impact of an entity changes, temporarily or permanently, the internal structure and/or functionality of the impacted entity, at one or several levels. The change of the structure and/or functionality of an entity due to the impact of another entity we denote by the notion "reflection". The entities of the world interact continuously. After one interaction, another may be realized; in this case, the changes received by an entity during the first interaction may be reflected by the new entity. This means that secondary (transitive) reflection exists. One special case is external transitive self-reflection, where the entity reflects itself as a secondary reflection during an external interaction. Some entities have the capacity for internal self-reflection. Internal self-reflection is possible only at very high levels of organization, i.e. for entities with a very large and complicated structure.

INFOS. In what follows we pay attention to complex entities with possibilities for self-reflection. To avoid confusion with the concepts of subject, agent, animal, human, society, humanity, living creature, etc., we use the abstract concept "INFOS" to denote each of them, as well as all artificial creatures with features similar to theirs. An Infos has the possibility to reflect reality via receptors and to operate with the received reflections in its memory. The opposite is also possible: via effectors, an Infos can realize in reality some of the (self-)reflections from its consciousness.

Information and Information Expectation. If the following diagram exists and is commutative, then it represents all reflection relations: 1) in reality: entities and their reflections; 2) in consciousness: mental reflections of real or mental entities; 3) between reality and consciousness: perceiving data and creating mental reflections. In the diagram: 1) in reality, "s" is the source entity, "r" is a reflection in the recipient entity, and "e" is a mapping from s to r; 2) in the Infos' consciousness, "si" is a reflection of the source entity, "ri" is a reflection of the reflection of "s", and "ei" is a mapping from si to ri.

“si” is called “information expectation” and “ri” is called “information” about “s” received from the reflection “r”. Commonly, the reflection “r” is called “data” about “s”.

Quality of information. "si" and "ri" may coincide or differ. In the second case, some "distance" between them exists. The nature of the distance may differ according to the kind of reflections. In any case, the smaller this distance, the more qualitative the information "ri". In other words, the "quality of information" is the measure of the distance between the information expectation and the corresponding information.
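The abstract defers formulas to a later paper, so the following is only one hypothetical way to realize the idea: represent the expectation si and the information ri as numeric feature vectors, take a Euclidean distance, and let quality decrease as the distance grows (the representation and the 1/(1+d) form are our assumptions):

```python
import math

def distance(si, ri):
    """Euclidean distance between an information expectation si and
    the received information ri, both represented here -- purely as
    an illustrative choice -- by numeric feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(si, ri)))

def quality(si, ri):
    """A possible quality measure: 1 when expectation and information
    coincide, decreasing toward 0 as they diverge."""
    return 1.0 / (1.0 + distance(si, ri))

expected = (1.0, 0.0, 2.0)   # si: the information expectation
received = (1.0, 0.0, 2.0)   # ri: coincides with si here
assert quality(expected, received) == 1.0
```

Any monotonically decreasing function of the distance would serve equally well; the essential point is that coincidence of si and ri yields maximal quality.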

Conclusion. This paper was aimed at introducing the concept of "quality of information" from the point of view of the General Information Theory. Formulas for computing the quantity and quality of information will be given in another paper.

References

Kr. Markov. A Multi-domain Access Method. Proc. of Int. Conf. "Computer Based Scientific Research". Plovdiv, 1984. pp. 558-563.

Kr. Markov, Kr. Ivanova, I. Mitov. Basic Structure of the General Information Theory. IJ ITA, Vol. 14, No. 1, 2007. pp. 5-19.


5:15-5:40 UTC

Mon 13th Sep

11. A QFT Approach to Data Streaming in Natural and Artificial Neural Networks

Gianfranco Basti* and Giuseppe Vitiello**

* Faculty of Philosophy, Pontifical Lateran University, 00120 Vatican City

**Department of Physics “E. R. Caianiello”, University of Salerno, 84084 Fisciano (Salerno), Italy

Abstract:

During the last twenty years much research has been done on probabilistic machine learning algorithms, especially in the field of artificial neural networks (ANN), for dealing with the problem of data streaming classification and, more generally, for real-time information extraction/manipulation/analysis from (infinite) data streams. For instance, sensor networks, healthcare monitoring, social networks, and financial markets are among the main sources of data streams, often arriving at high speed and always requiring real-time analysis, above all for individuating long-range and higher-order correlations among data, which are continuously changing over time. Indeed, the standard statistical machine learning algorithms in ANN models, starting from their progenitor, the so-called backpropagation (BP) algorithm – based on the "sigmoid function" acting on the activation function of the neurons of the hidden layers of the net for detecting higher-order correlations in the data, and on the stochastic gradient descent (GD) algorithm for the (supervised) refresh of the neuron weights – were developed for static, even if huge, databases ("big data"). They are therefore systematically inadequate and unadaptable for the analysis of data streaming, i.e., of dynamic databases characterized by sudden changes in the correlation length among the variables (phase transitions), and hence by unpredictable variation of the number of significant degrees of freedom of the probability distributions.
From the computational standpoint, this means the infinitary character of the data streaming problem, whose solution is in principle unreachable by a TM, either classical or quantum (QTM). Indeed, for dealing with the infinitary challenge of data streaming, the exponential increase of computational speed obtained by quantum machine learning algorithms is not very helpful, whether using "quantum gates" (QTM) or "quantum annealing" (the quantum Boltzmann machine (QBM)), both objects of intensive research during recent years. In the case of ANNs, the improvement the Boltzmann machine (BM) learning algorithm brings to GD is that the BM uses "thermal fluctuations" for jumping out of the local minima of the cost function (simulated annealing), so as to avoid the main limitation of the GD algorithm in machine learning. In this framework, the advantage of quantum annealing in a QBM is that it uses "quantum vacuum fluctuations" instead of the thermal fluctuations of classical annealing for bringing the system out of shallow (local) minima, by means of the "quantum tunnelling" effect. This outperforms thermal annealing, especially where the energy (cost) landscape consists of high but thin barriers surrounding shallow local minima. However, despite the improvement that, at least in some specific cases, a QBM can give for finding the absolute minimum size/length/cost/distance among a large though finite set of possible solutions, the problem of data streaming remains, because in this case the finitary supposition does not hold. As the analogy with the coarse-graining problem in statistical physics emphasizes very well, the search for the global minimum of the energy function makes sense only after the system has performed a phase transition.
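The role of thermal fluctuations mentioned above can be illustrated with a toy one-dimensional simulated annealing run; this is a generic sketch of the technique, not the authors' architecture, and the cost function, cooling schedule, and parameters are our illustrative choices:

```python
import math
import random

def simulated_annealing(f, x0, steps=20000, t0=2.0):
    """Thermal fluctuations: uphill moves are accepted with
    probability exp(-dE/T), so the walker can escape local minima,
    which plain gradient descent cannot do."""
    random.seed(0)
    x = best = x0
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-3          # linear cooling schedule
        cand = x + random.gauss(0, 0.5)          # random proposal
        d_e = f(cand) - f(x)
        if d_e < 0 or random.random() < math.exp(-d_e / t):
            x = cand                             # accept (possibly uphill)
            if f(x) < f(best):
                best = x
    return best

# Double well: local minimum near x = -1 (f ~ +0.5),
# global minimum near x = +2 (f ~ -1).
f = lambda x: (x + 1) ** 2 * (x - 2) ** 2 - 0.5 * x
x_best = simulated_annealing(f, x0=-1.0)   # start trapped in the local well
```

Started in the shallow well, the walker crosses the barrier while the temperature is high and settles near the global minimum; at T = 0 the acceptance rule degenerates to pure descent and the escape mechanism disappears.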
That is, physically, after a sudden change in the correlation length among variables, generally under the action of an external field, has determined a new way in which they are aggregated, defining the significant number of degrees of freedom N characterizing the system statistics after the transition. In other terms, the infinitary challenge implicit in data streaming is related to phase transitions so that, from the QFT standpoint, this is the same phenomenon as the infinite number of degrees of freedom of the Haag theorem, characterizing quantum superposition in QFT systems in far-from-equilibrium conditions. This requires the extension of the QFT formalism to dissipative systems, inaugurated by the pioneering works of N. Bogoliubov and H. Umezawa. The Bogoliubov transform, indeed, makes it possible to map between different phases of the boson and fermion quantum fields, making dissipative QFT – differently from QM and from QFT in their standard (Hamiltonian) interpretation for closed systems – able to calculate across phase transitions. Indeed, inspired by the modeling of natural brains as many-body systems, the QFT dissipative formalism has been used to model ANNs [1, 2]. The mathematical formalism of QFT requires that for open (dissipative) systems, like the brain, which is in a permanent "trade" or "dialog" with its environment, the degrees of freedom of the system (the brain), say A, be "doubled" by introducing the degrees of freedom Ã describing the environment, according to the coalgebraic scheme: A → A × Ã. Indeed, Hopf coproducts (sums) are generally used in quantum physics to calculate the total energy of a superposed quantum state. In the case of a dissipative system, the coproducts represent the total energy of a balanced state between the system and its thermal bath.
In this case, because the two terms of the coproduct are not mutually interchangeable, as they are in the case of closed systems (where the sum concerns the energy of two superposed particles), we are led to consider the non-commutative q-deformed Hopf bialgebras, out of which the Bogoliubov transformations involving the A, Ã modes are derived, and where the q-deformation parameter is a thermal parameter strictly related to the Bogoliubov transform [3]. These transformations induce phase transitions, i.e., transitions through physically distinct spaces of states describing different dynamical regimes in which the system can sit. The brain is thus continuously undergoing phase transitions (criticalities) under the action of the inputs from the environment (Ã modes). Brain activity is therefore the result of a continual balancing of the fluxes of energy (in all its forms) exchanged with the environment. The balancing is controlled by the minimization of the free energy at each step of time evolution. Since fluxes "in" for the brain (A modes) are fluxes "out" for the environment (Ã modes), and vice versa, the Ã modes are the time-reversed images of the A modes (Wigner distribution): they represent the Double of the system. In this way, by the doubling of the algebras – and then of the state spaces, and finally of the Hilbert spaces – the Hamiltonian canonical (closed) representation of a dynamical system can be recovered also in the case of a dissipative system, by inserting into the Hamiltonian the degrees of freedom of the environment (thermal bath). From the theoretical computer science (TCS) standpoint, this means that the system satisfies the notion of a particular type of automaton, the Labelled State Transition Machine (LTM), i.e., the so-called infinite-state LTM, coalgebraically interpreted and used in TCS for modelling infinite streams of data [2, 4].
Indeed, the doubling of the degrees of freedom (DDF) {A, Ã} just illustrated, characterizing a dissipative QFT system, acts as a dynamic selection criterion of admissible (because balanced) states (minimum of the free energy). Effectively, it acts as a mechanism of "phase locking" between the data flow (environment) and the system dynamics. Moreover, each system-environment entangled (doubled) state is univocally characterized by a dynamically generated code 𝒩, or dynamic labelling (memory addresses). In our model, indeed, an input triggers the spontaneous breakdown of the symmetry (SBS) of the system's dynamical equations. As a result of SBS, massless modes, called Nambu-Goldstone (NG) modes, are dynamically generated. The NG bosons are quanta of long-range correlations among the system's elementary components, and their coherent condensation value 𝒩 in the system ground state (the least-energy state or vacuum state |0⟩, which in our dissipative case is a balanced, or zero-sum, energy state with T > 0) describes the recording of the information carried by that input, indexed univocally (labeled) in 𝒩. Coherence denotes that the long-range correlations are not destructively interfering in the system ground state [2]. The memory state turns out to be, therefore, a squeezed coherent state: |0(θ)⟩_𝒩 = Σ_j w_j(θ) |w_j⟩_𝒩, to which the Glauber information entropy Q directly applies, with |w_j⟩ denoting states of A and Ã pairs, and θ the time- and temperature-dependent Bogoliubov transformation parameter. |0(θ)⟩_𝒩 is, therefore, a time-dependent ground state at finite temperature T > 0; it is an entangled state of the modes A and Ã, which provides the mathematical description of the unavoidable interdependence between the brain and its environment. Coherence and entanglement imply that quantities relative to the A modes depend on the corresponding ones of the Ã modes.
To conclude, the natural implementation of such a quantum computational architecture for data streaming machine learning based on the DDF principle is an optical ANN using the tools of optical interferometry, as in the applications discussed in [3]. The fully programmable architecture of this optical chip indeed allows one "to depict" over coherent light waves as many interference figures as we like and, above all, to keep their phase coherences stable in time, so as to allow the implementation of quantum computing architectures (either quantum gates or squeezed coherent states) working at room temperature. In our application to data streaming analysis, the DDF principle can be applied recursively, using mutual information as a measure of phase distance, as an optimization tool for minimizing the input-output mismatch. In this architecture, indeed, the input to the net acts not on the initial conditions of the net dynamics, as in ANN architectures based on statistical mechanics, but on the boundary conditions (thermal bath) of the system, so as to implement the architecture of a net in unsupervised learning, as required by the data streaming challenge.

References

[1] E. Pessa and G. Vitiello, "Quantum dissipation and neural net dynamics," Biochem. and Bioenerg., vol. 48, pp. 339-342, 1999.

[2] G. Basti, A. Capolupo and G. Vitiello, "Quantum Field Theory and Coalgebraic Logic in Theoretical Computer Science," Prog. in Bioph. & Mol. Biol., Special Issue: Quantum information models in biology: from molecular biology to cognition, vol. 130, Part A, pp. 39-52, 2017.

[3] G. Basti, G. G. Bentini, M. Chiarini, A. Parini and al., "Sensor for security and safety applications based on a fully integrated monolithic electro-optical programmable microdiffractive device," in Proc. SPIE 11159, Electro-Optical and Infrared Systems: Technology and Applications XVI, Strasbourg, France, 2019, pp. 1115907 (1-12).

[4] J. J. M. Rutten, "Universal coalgebra: a theory of systems," Theor. Comp. Sc., vol. 249, no. 1, pp. 3-80, 2000.


5:40-6:05 UTC

Mon 13th Sep

12. Arithmetic loophole in Bell's theorem: Overlooked threat to entangled-state quantum cryptography

Marek Czachor

Institute of Physics and Computer Science, Gdańsk University of Technology, Gdańsk, Poland

Abstract:

Bell's theorem is supposed to exclude all local hidden-variable models of quantum correlations. However, an explicit counterexample shows that a new class of local realistic models, based on generalized arithmetic and calculus, can exactly reconstruct the rotationally symmetric quantum probabilities typical of two-electron singlet states. Observable probabilities are consistent with the usual arithmetic employed by macroscopic observers, but the counterfactual aspects of Bell's theorem are sensitive to the choice of hidden-variable arithmetic and calculus. The model is classical in the sense of Einstein, Podolsky, Rosen, and Bell: elements of reality exist and probabilities are modeled by integrals of hidden-variable probability densities. The probability densities have a Clauser-Horne product form typical of local realistic theories. However, neither the product, nor the integral, nor the representation of rotations are the usual ones. The integral has all the standard properties, but only with respect to the arithmetic that defines the product. Certain formal transformations of integral expressions found in the usual proofs à la Bell do not work, so standard Bell-type inequalities cannot be proved. The system we consider is deterministic, local-realistic, and rotationally invariant; observers have free will and detectors are perfect; hence the system is free of all the canonical loopholes discussed in the literature.
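For orientation, the singlet correlations that such a model must reproduce can be sketched via the standard CHSH arithmetic; this is the conventional quantum-mechanical calculation, not the author's generalized-arithmetic construction, and the angles are the usual optimal settings:

```python
import math

def E(a, b):
    """Quantum correlation of a two-electron singlet state for spin
    measurements along directions a and b (angles in radians):
    E(a, b) = -cos(a - b), the rotationally symmetric prediction."""
    return -math.cos(a - b)

# Conventional CHSH settings: a = 0, a' = pi/2, b = pi/4, b' = 3*pi/4.
a, a2, b, b2 = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
# Standard Bell-type reasoning bounds S by 2 for local hidden
# variables; the singlet correlations give S = 2*sqrt(2).
assert math.isclose(S, 2 * math.sqrt(2))
```

The abstract's claim is precisely that the derivation of the bound 2 relies on arithmetical manipulations of the hidden-variable integrals that fail under a generalized arithmetic, while the observable value 2√2 remains untouched.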


References

M. Czachor, Acta Phys. Polon. A 139, 70 (2021)

M. Czachor, Found. Sci. 25, 971-985 (2020)

6:05-6:30 UTC

Mon 13th Sep

13. Advanced NLP procedures as premises for the reconstruction of the idea of knowledge

Rafal Maciag

Institute of Information Studies, Jagiellonian University, Krakow

Abstract:

The purpose of the presented reasoning is to show the natural, historical process of changing the disposition of knowledge from the classical situation described by Plato to a reconstructed situation in which the disposer/owner/user can be any dynamic complex system that interacts with the environment. It can be assumed that the latter possibility has been at least partially implemented experimentally for language in the form of technical NLP procedures. The aforementioned process is the result of the simultaneous development of metamathematical reflection and the directly following, related process of developing the understanding of language as a representation of the world. Both of these processes stabilized the idea of the existence of world-independent descriptive and analytical systems, i.e. systems that do not meet the conditions of any reference, or systems in which this reference is specific and indirect. The representative of the first possibility is metamathematics; of the second, language. Such an interpretation of language opened the way to the emergence of various approaches of a generally constructivist character, i.e. variously defining the linguistic system's participation in representing reality, leading to the highlighting and emphasizing of its particularity and locality in a historical and spatial sense. This is an extensive reflection developing in two separate approaches: the hermeneutic (philological) one and the one based on the concept of discourse. The closing of this road, and a kind of revolution, should be seen in the appearance of artificial systems generating original, intelligible, and meaningful text in NLP procedures, e.g. GPT-3, which meet the previously loosened condition of containing knowledge in the light of the aforementioned linguistic reflection. Such a possibility is expressis verbis included in the theory of discourse.
Since any text that is syntactically correct, intelligible, and meaningful can be considered a container of knowledge in the light of text theory, the key question becomes the way and conditions of the existence of such knowledge and the source of its origin in the case of texts generated by machines, e.g. by advanced NLP algorithms. This role can be fulfilled by a model of textual knowledge completely isolated from the human. Ultimately, breaking this barrier opens the possibility of interpreting knowledge of a much broader nature. This situation requires a reinterpretation of knowledge and the way it exists, although it also revives old problems such as truth or meaning. The answer to this need may be the discursive theory of knowledge, which can also be generalized to knowledge gathered and articulated in any non-linguistic way.

6:30-7:00 UTC

Mon 13th Sep

General Discussion


CONTRIBUTIONS FROM

Symmetry, Structure, and Information Conference SIS 2021

NON PLENARY

7:30-8:00 UTC

Mon 13th Sep

TILINGS FOR CONJOINED ORIGAMI CRANES USING LESS SYMMETRIC QUADRILATERALS

Takashi YOSHINO

8:00-8:30 UTC

Mon 13th Sep

DEMONSTRATION OF THE CONEPASS TO CONSTRUCT THE GOLDEN ANGLE

Akio HIZUME

8:30-9:00 UTC

Mon 13th Sep

ARTISTIC INTUITION: HOW SYMMETRY, STRUCTURE AND INFORMATION CAN COLLIDE IN ABSTRACT PAINTING

Marina ZAGIDULLINA

9:00-9:30 UTC

Mon 13th Sep

FUTURE ETHNOMATHEMATICS FOR A ‘NEW BLETCHLEY’

Ted GORANSON

9:30-10:00 UTC

Mon 13th Sep

FRACTAL-LIKE STRUCTURES IN INDIAN TEMPLES

Sreeya GHOSH, Paul SANDIP and Chanda BHABATOSH

10:00-10:30 UTC

Mon 13th Sep

INNER ANGLES OF TRIANGLES IN PARAMETER SPACES OF PROBABILITY DISTRIBUTIONS

Takashi YOSHINO

CONTRIBUTIONS FROM

Information in Biologically Inspired Computing Architectures Conference (BICA)

Block 2:

13:00-16:00 UTC

Mon 13th Sep


BICA

David Kelley

INVITED LECTURE

13:00-14:00 UTC

Mon 13th Sep

14. Toward a Unified Model of Cognitive Functions

Pei Wang

Temple University

Abstract:

NARS (Non-Axiomatic Reasoning System) provides a unified model for cognitive functions. The system is designed to work in situations where it has insufficient knowledge and resources with respect to the problems to be solved, so it must be adaptive and use whatever is available to get the best solutions obtainable at the moment. NARS represents various types of knowledge in a conceptual network that summarizes the system's experience, and constantly self-organizes the network to better meet the demands. Different aspects of this uniform self-organizing process can be seen as cognitive functions, such as reasoning, learning, planning, predicting, creating, guessing, proving, perceiving, acting, communicating, decision making, problem solving, etc. NARS has been mostly implemented, and the system has produced preliminary results as expected.

INVITED LECTURE

14:00-15:00 UTC

Mon 13th Sep

15. A Nested Hierarchy of Analyses: From Understanding Computing as a Great Scientific Domain, through Mapping AI and Cognitive Modeling & Architectures, to Developing a Common Model of Cognition

Paul Rosenbloom

USC Institute for Creative Technologies

Abstract:

The hierarchy of disciplines that spans computing, AI and cognitive modeling, and cognitive architectures is analyzed in tandem to yield insights into each of these disciplines individually and to jointly illuminate the final, most focused one. Computing has the widest scope, characterized here in terms of a Great Scientific Domain that is akin to the physical, life and social sciences. Once this notion is introduced, the field is broken down according to the domains involved in its different segments and the types of relations that exist among these domains. AI, cognitive modeling and (biologically inspired) cognitive architectures are, in particular, characterized in this manner. With the focus then narrowed to these three areas, an analysis of them in terms of four general dichotomies and their cross-products induces maps over their underlying technologies that yield both general insight into the contours of the areas themselves and more specific insight into the structure of cognitive architectures. With the focus further narrowed to just this latter topic, a Common Model of Cognition is presented that abstracts away from the contents of any particular architecture toward a community consensus concerning what must be in any architecture that is to support a human-like mind.

PANEL DISCUSSION

15:00-16:00 UTC

Mon 13th Sep

PANEL DISCUSSION Moderated by Peter Boltuc (University of Illinois, Springfield & Warsaw School of Economics)

Panelists: David Kelley (AGI Laboratory, Seattle, USA) and Roman Yampolskiy (University of Louisville)


TUESDAY, SEPTEMBER 14

CONTRIBUTIONS FROM

Symmetry, Structure, and Information Conference SIS 2021

Block 1:

4:00-7:00 UTC

Tue 14th Sep

Symmetry - SIS

Denes Nagy

INVITED LECTURE

4:00-5:00 UTC

Tue 14th Sep

16. THE DEVELOPMENT AND ROLE OF SYMMETRY IN ANCIENT SCRIPTS

Peter Z. REVESZ

Dep. Computer Science and Engineering, University of Nebraska-Lincoln (United States of America)

Abstract:

Many ancient scripts have an unexpectedly high number of signs that contain a type of symmetry where the left and right sides of the signs are mirrored along a central vertical line. For example, within our English alphabet, which is a late derivative of the Phoenician alphabet, the following letters have this type of symmetry: A, H, I, M, N, O, T, U, V, W, X and Y. That is, a total of 12/26 = 46.2% of the letters of the English alphabet have this type of symmetry. Similarly, the ancient Minoan Linear A script, which existed from about 1700 to 1450 BC, contains the following mirrored signs:
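The alphabet count above can be checked directly; a small sketch using the letter list given in the text:

```python
# Letters of the English alphabet whose glyphs are (approximately)
# mirror-symmetric about a central vertical axis, as listed above.
MIRRORED = set("AHIMNOTUVWXY")

ratio = len(MIRRORED) / 26
assert len(MIRRORED) == 12
assert round(100 * ratio, 1) == 46.2
```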





These 42 signs are about half of the most frequent signs in the Linear A script, which are estimated to number about 90. In this paper we try to answer the question: "Why is there such a high percentage of mirrored signs in ancient scripts?"

We believe that the unexpectedly high number of symmetric signs is due to a development of writing called boustrophedonic, literally 'as the ox goes', meaning that at the end of a line the next line continues right below the end and runs in the opposite direction. Hence a left-to-right line is followed by a right-to-left line, which is again followed by a left-to-right line, and so on. This is reminiscent of how oxen plow a plot of land. The main problem with boustrophedonic writing is that when we look at a particular line, we do not automatically know which way it should be read, unlike in modern English texts, where every line is read from left to right. As a modern example, suppose we would like to write 'GOD' in a row that is to be read from right to left. This looks like an easy task that can be done by simply writing 'DOG'. The problem is that the reader may not recognize that the row needs to be read from right to left, hence 'GOD' becomes 'DOG' for the reader. Ancient scribes compensated for this problem by vertically mirroring every nonsymmetric letter so that the orientation of the words would indicate the direction. Using this concept, instead of 'DOG', they would have written:

While boustrophedonic writing with mirroring of asymmetric signs is an attractive solution, and it occurs also in the Mycenaean Linear B script and the Indus Valley script, it creates the problem of having to know and correctly write the mirrored versions of the asymmetric signs. Many people make mistakes when writing mirrored letters and read mirrored letters much more slowly than ordinary text. We believe that these factors, combined with the observation that only a few frequently occurring asymmetric signs are enough to indicate the writing direction, led to the development of symmetric forms for most signs. We test this hypothesis by considering earlier scripts in which there are no examples of boustrophedonic writing. For example, the Cretan Hieroglyphic script, a predecessor of the Linear A script, contains significantly fewer symmetric signs, and only the following signs of the Phaistos Disk, which may belong to an even earlier layer of scripts, are symmetric:
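The boustrophedonic layout discussed above can be sketched in plain ASCII (which cannot mirror individual glyphs, so only the line reversal is shown; the helper function is our illustrative construction):

```python
def boustrophedon(text, width):
    """Lay the text out boustrophedonically: split it into lines of
    the given width and reverse every second line, 'as the ox goes'.
    (The mirroring of individual asymmetric glyphs, which ancient
    scribes also performed, cannot be shown in plain ASCII.)"""
    lines = [text[i:i + width] for i in range(0, len(text), width)]
    return [line if k % 2 == 0 else line[::-1]
            for k, line in enumerate(lines)]

rows = boustrophedon("GODISGOOD", 3)
# → ["GOD", "GSI", "OOD"]: the middle row "ISG" is written reversed,
# and without mirrored glyphs a reader cannot tell its direction.
```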



That is, only 13/45 = 28.9% of the Phaistos Disk signs are symmetric. Hence, on the island of Crete, the percentage of script signs with symmetry nearly doubled within a few centuries, showing the importance of symmetry in writing.


5:00-5:30-6:00 UTC

Tue 14th Sep

ENANTIOMORPHIC TALKS ON SYMMETRY (Contributions from Symmetry, Structure and Information Conference)

5:00-5:30-6:00 UTC

Tue 14th Sep

17. Symmetry and Information: An odd couple (?)

Dénes Nagy

International Society for the Interdisciplinary Study of Symmetry (Hungary and Australia)

Symmetry (from the Greek syn + metron, "common measure"), structure (from the Latin structūra, "fitting together, building"), and information (from the Latin īnfōrmātiō, "formation, conception, education") are scholarly terms that have ancient roots but gained new meanings in modern science. All of them have also played important roles in interdisciplinary connections, even linking science and art.

We argue that symmetria - asymmetria could have played a relevant role at the birth of mathematics as an abstract science with a deductive methodology (we present a partly new hypothesis uniting those of Szabó and of Kolmogorov); then we discuss the related, but different, modern meaning-family of symmetry.

(1) The structural approach gained special importance in geometric crystallography from the mid-19th century (the 14 Bravais lattices), which, using symmetry considerations, led to a major breakthrough in the 1890s: the complete list of all possible ideal crystal structures, that is, the 230 space symmetry groups (Fedorov, Schoenflies, and Barlow). In the early 20th century, the focus on structures in linguistics (Saussure) also inspired later developments in the social sciences, the intensive study of relations, and structuralism as a methodology (Lévi-Strauss, Piaget, and others).

From the mid-1930s, a group of French mathematicians followed a similar path in order to present the "new math" (the Bourbaki group).

(2) The theory of communication led to the study of information in a mathematical-statistical context and finally to a method for measuring information (Hartley, Shannon, Wiener), which became useful for the emerging computer science of the mid-20th century. Information theory was then also used for the study of aesthetic questions (Moles, Bense).

Looking back, we may see interesting changes:

- The original Greek concept of symmetria was related to measurement, but the usual modern understanding of symmetry implies rather a yes/no question: an object or a process is either symmetric or not. We argue that it is important to go back to the roots and to consider symmetry measures. In fact, the concept of dissymmetry (as the lack of some possible elements of symmetry), which gained special importance in structural chemistry (Pasteur), theoretical physics (P. Curie), and crystallography (Shubnikov and Koptsik), pointed in such a direction.

- The concept of information was originally not related to measurement, but the modern mathematical approach introduced measures in bits, via the number of yes/no questions (Hartley, Shannon). On the other hand, the meaning of information was lost in the mathematical-statistical approach. There were important works related to the meaning of information (MacKay, Bar-Hillel and Carnap, Shreider). It would be important to unite these two approaches and modify them according to new needs. Quantum computing needs a new information theory related not to the bit but to the qubit; here we may need symmetry considerations (cf. the Bloch sphere representation).

In some sense, the “odd couple” of symmetry (as an ordering principle) and information (knowledge based on measurements) came together in solving the Maxwell-demon problem. In this thought experiment, which seemingly violates the law of entropy, the demon, as the doorman between the two chambers of a closed container filled with gas, introduces new order by separating the high-speed and the low-speed gas molecules, opening the door always at just the right time. This method would solve our everyday heating and cooling problems. The demon, however, must use information, specifically by measuring the speed of the molecules (Szilard). Thus, it would be a very expensive way of heating and cooling. Another example where symmetry and information work together: the vertices of some regular and semi-regular polyhedra inscribed into a sphere give the centers of the circles in the densest packing of a given number of equal circles on this sphere (the Tammes problem), which is important for spherical coding. The term information asymmetry is well established in economic science. The fact that it may create an imbalance of power in transactions and, in the worst case, market failure led to various studies and eventually to the Nobel Prize of three economists (Akerlof, Spence, and Stiglitz).

We suspect that some generalized symmetry and information concepts, which are required by the recent developments in science and art, may help each other.

5:00-5:30-6:00 UTC

Tue 14th Sep

18. Antinomies of Symmetry and Information

Marcin J. Schroeder

IEHE, Tohoku University, Sendai, Japan

This is a proposal for the resolution of several apparent antinomies within the studies of information, of symmetry, and of the mutual relationship between symmetry and information. A selection of examples of such antinomies is followed by a nutshell overview of their solution.

The earliest example of the opposition in views on information can be found in the critical reaction to Shannon’s foundational work, which denied the importance of the semantic aspects of communication. This denial exposed his work to the objection that it is not about information at all. The issue was never completely resolved, although it faded with the increased popularity of naive claims that the problem disappears if we demand in the definition that, whatever information is, it has to be true.

The relationship between the measure of information given by Shannon in the form of entropy and the measure called negentropy, introduced by Schrödinger as a magnitude which, although non-negative, has a value opposite to the non-negative entropy, is antinomial. This curious pairing, although apparently sufficiently harmless not to attract much attention, is the tip of the iceberg of a much deeper internal opposition in the view of information. Shannon’s view of information is tied to the recipient of a message, i.e. it is the observer’s view. Schrödinger’s negentropy is a numerical characteristic of the acquired freedom in forming organized structure within the system.

An example representing the antinomies of symmetry has the form of an opposition of two oppositions. One of them is between the artificial, intentional character of symmetry associated with human aesthetic preference, and the natural character of asymmetry associated with spontaneous, unconstrained generation of forms. The other, reversed opposition is provided by biological evolution, in which the steps in the transition to higher forms of life are marked by diverse forms of symmetry breaking, leading from highly symmetric proto-organismic simple systems to the complex human organism with its asymmetric functional specialization.

Finally, there is an example of the opposition in views on the relationship between information and symmetry, with its main axis between the claim that information has its foundation in asymmetry and the view that physics is essentially a study of symmetries, so that if information is physical we should base its study on the analysis of its symmetries. The former position originates in the Curie Principle that symmetric causes cannot have asymmetric effects, justifying the focus on asymmetry, as it can guide us to the actual causes of phenomena. The early expression of Bateson’s metaphor of information as “a difference which makes a difference” was in his explanation of the rules of biological asymmetry.

The elimination of these and other antinomies is based on the recognition of the two manifestations of information, selective and structural. The latter requires the involvement of symmetry understood as invariance with respect to groups of transformations. The key point is that the apparent antinomies of information, of symmetry, and of their relationship are consequences of the fallacious idea of asymmetry, which obscures the relations and transitions between diverse forms of symmetry.

PANEL DISCUSSION

6:00-7:00 UTC

Tue 14th Sep

PANEL DISCUSSION (Contributions from Symmetry, Structure and Information Conference)

19. Moderators’ Introduction to the Panel Discussion

Moderated by Dénes Nagy & Marcin J. Schroeder

Confirmed Panelists: Ted Goranson, Peter Revesz, Vera Viana, Takashi Yoshino

The theme of this discussion and the conference is Symmetry, Structure, and Information. Each of these three ideas escapes a commonly accepted definition. On the other hand, if you ask a passerby whether he or she understands the words symmetry, structure, information, most likely the answer would be “sure”. In the very unlikely case that the answer is “not at all, but I really would like to understand symmetry”, showing that the person knows a lot, then invite him or her to attend the Congress on Symmetry in Porto (https://symmetrycongress.arq.up.pt/) next July.

We can expect a question about the objectives of discussing the triad of symmetry, structure, and information. After all, if we add one more idea, complexity, then we have a collection of the most elusive and at the same time most important notions of philosophical and scientific inquiry. Isn’t it better to focus on each of them separately, and only after we have clear results of such inquiries to attempt a synthesis? This is the main question addressed to the panelists and the audience.

This question can be reformulated or complemented by the question about the importance, or its lack, of the mutual relationships between the ideas in the leitmotif of the conference. This includes importance for philosophical, theoretical, or practical reasons.

Finally, we can consider the question of what is missing from the picture painted by the title with its three ideas only. What ideas, notions, or concepts should we include, or even give priority to, in our inquiries into symmetry?

Contribution from Digital Humanities (Dighum) Conference

Block 2:

12:00-16:00 UTC

Tue 14th Sep


Dighum

Wolfgang Hofkirchner & Hans-Jörg Kreowski

KEYNOTE SPEECH

12:00-13:00 UTC

Tue 14th Sep

20. Digital Humanism

Julian Nida-Rümelin

Munich University, Germany

Digital Humanism, as I understand it, defends the human condition against transhumanistic transformations and animistic regressions. The core element of humanism is the idea of authorship: humans are the authors of their lives, they are responsible for what they believe and desire, reasonable insofar as they are affected by reasons, free insofar as they can evaluate and choose.

Humanism in ethics and politics strives to extend human authorship through education and social policy. Digitization changes the technological conditions of humanist practice, but it does not transform humans into cyborgs or establish machines as persons. Digital humanism rejects transhumanistic and animistic perspectives alike; it rejects the idea of homo deus, the human god who creates e-persons as friends and possibly one day as enemies.

In my talk I will outline the basic ideas of digital humanism and draw some ethical and political conclusions.

Biographic Note:

Julian Nida-Rümelin is a well-known philosopher who teaches at Munich University. He was president of the German Philosophical Association and is a member of the American Philosophical Association and the European Academy of Sciences and Arts, among others. He is Honorary Professor at the Humboldt University in Berlin. He has been a visiting professor in Minneapolis, St Gallen, Cagliari, Trieste, Rome (CNR), Turin, and elsewhere.

Nida-Rümelin was State Minister for Culture and Media in the first cabinet of Chancellor Gerhard Schröder.

His main areas of interest are the theory of rationality (practical and theoretical), ethics, and political philosophy. He has published more than a hundred scientific articles in these fields, as well as several books.

Nida-Rümelin also publishes outside academia on topics such as economics and ethics, the philosophy of architecture, and digitization. His book on “Digital Humanism”, written together with his wife Nathalie Weidenfeld, was awarded “Best Political Book of the Year 2018” in Austria.

KEYNOTE SPEECH

13:00-14:00 UTC

Tue 14th Sep

21. Humanism Revisited

Rainer E. Zimmermann

Institute for Design Science Munich e.V. / Clare Hall, UK - Cambridge

For a long while now, we have lived in an inflationary world of “-isms”, at least as far as intellectual discourse is concerned. Very often, this apparently generic designation (actually of Greek origin), mainly owed to the alleged conceptual strife for precision, notably in the analytic philosophy of Anglo-Saxon descent, is neither helpful nor even sufficiently redundant, if not superfluous altogether in the first place, in particular if accompanied by another fashionable adjective. (Unfortunately, I have to admit that I myself once introduced such a construction, when talking of “transcendental materialism” – but sometimes there is no other way available to achieve a minimal amount of clarification. This is probably the exception to the rule.) It turns out after all that most of the time, the meticulous differentiation of concepts is more apt to veil clarity and pretend an ostensive depth of reflection than to attain an actual gain in acquired knowledge.

This said, we cannot deny, however, that the concept of “humanism” is indeed one of the oldest and most omnipresent concepts, but also one of the most iridescent and enigmatic, aiming at a designation of our species while belonging to the afore-mentioned set of -isms. Nevertheless, as far as it goes, it is also a concept of considerable proper strength when pointing to fundamental components of what can be understood as a kind of basic ethics. In fact, humanism shares with ethics the disadvantage of being usually ill-defined and a source of misunderstandings. Hence, in order to avoid the re-invention of what is already known and sufficiently understood, it is always useful to ask for the conceptual origins of the concept in question. And this is what we will do in the present talk: we will look for the Greek and Roman origins, trace the development within the Renaissance framework, and then turn to more recent aspects. In the end, we will find that the origins of humanism provide a suitable entry into the epistemological foundations of living an adequately reflected life, despite the underlying suspicion of triviality that is always connected with the classificatory utilization of -isms. We also find that it is quite unnecessary to (re-)formulate new versions of humanism, because essentially the mentioned origins stay structurally invariant through space and time. (The same is actually true for ethics.)

PANEL DISCUSSION

14:15-15:45 UTC

Tue 14th Sep

PANEL DISCUSSION (Contribution from Dighum Conference)

Digital Humanism – How to shape digitalisation in the age of global challenges?

Panelists: Kirsten Bock, Yagmur Denizhan, José María Díaz Nafría, Rainer Rehak


WEDNESDAY, SEPTEMBER 15


Block 1:

4:00-7:00 UTC

Wed 15th Sep

IS4SI

Marcin Schroeder

  1. Keynote Jack Copeland 4:00-5:00 UTC

  2. Keynote Terry Deacon 5:00-6:00 UTC

  3. Keynote Yukio-Pegio Gunji 6:00-7:00 UTC

INVITED LECTURE

4:00-5:00 UTC

Wed 15th Sep

22. The Indeterminacy of Computation: Slutz, Shagrir, and the mind

B. Jack Copeland

University of Canterbury, Christchurch, New Zealand

Some computational systems have the counterintuitive property that it is indeterminate which mathematical function they compute. One might say that such a system simultaneously performs multiple computations, one or another of which may be selected by a second system accessing the first, or by a number of systems in a milieu of selecting systems surrounding the first. This talk outlines the potential role the concept of the indeterminacy of computation has to play in the philosophy of information and emphasizes its importance. I begin by examining the concept’s history. It seems first to have emerged in the work of the American electronic engineer Ralph Slutz, during the boom in computer development following the Second World War. Decades then passed, with only one or two tiny bursts of interest shown in the concept by philosophers — until, around 2000, the Israeli philosopher Oron Shagrir reinvented the concept and developed it in a series of important recent papers. In this overview of what is now an emerging field, I introduce a system of levels useful for describing computationally indeterminate systems, together with the concept of ‘computational multi-availability’ and the associated ‘trough model’ for exploiting computational indeterminacy. Turning to potential applications of computational indeterminacy, I sketch the role the concept can play in engineering and also in the philosophy of mind.

INVITED LECTURE

5:00-6:00 UTC

Wed 15th Sep

23. Falling Up: The Paradox of Biological Complexity

Terrence W. Deacon

UC Berkeley

There is an unexpected twist to the evolution of the complexity of biological information. A survey of living complexities at many levels suggests that it is often a spontaneous loss of capacity, a breakdown of individuation, and decreased complexity at one level that serendipitously contribute to the emergence of a more complex collective integrity at a higher level of scale, such as from individual cells to multicelled organisms like ourselves. This points to a critical non-Darwinian process that is the inverse of a progressive improvement of adaptation.

I will provide evidence gleaned from a wide range of phenomena to demonstrate that evolutionary complexification often results from a tendency to simplify, to do less, to shift the burden elsewhere if possible. It is an expression of Life’s least work principle. Life just backs into ever more intricate webs of dependency as it explores ways to avoid work. And this web of interdependencies only becomes more entangled with time—producing a complexity ratchet.

In particular, cases of hierarchic complexification may result from the displacement or externalization of functional information onto some outside influence, whether environmental or social. This reduces the selection maintaining the corresponding intrinsically provided information, which consequently becomes susceptible to spontaneous degeneration. With its degeneration, there is increasing selection to maintain access to this extrinsic source. As a result, duplication, displacement, degeneration, and complementation can build recursively, level upon level, from molecular information to organism adaptations to mental and social cognition. The result is that what we call 'information' tends to spontaneously complexify in depth, with higher levels dependent on and emergent from lower levels, thus making a single-level concept of information increasingly inadequate for use in biology.

INVITED LECTURE

6:00-7:00 UTC

Wed 15th Sep

24. Almost disjoint union of Boolean algebras appeared in Punch Line

Yukio Pegio Gunji

Department of Intermedia Art and Science, School of Fundamental Science and Technology, Waseda University, Tokyo, Japan

While humor is one of the most intriguing topics in human behavior, there is little mathematical research on the universal structure of humor. Recently, quantum psychology has attempted to describe how humor arises from uncertainty. Although quantum theory is a sufficient condition for describing humor, it is not a necessary one. Instead of starting from quantum theory, we start by describing a sequence of humorous text in stand-up comedy. The relation between preceding and subsequent words is expressed as a binary relation, which leads to a lattice via rough set approximation techniques. We show here that the binary relation found in stand-up comedies entails a lattice called an almost disjoint union of Boolean algebras, which is a general form of the orthomodular lattice. This implies that quantum-like structure can be obtained even if we do not start from quantum theory.

In a binary relation, a cat is distinguished from non-cat within a focused context, i.e., there is no relation between cat and non-cat in the corresponding sub-relation. However, a cat is mixed up with non-cat outside the focused context, i.e., there is a relation between cat and non-cat there. This ambiguity of relation and non-relation implies uncertainty with respect to indication. Since each element in a focused context is distinguished from all other elements, a focused context is expressed as a diagonal relation. If there are two contexts, 2 by 2 and 3 by 3, in a 5 by 5 symmetrical relation, the 5 by 5 relation consists of the 2 by 2 and 3 by 3 diagonal relations together with relations between all other pairs outside the diagonal relations. Using a rough set lattice approximation, the fixed points with respect to the upper and lower approximations based on the binary relation entail a lattice. In the case of the 2 by 2 and 3 by 3 diagonal relations, each diagonal relation entails a Boolean algebra, and the relations between the other pairs outside the diagonal relations glue the Boolean algebras together at the top and bottom, which entails an almost disjoint union of Boolean algebras.
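The approximation step described above can be sketched generically in Python. The block relation below is a toy stand-in assumed for illustration (elements {0, 1} and {2, 3, 4} as the two "contexts", diagonal within each context and fully related across contexts), not the authors' actual comedy data.

```python
def related(R, x):
    # set of elements related to x under binary relation R (a set of pairs)
    return {y for (a, y) in R if a == x}

def lower(R, X, U):
    # lower approximation: elements whose related set lies entirely inside X
    return {x for x in U if related(R, x) <= X}

def upper(R, X, U):
    # upper approximation: elements whose related set meets X
    return {x for x in U if related(R, x) & X}

U = {0, 1, 2, 3, 4}
# diagonal within each context, plus all cross-context pairs,
# as in the 5 by 5 relation described above
R = {(x, x) for x in U} \
    | {(x, y) for x in {0, 1} for y in {2, 3, 4}} \
    | {(x, y) for x in {2, 3, 4} for y in {0, 1}}

X = {0, 1}
print(sorted(lower(R, X, U)))  # []: no element is unambiguously inside X
print(sorted(upper(R, X, U)))  # [0, 1, 2, 3, 4]: every element may touch X

# with a purely diagonal relation, the ambiguity disappears (Boolean case)
Rd = {(x, x) for x in U}
print(lower(Rd, X, U) == upper(Rd, X, U))  # True
```

The sets X with lower(R, X, U) == upper(R, X, U) == X are the fixed points whose collection, ordered by inclusion, forms the lattice the abstract refers to.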

We here define a subjective probability for a lattice, which satisfies that if A ≤ B then P(A) ≤ P(B). This probability reveals that the probability of an element appearing before the punch line is very low and the probability of an element at the punch line is very high. This implies that tension in the audience increases before the punch line, since the audience cannot understand an event that can rarely happen, and that the tension is relaxed and released at the punch line, since the audience faces an event that can frequently happen. Humor is thus explained on the basis of quantum-like structure, without starting from quantum theory.

Block 2:

13:00-16:00 UTC

Wed 15th Sep

IS4SI

Gordana Dodig-Crnkovic

  1. Keynote Aaron Sloman 13:00-14:00 UTC

  2. Keynote Michael Levin 14:00-15:00 UTC

  3. Discussion 15:00-16:00 UTC (Moderator: Gordana Dodig-Crnkovic)

KEYNOTE LECTURE

13:00-14:00 UTC

Wed 15th Sep

25. Why don't hatching alligator eggs ever produce chicks?

Aaron Sloman

School of Computer Science, University of Birmingham, UK

[Retired, honorary professor of AI and Cognitive Science]

How does a child understand the impossibility of separating linked rings?

Neither ancient forms of spatial reasoning, used by mathematicians and engineers centuries before Euclid, nor the spatial abilities of intelligent species such as squirrels, crows, elephants, pre-verbal humans, and newly hatched creatures, like the young avocets in this video clip from a BBC Springwatch programme: https://www.cs.bham.ac.uk/research/projects/cogaff/movies/avocets/avocet-hatchlings.mp4 can be explained by fashionable neural net theories, since neural nets cannot be trained inside eggs, and they cannot represent, let alone prove, spatial impossibility or necessity. As Immanuel Kant pointed out in 1781, necessity and impossibility are not very high and very low probabilities. Recently developed logic-based formal reasoning mechanisms cannot explain the abilities of ancient humans, pre-verbal toddlers, and other intelligent species. The only remaining possibility seems to be that hitherto unnoticed chemistry-based mechanisms, required for biological assembly, also underpin these complex, species-specific forms of intelligence. Different hatchlings, such as baby alligators or turtles, have very different physical forms and very different capabilities. What chemical processes in eggs can determine both complex physical forms (including intricate internal physiology) and complex physical behaviours, unmatched by current robots? The production, within each individual, of bones, tendons, muscles, glands, nerve fibres, skin, hair, scales, or feathers, along with intricate networks of blood vessels, nerve fibres and other physiological structures, is clearly chemistry-based, and far more complex than the chemistry-based behaviours of shape-changing organisms, such as slime molds. The combination of complexity, compactness, energy efficiency, and speed of production of processes in an egg is also unmatched by human-designed assembly lines. Early stages of gene expression are well understood, but not the later processes producing species-specific forms of intelligence in eggs.
How are these extraordinarily complex assembly processes controlled? I'll suggest that they use virtual machines with hitherto unknown, non-space-occupying mechanisms, whose construction needs to be boot-strapped via multi-layered assembly processes far more complex than anything achieved in human-designed assembly plants, yet using far less matter and energy in their operation. Developing explanatory theories will need new forms of multi-disciplinary collaboration, with profound implications for theories of brain function, replacing current theories that cannot explain ancient mathematical discoveries. The mechanisms must be primarily chemistry-based, since neurons develop relatively late. We need an entirely new form of brain science, giving far more credit to chemical processes whose computational powers exceed those of both digital computers and neural nets. Is that why Alan Turing was exploring chemistry-based morphogenesis shortly before he died?

For more details see:

https://www.cs.bham.ac.uk/research/projects/cogaff/misc/sloman-morcom.html

KEYNOTE LECTURE

14:00-15:00 UTC

Wed 15th Sep

26. Morphogenesis as a model for computation and basal cognition

Michael Levin

Tufts Center for Regenerative and Developmental Biology, Tufts University

Embryos and regenerating systems produce very complex, robust anatomical structures and stop growth and remodeling when those structures are complete. One of the most remarkable things about morphogenesis is that it is not simply a feed-forward emergent process, but one that has massive plasticity: even when disrupted by manipulations such as damage or changing the sizes of cells, the system often manages to achieve its morphogenetic goal. How do cell collectives know what to build and when to stop? In this talk, I will highlight some important knowledge gaps about this process of anatomical homeostasis that remain despite progress in molecular genetics. I will then offer a perspective on morphogenesis as an example of a goal-directed collective intelligence that solves problems in morphospace and physiological space. I will sketch the outlines of a framework in which evolution pivots strategies to solve problems in these spaces and adapts them to behavioral space via brains. Neurons evolved from far more ancient cell types that were already using bioelectrical networks to coordinate morphogenesis long before brains appeared. I will show examples of our work to read and write the bioelectric information that serves as the computational medium of cellular collective intelligences, enabling significant control over growth and form. I will conclude with a new example that sheds light on anatomic plasticity and the relationship between genomically-specified hardware and the software that guides morphogenesis: synthetic living proto-organisms known as Xenobots. In sum, a new perspective on morphogenesis as an example of unconventional basal cognition unifies several fields (evolutionary biology, cell biology, cognitive science, computer science) and has many implications for practical advances in regenerative medicine, synthetic bioengineering, and AI.

PANEL DISCUSSION

15:00-16:00 UTC

Wed 15th Sep

PANEL DISCUSSION – DIALOGUE

Moderated by Gordana Dodig-Crnkovic

Panelists: Aaron Sloman, Michael Levin


THURSDAY, SEPTEMBER 16

Contributed by Morphological Computing of Cognition and Intelligence Conference MORCOM 2021

Block 1:

4:00-7:00 UTC

Thu 16th Sep

MORCOM


Gordana Dodig-Crnkovic/Marcin Schroeder

KEYNOTES

4:00-4:20 UTC

Thu 16th Sep

27. Cross-Embodied Cognitive Morphologies: Decentralizing Cognitive Computation Across Variable-Exchangeable, Distributed, or Updated Morphologies

Jordi Vallverdú

Universitat Autònoma de Barcelona, Catalonia, Spain

Most bioinspired morphological computing studies have started from a human analysis bias: considering cognitive morphology as encapsulated by one body, which of course can have enactive connections with other bodies, but which is defined by clear bodily boundaries. Such complex biological inspiration has directed the research agenda of a huge number of labs and institutions during the last decades. Nevertheless, there are other bioinspired examples, and even technical possibilities that go beyond biological capabilities (like constant morphological updating and reshaping, which calls for remapping cognitive performances). And despite the interest of swarm cognition (which includes superorganisms such as flocks, swarms, packs, schools, crowds, or societies) in such non-human-centered approaches, there is still a biological constraint: such cognitive systems have permanent bodily morphologies and only interact between similar entities. In all cases, and even considering amazing possibilities, such as the largest living organism on Earth, a honey fungus (Armillaria solidipes) measuring 3.8 km across in the Blue Mountains of Oregon, the possibility of thinking about cross-morphological cognitive systems has not been put on the table. Consider, for example, nests of intelligent drones as a single part of AI systems with other co-working morphologies. I am therefore suggesting the necessity of thinking about cross-embodied cognitive morphologies, more dynamical and challenging than any other existing cognitive system already studied or created.

INVITED SPEAKERS

4:20-4:40 UTC

Thu 16th Sep

28. Designing Physical Reservoir Computers

Susan Stepney

University of York, UK

Abstract:

Computation is often thought of as a branch of discrete mathematics, using the Turing model. That model works well for conventional applications such as word processing, database transactions, and other discrete data processing applications. But much of the world's computing power resides in embedded devices, sensing and controlling complex physical processes in the real world. Other computational models and paradigms might be better suited to such tasks. One example is the reservoir computing model, which can be instantiated in a range of different material substrates. This approach can support smart processing ‘at the edge’, allowing a close integration of sensing and computing in a single conceptual model and physical package.

As an example, consider an audio-controlled embedded device: it needs to sense sound input, compute an appropriate response, and direct that response to some actuator such as an electrical motor. We can have an unconventional solution using reservoir computing, which exploits the dynamics of a material to perform computation directly. One form of MEMS (microelectromechanical system) device is a microscopic beam that oscillates when it is accelerated and outputs an electrical signal. This kind of device is used in a car's airbag as an accelerometer to detect crashes. Such a device might be used in an audio-controlled system as follows. The incident sound waves make the beam vibrate (in an analogous way to how they make a microphone's diaphragm vibrate). This vibrating beam can be configured as a reservoir computer, where the non-linear dynamics of the complex vibrations are used directly to compute and classify the audio input. The electrical output from the device is this classified response, sent directly to the motor. Here, the sensor and the computer are the very same physical device, which also performs signal transduction (from sound input to electrical output), with no power-hungry conversion between analogue and digital signals, and no digital computing.

Such systems, implementable in a wide range of materials, offer huge potential for novel applications, of smart sensors, edge computing, and other such devices, reducing, and in some cases potentially eliminating, the need for classical digital central resources. Many novel materials are being suggested for such uses, leading to interdisciplinary collaborations between materials scientists, physicists, electronic engineers, and computer scientists. Before such systems can become commonplace, multiple technical and theoretical issues need to be addressed.

In order to ensure that these novel materials are indeed computing, rather than simply acting as physical objects, we need a definition of physical computing. I describe one such definition, called Abstraction-Representation Theory, and show how this framework can then be exploited to help design correctly functioning physical computing devices.

INVITED SPEAKERS

4:40-5:00 UTC

Thu 16th Sep

29. The Aims of AI: Artificial and Intelligent

Vincent C. Müller

TU/e (& U Leeds, Turing Institute)

Abstract:

Explanation of what ‘artificial’ means, esp. in contrast to ‘living’. First approximation of what ‘intelligent’ means, esp. in contrast to a discussion of the Turing Test: Do not focus on ‘intellectual intelligence’; do not focus on the human case; do not rely on behaviour alone. Intelligence vs. rational behaviour, e.g. instrumental vs. general intelligence. Formulation of an aim for full-blown AI – a computing system with the ability to successfully pursue its goals. This ability will include perception, movement, representation, rational choice, learning, as well as evaluation and revision of goals - thus morphology will contribute to the orchestration of intelligent behaviour in many but not all these cognitive functions.

5:10-5:30 UTC

Thu 16th Sep

30. Cognition Through Organic Computerized Bodies. The Eco-Cognitive Perspective

Lorenzo Magnani

University of Pavia, Pavia, Italy

Abstract:

Eco-cognitive computationalism sees computation in context, exploiting the ideas developed in those projects that have originated the recent views on embodied, situated, and distributed cognition. Turing’s original intellectual perspective had already clearly depicted the evolutionary emergence in humans of information, meaning, and of the first rudimentary forms of cognition, as the result of a complex interplay and simultaneous coevolution, in time, of the states of brain/mind, body, and external environment. This cognitive process played a fundamental heuristic role in Turing’s invention of the universal logical computing machine. It is by extending this eco-cognitive perspective that we can see that the recent emphasis on the simplification of cognitive and motor tasks generated in organic agents by morphological aspects implies the construction of appropriate mimetic bodies, able to render the accompanying computation simpler, according to a general appeal to the “simplexity” of animal embodied cognition.

Hence, in computation the morphological features are relevant. It is important to note that, in the case of morphological computation, a physical computer does not need to be intelligently conceived: it can be naturally evolved. This means that living organisms or parts of organisms (and their artefactual copies) can potentially execute information processing and can potentially be exploited to execute their computations for us. It is by further deepening and analyzing the perspective opened by these novel fascinating approaches that we see ignorant bodies as domesticated to become useful “mimetic bodies” from a computational point of view, capable of carrying cognition and intelligence. This new perspective shows how the computational domestication of ignorant entities can originate new variegated unconventional cognitive embodiments, thus joining the new research field of so-called natural computing. Finally, I hope it will become clear that eco-cognitive computationalism does not aim at furnishing a final and fixed definition of the concept of computation but stresses the historical and dynamical character of the concept.

5:30-5:50 UTC

Thu 16th Sep

31. Digital Consciousness and the Business of Sensing, Modeling, Analyzing, Predicting, and Taking Action

Rao Mikkilineni

Golden Gate University, US

Abstract:

“In brief, neither qualia nor free will seems to pose a serious philosophical problem for the concept of a conscious machine. …. The richness of information processing that an evolved network of sixteen billion cortical neurons provides lies beyond our current imagination. Our neuronal states ceaselessly fluctuate in a particularly autonomous manner, creating an inner world of personal thoughts. Even when confronted with identical sensory inputs, they react differently depending on our mood, goals, and memories.”

Stanislas Dehaene (2014) “Consciousness and the Brain: Deciphering How the Brain Codes our Thoughts” Penguin Books, New York, p. 265.

Preamble:

Recent advances in various disciplines of learning all point to a new understanding of how information processing structures in nature operate. Not only may this knowledge help us solve the age-old philosophical question of “mind-body dualism”, but it may also pave a path to designing and building self-regulating automata with a high degree of sentience, resilience and intelligence.

Classical computer science, with its origins in John von Neumann’s stored-program implementation of the Turing machine, has given us tools to decipher the mysteries of physical, chemical, and biological systems in nature. Both symbolic computing and neural network implementations have allowed us to model and analyze various observations (including both mental and physical processes) and use information to optimize our interactions with each other and with our environment. In turn, our understanding of the nature of information processing structures in nature, gained through both physical and computer experiments, is pointing us in a new direction: a computer science going beyond the Church-Turing thesis boundaries of classical computer science.

Our understanding of information processing structures, and of the internal and external behaviors driving their evolution in all physical, chemical and biological systems in nature, suggests the need for a common framework that accounts for the function, structure and fluctuations of these systems, composed of many autonomous components interacting with each other under the influence of physical, chemical and biological forces. As Stanislas Dehaene (2014, p. 162) points out: “What is required is an overarching theoretical framework, a set of bridging laws that thoroughly explain how mental events relate to brain activity patterns. The enigmas that baffle contemporary neuroscientists are not so different from the ones that physicists resolved in the nineteenth and twentieth centuries. How, they wondered, do the macroscopic properties of ordinary matter arise from a mere arrangement of atoms? Whence the solidity of a table, if it consists almost entirely of a void, sparsely populated by a few atoms of carbon, oxygen, and hydrogen? What is a liquid? A solid? A crystal? A gas? A burning flame? How do their shapes and other tangible features arise from a loose cloth of atoms? Answering these questions required an acute dissection of the components of matter, but this bottom-up analysis was not enough; a synthetic mathematical theory was needed.”

Fortunately, our understanding of the theory of structures and of information processing in nature points the way toward a theoretical framework that allows us to:

  1. Explain the information processing architecture gleaned from our studies of physical, chemical and biological systems, to articulate how to model and represent the cognitive processes that bind brain-mind-body behaviors; and

  2. Design and develop a new class of digital information processing systems that are autopoietic. An autopoietic machine is capable “of regenerating, reproducing and maintaining itself by production, transformation and destruction of its components and the networks of processes downstream contained in them.”

All living systems are autopoietic and have figured out a way to create information processing structures that exploit physical and chemical processes to manage not only their own internal behaviors but also their interactions with their environment, to assure their survival in the face of constantly changing circumstances. Cognition is an important part of living systems and is the ability to process information through perception using different sensors. Cognitive neuroscience has progressed in “cracking open the black box of consciousness” to discern how cognition works in managing information with neuronal activity. Functional magnetic resonance imaging, used very cleverly to understand the “function of consciousness, its cortical architecture, its molecular basis, and even its diseases”, now allows us to model the information processing structures that relate cognitive behaviors and consciousness.

In parallel, our understanding of the genome provides insight into information processing structures with autopoietic behavior. The gene encodes the processes of “life” in an executable form, and a neural network encodes various processes to interact with the environment in real time. Together, they provide a variety of complex adaptive structures. All of these advances throw different light on the information processing architectures in nature.

Fortunately, a major advance in a new mathematical framework allows us to model information processing structures and push the boundaries of classical computer science, just as relativity pushed the boundaries of classical Newtonian physics and statistical mechanics pushed the boundaries of thermodynamics, by addressing function, structure and fluctuations in the components constituting physical and chemical systems. Here are some of the questions we need to answer in the pursuit of designing and implementing an autopoietic machine with digital consciousness:

  • What is Classical Computer Science?

  • What are the Boundaries of Classical Computer Science?

  • What do we learn from Cognitive Neuroscience about the Brain and Consciousness?

  • What do we learn from the Mathematics of Named Sets, Knowledge Structures, Cognizing Oracles and Structural Machines?

  • What are Autopoietic Machines and How do they Help in Modeling Information Processing Structures in Nature?

  • What are the Applications of Autopoietic Digital Automata and how are they different from the Classical Digital Automata?

  • Why do we need to go beyond classical computer science to address autopoietic digital automata?

  • What are knowledge structures and how are they different from data structures in classical computer science?

  • How do the operations on the schemas representing data structures and knowledge structures differ?

  • How do “Triadic Automata” help us implement hierarchical intelligence?

  • How does an Autopoietic Machine move us to Go Beyond Deep Learning to Deep Reasoning Based on Experience and Model-based Reasoning?

  • What is the relationship between information processing structures in nature and the digital information processing structures?

  • What are the limitations of digital autopoietic automata in developing the same capabilities of learning and reasoning as biological information processing structures?

  • How do the information processing structures explain consciousness in living systems and can we infuse similar processes in the digital autopoietic automata?
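Several of these questions turn on the mathematics of named sets and knowledge structures. As a purely illustrative sketch (not the formalism the talk will present), a named set can be modeled as a fundamental triad of support, naming relation, and names, and two such triads can be chained into a rudimentary knowledge-structure link; the class and method names here are hypothetical:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class NamedSet:
    # Fundamental triad (X, f, N): support X, naming relation f, names N.
    support: frozenset
    naming: frozenset   # frozenset of (element, name) pairs
    names: frozenset

    def names_of(self, x):
        # All names the relation f assigns to element x.
        return {n for (y, n) in self.naming if y == x}

    def compose(self, other):
        # Chain two named sets when self's names lie in other's support,
        # yielding a derived naming relation (a toy knowledge-structure link).
        link = frozenset(
            (x, n2)
            for (x, n1) in self.naming
            for (y, n2) in other.naming
            if n1 == y
        )
        return NamedSet(self.support, link, other.names)


# Example: sensor readings -> symbolic labels -> derived categories.
readings = NamedSet(frozenset({"r1"}),
                    frozenset({("r1", "hot")}),
                    frozenset({"hot"}))
labels = NamedSet(frozenset({"hot"}),
                  frozenset({("hot", "warning")}),
                  frozenset({"warning"}))
chained = readings.compose(labels)
# chained now names the reading "r1" with the derived category "warning"
```

The point of the sketch is only that a "knowledge structure" built from chained triads carries derived meaning that a flat data structure (the bare support set) does not.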

In a series of blogs, we will attempt to search for answers to these questions, and in the process we hope to understand the new science of information processing structures, which will help us build a new class of autopoietic machines with digital consciousness.

However, as interesting as the new science is, more interesting still is the opportunity it gives us to transform current-generation information technologies without disturbing them, using an overlay architecture, just as biological systems evolved an overlay cognitive structure that provides global regulation while keeping local component autonomy intact and coping with rapid fluctuations in real time. We need to address the following questions:

  • How are knowledge structures different from current data structures, and how will database technologies benefit from autopoiesis to create a higher degree of sentience, resilience, and hierarchical intelligence at scale?

  • Will the operations on knowledge structure schemas improve on current database schema operations and provide a higher degree of flexibility and efficiency?

  • Today, most databases manage their own resources (memory management, network performance management, availability constraints, etc.), which increases complexity and lowers efficiency. Will autopoiesis simplify distributed database resource management and allow application workloads to become PaaS- and IaaS-agnostic and location-independent?

  • Can we implement autopoiesis without disturbing current operation and management of information processing structures?

5:50-6:10 UTC

Thu 16th Sep

32. On Leveraging Topological Features of Memristor Networks for Maximum Computing Capacity

Ignacio Del Amo and Zoran Konkoli

Chalmers University of Technology, Sweden

Abstract:

Memristor networks have been suggested as a promising candidate for achieving efficient computation in embedded low-power information processing solutions. The goal of the study was to determine the topological features that control the computing capacity of large memristor networks. As an overarching computing paradigm, we have used the reservoir computing approach. A typical reservoir computer consists of two parts. First, a reservoir transforms time-series data into the state of the network. This constitutes the act of computation. Second, a readout layer is used to label the state of the network, which produces the final output of the computation. The reservoir was implemented using a cellular automata model of a memristor network. The ideas were tested on a binary classification problem with the goal of determining whether a protein sequence is toxic or not.
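As a rough illustration of the two-stage pipeline described in the abstract, the sketch below drives a fixed random recurrent network (a generic echo-state-style stand-in for the authors’ memristor cellular automaton, which is an assumption of this sketch) with input sequences, then trains a linear readout by ridge regression. The toy task of labeling a sequence by the sign of its last element is purely illustrative and is not the protein-toxicity problem of the study:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
W_IN = rng.normal(size=N)                            # fixed random input weights
W = rng.normal(scale=0.9 / np.sqrt(N), size=(N, N))  # fixed random recurrent weights


def reservoir_state(sequence, leak=0.5):
    # Stage 1 (the "act of computation"): drive the fixed recurrent network
    # with the input time series; the final state encodes the sequence.
    x = np.zeros(N)
    for u in sequence:
        x = (1 - leak) * x + leak * np.tanh(W @ x + W_IN * u)
    return x


def make_example():
    # Toy task: label a random +/-1 sequence by the sign of its last element.
    seq = rng.choice([-1.0, 1.0], size=20)
    return seq, seq[-1]


train = [make_example() for _ in range(200)]
X = np.array([reservoir_state(s) for s, _ in train])
y = np.array([label for _, label in train])

# Stage 2: a linear readout, trained by ridge regression, labels the state.
ridge = 1e-2
w_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ y)

test_set = [make_example() for _ in range(100)]
acc = np.mean([np.sign(reservoir_state(s) @ w_out) == label
               for s, label in test_set])
```

Note the defining economy of reservoir computing: the reservoir weights are never trained; only the linear readout `w_out` is fit.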

DISCUSSION

6:20-7:20 UTC

Thu 16th Sep

DISCUSSION (Contribution from MORCOM Conference)


PLENARY PRESENTATIONS Contributed by Habits & Rituals Conference 2021

Block 2:

13:00-16:00 UTC

Thu 16th Sep


Habits & Rituals

Raffaela Giovagnoli

13:00-13:30 UTC

Thu 16th Sep

33. Habits and Rituals as Stabilized Affordances and Pregnances: A Semiophysical Perspective

Lorenzo Magnani

Department of Philosophy and Computational Philosophy Laboratory, University of Pavia, 27100 Pavia, Italy

Abstract:

The externalization/disembodiment of mind is a significant cognitive perspective able to unveil some basic features of abduction and creative/hypothetical thinking; its success in explaining the semiotic interplay between internal and external representations (mimetic and creative) is evident. This is also clear at the level of some intellectual issues stressed by the role of artifacts in ritual settings, in which interesting cases of creative affordances are also at play. I will stress the abductive activity of creating external artifacts or symbols in ritual events able to provide what we can call stabilized affordances. I contend that these ritual artifacts and events, and the habits they promote, can be usefully represented as endowed with stabilized affordances that “mediate”, and make available, the story of their origin and the actions related to them, which can be learned and/or re-activated when needed. In a semiophysical perspective, these stabilized affordances can be seen as pregnant forms [1]. Consequently, certain ritual artifacts (which in turn are intertwined with habits) afford meaning as kinds of “attractors”, as states of the eco-cognitive dynamical system in which individuals and collectives operate: they are states into which the eco-cognitive system repeatedly falls, states that are consequently stationary.

An example of ritual artifacts which can be considered “transformers of energy” can be seen in the behavior of some primitive peoples. They are formed by a process of semiotic delegation of meanings to external natural or artificial objects, for final practical purposes, through the building of external representations capable of affording humans. To give an example, a ritual artifact can be an analogue of female genitals which, through a reiterated dance, affords a pregnant habit shared by a collective, in turn mimicking the sexual act, suggesting that the hole is in reality a vulva, and referring to the implementation of some form of agriculture [2]. Another example refers to the violent scapegoating of animals in sacrificial rituals - like Abel’s sacrifice of an animal and Abraham’s sacrifice of a ram in place of his son - which are strongly related to the moral and religious pregnant meanings proper to monotheistic traditions. In the case of sacrifices of living organisms, Thom usefully observes that the ritual (and its consequent habit) is also related to the desire of “modifying/distorting” regular space-time situations, so that such rituals - paradoxically - aim at affording the environment in a desired way:

In order to realize these excited forms [of the regular space-time] it is necessary to breathe into the space a supplementary “energy”, or a “negentropy” which will channel a multitude of local fluctuations in a prescribed manner. Such was the aim of rituals and magical procedures, which frequently involved the sacrifice of living animals. It is as if the brutal destruction of a living organism could free a certain quantity of “negentropy” which the officiant will be able to use in order to realize the desired distortions of space-time. We can see how little the conceptual body of magic differs basically, from that of our science. Do we not know in the theory of the hydrogen atom for example, that the energy level of a stationary state of the electron is measured by the topological complexity of the cloud which this electron makes round the nucleus? In the same way, certain quantum theorists such as Wheeler, tried to interpret quantum invariants in terms of the topology of space-time. And, in General Relativity, the energy density of the universe is interpreted as a geometric curvature (pp. 135–136).

In the case of rituals of initiation, the target is similar, but it is not a matter of modifying something external, such as regular states of space-time; rather, the envisaged distortion assumes the form of changing something internal, that is, the desires, conferring on them a function through which the subject’s being identifies itself or announces itself as such, through which the subject fully becomes a man, but also a woman (cf. for example the case of mutilations of female genitals, which serve to orientate desires and so to form new individual habits).

The ritual artifact and event make possible and promote through appropriate affordances the related inferential cognitive processes embedded in the rite. Once the representations at play are built by the related human collective, they can - completely or partially - afford in a sensory way, and they are learnt, if necessary: indeed, the collective ritual also plays a pedagogical role addressed to other individuals not previously involved in its construction and who ignore or partially ignore the full outcome of the ritual (and of the related habits). They can in turn manipulate and reinternalize the meanings semiotically embedded in the artifact, to complete the process of being appropriately afforded.

The whole process of building ritual artifacts - configured as attractors that favor habits - is occurring thanks to what I have called manipulative abduction. When ritual artifacts are created for the first time this happens thanks to an abductive creative social process.

However, when meanings are subsequently picked up through the stabilized affordances involved by the symbolic features of the ritual artifacts or events, and suitably reproduced, they are no longer creative, and firmly favor pregnant habits, at least from the point of view of the affected collectives. Of course, an artifact can still be seen as a “new” creative entity from the perspective of individuals who are afforded for the first time, with the aim of getting new cognitive achievements and learning.

In sum, it is possible to infer (abduce) from the ritual artifacts and events - thanks to the fact they offer stable affordances - the original meanings that generated them, and thus the clear and reliable cognitive functions which can in turn trigger related responses (also addressed to possible embodied and motor outcomes). They yield information about the past, being equivalent to the story they have undergone. The available affordances are reliable “external anchors” (indexes) of habits and assist abducibility (and so “recoverability”) of relevant meanings and of both “psychic” and “motor” actions.

I have contended above that the human mind is unlikely to be a natural home for complicated concepts, because such concepts do not exist in a definite way in the available environment. For example, humans always enriched the environment by resorting to “external” magical formalities and religious ceremonies, which can release deep emotion and cognitive forces. It was (and it is) indeed necessary to “disembody” the mind, and after having built a ritual artifact or event through the hybrid cognitive internal/external interplay of representations, it is possible to pick the new meanings up, once they are available and afforded out there.

The activity of delegation to external objects of cognitive value through the construction of ritual artifacts and events is certainly semiotic in itself, as the result of the emergence of new intrinsic afforded meanings, expressed by what Jung [2], for example, calls a symbol. Jung also nicely stresses the protoepistemic role that can be played by magical ritual/artifactual externalizations in creative reasoning, and he is aware that these magical externalizations constitute the ancestors of scientific artifacts, like those - mainly explicit - concerning the discovery of new geometrical properties through external diagrams. Jung says: “Through a sustained playful interest in the object, a man may make all sorts of discoveries about it which would otherwise have escaped him. Not for nothing is magic called the ‘mother of science’” ([2], p. 46).

Finally, I will quickly refer to the following important issue: abduced pregnances in many rituals mediate salient signs and work in a triple hierarchy: feelings, actions, and concepts. They are partially analogous to Peirce’s “habits” and, in some cases, also involve both proto-morality and morality, which obviously also consist in habits, that is, various generalities as pregnant responses to some signs. To give an example, ritual sacrifices are always related to some moral meanings, as I have indicated above.


References

1. Thom, R., Esquisse d’une sémiophysique. InterEditions: Paris, 1988. Translated by V. Meyer, Semio-Physics: A Sketch, Addison Wesley: Redwood City, CA, 1990.

2. Jung, C.G. On psychic energy. In The Collected Works of C. G. Jung, 2nd ed.; Translated by Hull, R.F.C.; Princeton University Press: Princeton, NJ, USA, 1972; Volume 8, pp. 3–66.

13:30-14:00 UTC

Thu 16th Sep

34. A neurocomputational model of relative value processing: Habit modulation through differential outcome expectations

Robert Lowe

Department of Applied IT, University of Gothenburg, Sweden

Abstract:

Animal and human learning is classically conceived in terms of the formation of stimulus-response associations. Such associations, when trained to excess, in turn induce the formation of habits. Animal learning models and reinforcement learning algorithms most typically valuate stimuli or states of the world in terms of a scalar (singular) value, which conflates potentially multiple dimensions of reinforcement, e.g. magnitude and acquisition probability. Evidence from neurological studies of human and non-human primates indicates that populations of neurons in parts of the brain are sensitive to the relative reward value assigned to stimuli. That is, neural activity is found to occur in response to stimuli predictive of rewards according to their being of lower or higher subjective value with respect to alternatives (Cromwell et al. 2005, Schultz 2015, Isoda 2021).

Here we present the computational and theoretical basis for a neurocomputational model of relative value processing adapted from previous work (e.g. Lowe & Billing 2017, Lowe et al. 2017, Lowe et al. 2019). This neural-dynamic temporal difference reinforcement learning model computes relative reward valuations in the form of differential outcome expectations (see Urcuioli 2005) for stimuli/states. The model, inspired by Associative Two-Process theory (Trapold 1970, Urcuioli 2005), computationally accounts for action/response selection according to two memory processes: i) a retrospective, or stimulus-response, route, wherein habits can be formed; ii) an outcome-expectancy (‘prospective’) route. The latter entails the neural representation of valued outcomes preceding, and thereby permitting cueing of, responses, which can occur in the presence of, in place of, or in competition with, the habit-based route. As such, habit formation may be modulated by this memory mechanism. The model of relative value processing is also presented in relation to its potential for differential parameterization for predicting the learning performance of clinical (e.g. Alzheimer’s disease) subjects on differential outcomes tasks (as studied by, e.g., Plaza et al. 2012; Vivas et al. 2018). Such modelling may serve forms of intervention-based therapy (including gamified memory training) that optimize outcome-expectancy-based learning so as to modulate the more habit-like learning.
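The two memory routes described in the abstract can be caricatured in a toy tabular learner. This is an illustrative sketch under simplified assumptions (bandit-style trials, delta-rule updates), not the authors’ neural-dynamic temporal difference model; all names, stimuli, and parameters here are hypothetical:

```python
import random

random.seed(1)

STIMULI = ["S1", "S2"]
RESPONSES = ["R1", "R2"]
OUTCOMES = {"S1": "O1", "S2": "O2"}  # differential outcomes: each stimulus's
CORRECT = {"S1": "R1", "S2": "R2"}   # correct response earns a distinct reward

alpha = 0.2
# Retrospective (stimulus-response) route, where habits form.
habit = {(s, r): 0.0 for s in STIMULI for r in RESPONSES}
# Prospective route: stimulus -> outcome expectancy, outcome -> response cueing.
expect = {(s, o): 0.0 for s in STIMULI for o in OUTCOMES.values()}
outcome_resp = {(o, r): 0.0 for o in OUTCOMES.values() for r in RESPONSES}


def choose(s, epsilon=0.1):
    # Response selection combines both routes: habit strength plus the
    # response cued by the currently most-expected outcome.
    if random.random() < epsilon:
        return random.choice(RESPONSES)
    o = max(OUTCOMES.values(), key=lambda o: expect[(s, o)])
    return max(RESPONSES, key=lambda r: habit[(s, r)] + outcome_resp[(o, r)])


for _ in range(2000):
    s = random.choice(STIMULI)
    r = choose(s)
    reward = 1.0 if r == CORRECT[s] else 0.0
    # Retrospective (habit) update: delta rule on the S-R value.
    habit[(s, r)] += alpha * (reward - habit[(s, r)])
    if reward:
        # Prospective updates, only on the differentially rewarded trials.
        o = OUTCOMES[s]
        expect[(s, o)] += alpha * (1.0 - expect[(s, o)])
        outcome_resp[(o, r)] += alpha * (reward - outcome_resp[(o, r)])
```

After training, the prospective route (stimulus to expected outcome to cued response) and the retrospective habit route agree on the differentially rewarded responses; the model discussed in the abstract additionally lets the prospective route compete with or substitute for the habit route.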

References

Cromwell, H. C., Hassani, O. K., & Schultz, W. (2005). Relative reward processing in primate striatum. Experimental Brain Research, 162(4), 520-525.

Isoda, M. (2021). Socially relative reward valuation in the primate brain. Current Opinion in Neurobiology, 68, 15-22.

Lowe, R., & Billing, E. (2017). Affective-Associative Two-Process theory: A neural network investigation of adaptive behaviour in differential outcomes training. Adaptive Behavior, 25(1), 5-23.

Lowe, R., Almér, A., Billing, E., Sandamirskaya, Y., & Balkenius, C. (2017). Affective– associative two-process theory: a neurocomputational account of partial reinforcement extinction effects. Biological cybernetics, 111(5), 365-388.

Lowe, R., Almér, A., Gander, P., & Balkenius, C. (2019, September). Vicarious value learning and inference in human-human and human-robot interaction. In 2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW) (pp. 395-400). IEEE.

Plaza, V., López-Crespo, G., Antúnez, C., Fuentes, L. J., & Estévez, A. F. (2012). Improving delayed face recognition in Alzheimer's disease by differential outcomes. Neuropsychology, 26(4), 483.

Schultz, W. (2015). Neuronal reward and decision signals: from theories to data. Physiological reviews, 95(3), 853-951.

Trapold, M. A. (1970). Are expectancies based upon different positive reinforcing events discriminably different?. Learning and Motivation, 1(2), 129-140.

Urcuioli, P. J. (2005). Behavioral and associative effects of differential outcomes in discrimination learning. Animal Learning & Behavior, 33(1), 1-21.

Vivas, A. B., Ypsilanti, A., Ladas, A. I., Kounti, F., Tsolaki, M., & Estévez, A. F. (2018). Enhancement of visuospatial working memory by the differential outcomes procedure in mild cognitive impairment and Alzheimer’s disease. Frontiers in aging neuroscience, 10, 364.

14:00-14:30 UTC

Thu 16th Sep

35. Capability and Habit

Matthias Kramm

Wageningen University & Research

Abstract:

Some scholars of philosophy, sociology, and economics have discovered the philosophical legacy of John Dewey as a source of ideas with which they can supplement Amartya Sen’s work on the capability approach. David Crocker (Crocker 2008, 203) refers to ‘Dewey’s ideal of democracy and Sen’s ideal of citizen agency’ to describe how social choice procedures can be organized. Jean de Munck and Bénédicte Zimmermann (Munck and Zimmermann 2015, 123) explore Dewey’s distinction between prizing and appraising in order to combine Sen’s concept of evaluation with ‘Dewey’s sense of practical judgment’. Ortrud Leßmann (Leßmann 2009, 454) makes use of Dewey’s theory of learning to outline the process by which ‘human beings learn to choose’ capabilities. And a paper by Michael Glassmann and Rikki Patton (Glassmann and Patton 2014) deals with Dewey and Sen in the context of educational philosophy.

In this paper, I would like to make two suggestions for how Sen’s capability approach (Sen 2001; 2004; 2009) can be supplemented by Dewey’s concepts of habit and character (Dewey 2007; 1998; 1891). In the course of my analysis, I will explore the hitherto neglected connections between the concept of capability and the concept of habit. And I will suggest a pragmatist framework which is particularly suitable for applications of the capability approach in contexts where researchers or development practitioners have to be aware of the socio-cultural environment and the character of the affected individuals.

My paper will start with a brief comment on Sen’s capability approach and how an action theory might enrich it. After delineating the core concepts of Sen’s capability approach and Dewey’s action theory, I will make two proposals for how Dewey’s action theory might strengthen Sen’s theoretical treatment of the environment and supplement his capability framework with a notion of character development. Subsequently, I will show how one could develop a pragmatist capability theory which builds on Sen’s capability approach while drawing from Dewey’s action theory. Finally, I will draw the consequences of this framework for the conceptualization of impartiality and freedom, before concluding the paper.

References

Crocker, David. 2008. Ethics of Global Development: Agency, Capability and Deliberative Democracy. New York: Cambridge University Press.

Dewey, John. 1891. Outlines of a Critical Theory of Ethics. Ann Arbor: Michigan Register Publishing Company.

———. 1998. ‘Philosophies of Freedom’. In The Essential Dewey: Volume 2. Bloomington, Indianapolis: Indiana University Press.

———. 2007. Human Nature and Conduct. New York: Cosimo.

Glassmann, Michael, and Rikki Patton. 2014. ‘Capability Through Participatory Democracy: Sen, Freire, and Dewey’. Educational Philosophy and Theory 46 (12).

Leßmann, Ortrud. 2009. ‘Capabilities and Learning to Choose’. Studies in Philosophy and Education 28 (5).

Munck, Jean de, and Bénédicte Zimmermann. 2015. ‘Evaluation as Practical Judgement’. Human Studies 38 (1).

Sen, Amartya. 2001. Development as Freedom. Oxford, New York: Oxford University Press.

———. 2004. Rationality and Freedom. Cambridge MA, London: Belknap Press.

———. 2009. The Idea of Justice. London: Allen Lane.

14:30-15:00 UTC

Thu 16th Sep

36. Collective Intentionality and the Transformation of Meaning During the Contemporary Rituals of Birth

Anna M. Hennessey

Visiting Scholar, Berkeley Center for the Study of Religion University of California, Berkeley

Abstract:

This paper examines collective intentionality, one of the three fundamental elements in a classic theory of social ontology, and how we locate its emergence in the way that individuals and social groups transform the meaning of art and other objects used in the context of contemporary birth rituals. In this context, religious art and other objects often undergo an ontological transformation during the rituals of birth when participants secularize them, marking them with new status functions that diverge from their original functions as religious objects. However, some of these same objects are then re-sacralized when used ritualistically during birth. In these cases, the social ontology of the object shifts away from a religious or secular identification, being collectively recognized instead as encompassing sacred meaning. This sacredness is not part of the object’s original symbolic function as a religious object, however. Instead, the object is re-sacralized and takes on a new ontological status associated with a collective understanding that the nonreligious act of birth is a sacred act in itself.

The term “collective intentionality” derives from John Searle’s 1990 paper “Collective Intentions and Actions,” which Searle then developed in other works, including his 1997 book, The Construction of Social Reality. Earlier and later scholars have also examined the same or similar concepts, sometimes using different terminology, as found, for example, in French sociologist Émile Durkheim’s study of what he termed “collective consciousness.” This paper primarily utilizes the term as found in Searle’s classic theory of social ontology, studying examples of how individuals involved in birth rituals become organically part of a larger production devoted to the making of new meaning out of objects used in the rituals. As such, we note instances in which the individual thinks at the level of the whole, not at that of the part, even though the social context of transforming meaning is inextricable from the personal context of experiencing the ritual of birth.

In a classic theory of social ontology, collective intentionality refers to intentional states that humans share. These states include belief, desire, and intention. Collective intentionality is not composed of individual intentionalities. Instead, singular human intentions are part of and derived from the collective intentionality that individuals share. As such, collective intentionality can neither be reduced to individual intentionality, nor can it be described as a collective consciousness, as Émile Durkheim would term it. It is a special type of mental cooperation between individuals. John Searle gives examples of a violinist who plays a part in a (collective) symphony performance, and of an offensive lineman playing a part in a (collective) football game, as representative of collective intentionality. In these cases, the individual’s performance, while distinct, is organically part of a larger performance; the individual thinks at the level of the whole, not at that of the part. Animals also express collective intentionality, though this intentionality is attached to collective behavior that is biologically innate. Therefore, although animal collective intentionality is a type of social fact, it is not institutionalized in any way (it is not an institutional fact). It is a social behavior that still lacks institution.

In certain rituals of birth, those who are involved in the process of birth (women, partners, midwives, doctors, etc.) have historically used different objects as part of the rite of passage. Metaphysically speaking, the ontologies of these objects are defined by their physical make-ups. The complete ontology of the individual object, however, rests on another level, a social level, which is dependent entirely upon how people have historically and collectively defined, used, and perceived of it. This is the social ontology of the object.

This paper examines the social ontology of different objects used in these rituals, looking closely at how the ontologies of the objects have the capacity to change depending on how groups of people collectively perceive of the objects’ meanings and make use of the objects. One object examined is the Sheela-na-gig. The name refers not to a single object but to a type of medieval stone figure carving from Europe, often referred to simply as a “sheela,” whose original meaning has been interpreted in a number of ways. Scholars disagree on the origins of these objects. Some scholars believe that the sheelas were historically understood as sacred devices used during the pre-Christian rituals of birth, while others believe that the figures functioned primarily as apotropaic devices, collectively understood and carved for the purpose of protecting churches and other buildings. Another group of scholars disagrees and believes that these objects acted as didactic representations used to transmit Christian themes of sin and a collective understanding of the female body as profane. Regardless of the original meaning of the sheela figures, the research in all cases shows that they were historically used within the context of religion. The common social ontology of the object as it was originally conceived is therefore understood to have been of a religious nature.

The Sheela-na-gig is one of the clearest cases in which we can observe how a religious object goes through ontological transformation in the context of contemporary rituals of birth. This paper shows how groups of people in the twenty-first century are secularizing and re-sacralizing the object, collectively utilizing and understanding the figure in a new way during birth as a rite of passage. These social groups, which come from around the world and are also using other objects and images in a similar way during these contemporary rituals, transmit the new meanings of the objects to one another through the internet and other technology. This paper provides several examples of these objects, showing a variety used ritualistically.

Collective intentionality is integral to the philosophy of social ontology, and an understanding of how it emerges during the contemporary rituals of birth when participants in the ritual define an object’s meaning more broadly shows how the symbolic functions of material objects have the capacity to shift between religious, secular, sacred and nonreligious identifications depending upon social collective recognition of those functions.

15:00-15:30 UTC

Thu 16th Sep

37. Habitual Behavior: from I-intentionality to We-intentionality

Raffaela Giovagnoli

Faculty of Philosophy, Pontifical Lateran University

Abstract:

The central question of the debate on Collective Intentionality is how to grasp the relationship between individual and collective intentions when we do something together in informal and institutionalized contexts. We'll briefly introduce the debate and the theoretical aspects of this complex issue. Moreover, we suggest investigating habitual behavior, a fundamental part of the nature of human beings that could serve as a bridge between I-intentionality and We-intentionality.

We'll consider the role of habits in human individual and social ordinary life, starting from the fact that habitual behavior is fundamental to organizing our activities in individual as well as social contexts. Instead of considering classical and revised versions of intentionality, we prefer to focus on habits, which reduce the complexity of daily life, and on their corresponding activity in social life, where we take part in informal joint practices as well as institutionalized ones. We cooperate to create and participate in social practices because we need to organize our life together with other people, creating common spaces that have different functions and significance depending on the corresponding practice (for example, we all pay the ticket to take a train, and many of us participate in religious rituals or similar activities).

We’ll propose a fruitful relationship between habits and rituals that could provide the link to harmonize I-intentionality and We-intentionality. We begin by presenting a plausible sense for the notion of habit, one that goes beyond mere repetitive behavior or routine. We argue for an account of the notion of habit that rests on some Aristotelian theses, also by reference to research in psychology and neuroscience. A habit is not only a mere automatism or repetitive behavior, but also a stable disposition for action (a practical skill), which implies a relationship between automatism and flexibility. The same process is involved in our participation in, and constitution of, informal and formal social spaces.

Recent studies from cognitive neuroscience, biology and psychology show converging perspectives on the organization of goal-directed, intentional action in terms of (brain, computational) structures and mechanisms. They conclude that several cognitive capabilities across the individual and social domains, including action planning and execution, understanding others’ intentions, cooperation and imitation are essentially goal-directed. To form habits we need goal representations both in individual and social contexts.

Routines and goal-directed behavior characterize habits in both individual and social behavior. We create our own habits while fulfilling our basic needs and desires. But we are social beings, and we need to organize our activities also to participate in different social practices. For example, rituals have the important function of creating social spaces in which individuals can share emotions, experiences, values, norms and knowledge. The function of sharing experiences is fulfilled when there exists a social space created by cooperation for reaching a certain goal. If we want to get a positive result about the extension of habits into the social dimension, we need to start from a sort of goal-directed activity that we can perform together.

References

J. Bernacer and J.I. Murillo, The Aristotelian Conception of Habit and Its Contribution to Human Neuroscience, Frontiers in Human Neuroscience, (8), 2014.

C. Castelfranchi and G. Pezzulo, Thinking as the Control of Imagination: a Conceptual Framework for Goal-directed Systems, Psychological Research, (73), (4), 559-577, 2009.

R. Giovagnoli, Habits and Rituals, in Proceedings MDPI of the IS4SI 2017 Summit, Gothenburg, 2017.

R. Giovagnoli, From Habits to Rituals: Rituals as Social Habits, in Open Information Science, De Gruyter, v. 2, Issue 1 (2018).

R. Giovagnoli, Habits, We-intentionality and Rituals, in Proceedings MDPI of the IS4SI 2019 Summit, Berkeley, 2019.

R. Giovagnoli, From Habits to Rituals: Rituals as Social Habits, in R. Giovagnoli and R. Lowe (Eds.), The Logic of Social Practices, Springer, Sapere, Cham, 2020, pp. 185-199.

A. Graybiel, Habits, Rituals and the Evaluative Brain, Annual Review of Neuroscience, (31), 2008, pp. 359-87.

D. Levinthal, Collective Performance: Modelling the Interaction of Habit-based Actions, Industrial and Corporate Change, vol. 23, n. 2, 2014, pp. 329-360.

J. Lombo and J. Gimenez-Amaya, The Unity and Stability of Human Behavior. An Interdisciplinary Approach to Habits between Philosophy and Neuroscience, Frontiers in Human Neuroscience, (8), 2017.

DISCUSSION

15:30-16:00 UTC

Thu 16th Sep

DISCUSSION (Contributed by H&R Conference)


FRIDAY, SEPTEMBER 17

Contributed by Natural Computing IWNC 2021

Block 1:

4:00-7:00 UTC

Fri 17th Sep

IWNC


Marcin Schroeder

INVITED LECTURE

4:00-5:00 UTC

Fri 17th Sep

38. Machines computing and learning?

Genaro J. Martínez

Artificial Life Robotics Laboratory, Escuela Superior de Cómputo, Instituto Politécnico Nacional, México.

Unconventional Computing Lab, University of the West of England, Bristol, United Kingdom.

Abstract:

A recurrent subject in automata theory and computer science is the interesting problem of how machines are able to work, learn, and project complex behavior. In this talk I will discuss in particular how some cellular automata rules are able to simulate computable systems under different interpretations; this is the problem of universality. These systems are able to produce and handle huge amounts of information massively. In this context, an original problem conceptualized by John von Neumann in the 1940s is: How are primitive and unreliable organisms able to yield reliable components? How could machines construct machines? In biological terms this refers to the problems of self-reproduction and self-replication. In our laboratories, we implement these problems in physical robots, where some particular designs display computable systems assembled with modular robots and other constructions display collective complex behavior. Modular robots offer the ability to assemble and reconfigure every robot. In particular, we will see in this talk a number of robots constructed with Cubelets to simulate Turing machines, Post machines, circuits, and non-trivial collective behavior. We will discuss whether these machines learn and develop knowledge as a consequence of automation and information.
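The universality claim above can be made concrete with a toy simulation. The sketch below is illustrative only, not code from the talk: it steps an elementary cellular automaton, with rule 110 as the default because that rule is the classic example proven capable of universal computation.

```python
def eca_step(cells, rule=110):
    """One synchronous update of an elementary cellular automaton.

    `rule` is the Wolfram rule number; its 8 bits form the lookup
    table from each 3-cell neighborhood to the next cell state.
    Periodic boundary conditions are used.
    """
    n = len(cells)
    table = [(rule >> i) & 1 for i in range(8)]  # rule number -> lookup table
    return [table[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
            for i in range(n)]

# Evolve a single seed cell for a few steps.
row = [0] * 10 + [1] + [0] * 10
for _ in range(5):
    row = eca_step(row)
```

Simulating a Turing machine on top of such a rule additionally requires encoding the machine's tape and transition rules as interacting gliders; the update function itself stays this simple.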

References

[1] Martínez, G.J., Adamatzky, A., Figueroa, R.Q., Schweikardt, E., Zaitsev, D.A., Zelinka, I., & Oliva-Moreno, L.N. (2021) Computing with Modular Robots, submitted.

[2] Martínez, S.J., Mendoza, I.M., Martínez, G.J., & Ninagawa, S. (2019) Universal One-dimensional Cellular Automata Derived from Turing Machines. International Journal of Unconventional Computing, 14(2), 121-138.

[3] Martínez, G.J., Adamatzky, A., Hoffmann, R., Désérable, D., & Zelinka, I. (2019) On Patterns and Dynamics of Rule 22 Cellular Automaton. Complex Systems, 28(2), 125-174.

[4] Figueroa, R.Q., Zamorano, D.A., Martínez, G.J., & Adamatzky, A. (2019) A Turing Machine Constructed with Cubelets Robots. Journal of Robotics, Networking and Artificial Life, 5(4), 265-268.

[5] Martínez, G.J. & Morita, K. (2018) Conservative Computing in a One-dimensional Cellular Automaton with Memory. Journal of Cellular Automata, 13(4), 325-346.

[6] Martínez, G.J., Adamatzky, A., & McIntosh, H.V. (2014) Complete Characterization of Structure of Rule 54. Complex Systems, 23(3), 259-293.

[7] Martínez, G.J., Seck-Tuoh-Mora, J.C., & Zenil, H. (2013) Computation and Universality: Class IV versus Class III Cellular Automata. Journal of Cellular Automata, 7(5-6), 393-430.

[8] Martínez, G.J., Adamatzky, A., & Alonso-Sanz, R. (2013) Designing Complex Dynamics in Cellular Automata with Memory. International Journal of Bifurcation and Chaos, 23(10), 1330035-131.

[9] Martínez, G.J., Adamatzky, A., Morita, K., & Margenstern, M. (2010) Computation with Competing Patterns in Life-like Automaton. In: Game of Life Cellular Automata (pp. 547-572). Springer, London.

INVITED LECTURE

5:00-6:00 UTC

Fri 17th Sep

39. Computing with slime mould, plants, liquid marbles and fungi

Andy Adamatzky

Unconventional Computing Lab, UWE, Bristol, UK

Abstract:

The dynamics of any physical, chemical or biological process can be interpreted as a computation. The interpretation per se might be non-trivial (but doable), because one must encode data and results as states of a system and control the trajectory of the system in its state space. One can make a computing device from literally any substrate. I will demonstrate this with examples of computing devices made from the slime mould Physarum polycephalum, growing plant roots, the vascular system of a plant leaf, mycelium networks of fungi, and liquid marbles. The computing devices developed are based on the geometrical dynamics of a slime mould’s protoplasmic network, the interaction of action-potential-like impulses travelling along vasculature and mycelium networks, collision-based computing with plant root tips, and droplets of water coated with hydrophobic powder. Computer models and experimental laboratory prototypes of these computing devices are presented.
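Collision-based computing, mentioned above for plant root tips, is usually idealized as a two-signal "interaction gate" in the style of conservative (billiard-ball) logic. The function below is my own sketch, not code from the talk: logic values are read off from which output trajectory a signal emerges on.

```python
def interaction_gate(a, b):
    """Ideal two-signal collision (billiard-ball style interaction gate).

    a, b: 0/1 presence of a signal (ball, impulse, root tip) on each
    input trajectory. A signal travelling alone passes straight through;
    two signals collide and are deflected onto different paths, so each
    output trajectory carries a Boolean function of the inputs.
    """
    return {
        "a AND b": a & b,            # deflected path: both signals present
        "a AND NOT b": a & (1 - b),  # a passed through undisturbed
        "b AND NOT a": b & (1 - a),  # b passed through undisturbed
        "a AND b (2nd)": a & b,      # second deflected path
    }
```

Because every input signal reappears on exactly one output path, the gate is conservative: nothing is created or destroyed, which is what makes physical substrates such as root tips or droplets plausible carriers.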

PANEL DISCUSSION

6:00-7:00 UTC

Fri 17th Sep

PANEL DISCUSSION (Contributed by IWNC Conference)

40. Moderator’s Introduction to “Natural Question about Natural Computing”

Moderated by Marcin J. Schroeder

Confirmed Panelists: Andy Adamatzky, Masami Hagiya, Genaro J. Martínez, Yasuhiro Suzuki

The question about Natural Computing may be natural, but the attempt to answer it by providing a formal definition would be pointless. Definitions of concepts serve the purpose of closing them into an existing framework of concepts with an already established intension or meaning. Natural computing is an open idea that serves the opposite purpose: to transcend the currently dominating paradigm of computing. The qualifier “natural”, for centuries a subject of philosophical disputes, is not used here in a restrictive sense. After all, its common-sense negation “artificial” is associated with human skills or competencies, which there is no reason to consider non-natural or inconsistent with human nature and human inborn capacities.

This conference is the 13th in the long series of International Workshops on Natural Computing, whose participants and contributors have had diverse ways of understanding the subject. However, there was never a risk of mutual misunderstanding, and there is no such risk now. What was and is common and uniting in these diverse inquiries can be expressed as the search for dynamic processes involving information that have all or some characteristics of computing, but differ from it in the form and means of implementation, procedural description, intention, outcomes, etc. The adjective “natural” reflects the interest in natural processes studied in several different disciplines of science independently from any application in computing, but it does not exclude interest in the dynamics of information in the cultural and social contexts of human life. Just the opposite: Natural Computing is an attempt to bridge technological interests with natural aspects of information processing so as to transcend the limitations of computing, including the limitations of its present applications.

The panelists represent diverse directions of research and study within Natural Computing. I would like to ask them the question: “Quo Vadis?” (Where are you going?) Unlike in the Scriptural origin of this question, this is not a call to return to Rome. It is a request for sharing with the audience panelists’ vision of the direction and future of Natural Computing. This is a question about their motivation to pursue this path of inquiry. Finally, the panelists may choose to reflect on the more general question of the future not just of Natural Computing but Computing in general.

Contributions from Philosophy and Computing Conference APC 2021

Block 2:

13:00-16:00 UTC

Fri 17th Sep


APC

Peter Boltuc

13:00-14:30 UTC

Fri 17th Sep

41. Exploring open-ended intelligence using patternist philosophy

Ben Goertzel

Abstract:

The patternist philosophy of mind begins from the simple observation that key aspects of generally intelligent systems (in particular those aspects lying in Peirce's Third metaphysical category) can be understood by viewing such systems as networks of patterns organized to recognize patterns in themselves and their environments. Among many other applications this approach can be used to drive formalization of the concept of an "open ended intelligence", a generally intelligent system that is oriented toward ongoingly individuating itself while also driving itself through processes of radical growth and transformation. In this talk I will present a new formalization of open-ended intelligence leveraging paraconsistent logic and guided by patternist philosophy, and discuss its implications for practical technologies like AGI and brain-computer interfacing. Given the emphatically closed-ended nature of today's prevailing AI and BCI technologies, it seems critical both pragmatically and conceptually to flesh out the applicability of broader conceptions of intelligence in these areas.

PLENARY PANEL DISCUSSION

14:30-16:00 UTC

Fri 17th Sep

Artificial Inventors, AI, Law and Institutional Economics

Presenting Panelists: Stephen Thaler, Kate Gaudry

Commenting Panelists: Peter Boltuc, David Kelley


14:30-15:00 UTC

Fri 17th Sep

42. The Artificial Sentience Behind Artificial Inventors

Stephen Thaler

Imagination Engines Inc.

Abstract:

Using a new artificial neural network paradigm called vast topological learning [1], a multitude of artificial neural networks bind themselves into chains that geometrically encode complex concepts along with their anticipated consequences. As certain nets called “hot buttons” become entangled with these chains, simulated volume neurotransmitter release takes place, selectively reinforcing the most advantageous of such topologically expressed ideas. In addition to providing important clues about the nature and role of sentience (i.e., feelings) within neurobiology, this model helps to explain how an artificial inventor called “DABUS” has autonomously generated at least two patentable inventions. [2][3]

[1] “Vast Topological Learning and Sentient AGI”, Journal of Artificial Intelligence and Consciousness, Vol. 8, No. 1 (2021) 1-30.

[2] https://www.globallegalpost.com/news/south-africa-issues-worlds-first-patent-listing-ai-as-inventor-161068982

[3] https://www.thetimes.co.uk/article/patently-brilliant-ai-listed-as-inventor-for-first-time-mqj3s38mr

15:00-15:30 UTC

Fri 17th Sep


43. Potential Impacts of Various Inventorship Requirements

Kate Gaudry

Kilpatrick Townsend & Stockton LLP

Abstract:

Though many entities are discussing A.I. and patents, this umbrella topic covers a vast diversity of situations. Not only can artificial intelligence be tied to inventions in multiple ways, but the involvement of various types of parties can shift potential outcomes and considerations. This presentation will walk through various potential scenarios that may arise (or arise more frequently) as A.I. advances and consider when and how patents may be available to protect the underlying innovation.

15:30-16:00 UTC

Fri 17th Sep


44. Panel Commentary

Peter Boltuc

University of Illinois, Springfield

Warsaw School of Economics

Presentation has a legal part and a philosophical part:

Part I is based on Gaudry: “With reference to Univ. of Utah v. Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V., the USPTO explained that the Federal Circuit has ruled that a state could not be an inventor because inventors are individuals who conceive of an invention and conception is a “formation of the mind of the inventor” and “a mental act.” [Gaudry et al., https://www.jdsupra.com/legalnews/should-we-require-human-inventorship-3076784/ ] This criterion is clearly satisfied by modern advanced AI engines, unless epistemic human chauvinism is presupposed.

Following the above, “The USPTO reasoned ‘conception—the touchstone of inventorship—must be performed by a natural person.’” [Gaudry op. cit.] The conception according to which a “formation of the mind of the inventor” pertains to human and not artificial minds is flawed, simply because artificial minds are now much more advanced than this statement, based on the computer science of its time, presumes. The quotation does not originate from the 2013 case. It comes from a 1994 case, Burroughs Wellcome Co. v. Barr Labs., Inc., 40 F.3d 1223, 1227-28 (Fed. Cir. 1994), which makes a substantial difference: in 1994 there were no computer programs characterized by inventor qualities, and thus the claims ipso facto pertained solely to human persons. Since the 2013 case pits institutions (universities) against persons (human inventors), the 1994 case was appropriate to quote; no claims of non-human inventors were involved whatsoever. Therefore, the 1994 ruling, also in its 2013 reiteration, is relevant to DABUS only de dicto; this is because the word “inventor” in those cases pertained only to individuals, emphasizing human individuals in opposition to institutions. This contrast is visible in the last clause: “To perform this mental act, inventors must be natural persons and cannot be corporations or sovereigns” [2013], which does not pertain to machines or algorithms.

Part II is based on Boltuc 2017 and Hardegger 2021: based on this observation, the author drafts a social structure that incorporates robots, and even cognitive engines, to partake in ‘social’ life well enough to be its members tout court, which includes their interests being morphous enough to allow for meaningful patent ownership.

The part about robots relies on Boltuc 2017: “Church-Turing Lovers,” in Lin, P., Abney, K., Jenkins, R. (Eds.), Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence. Oxford: Oxford University Press. Part II develops Daniel Hardegger’s session on ‘The4th.Space’ (an APC panel at this conference).

SATURDAY, SEPTEMBER 18

Contributions from Philosophy and Computing Conference APC 2021

Block 1:

4:00-5:30 UTC

Sat 18th Sep

1. APC (90 min.)

Peter Boltuc

4:00-4:30 UTC

Sat 18th Sep

45. On Two Different Kinds of Computational Indeterminacy

Oron Shagrir with Philippos Papayannopoulos, and Nir Fresco

Hebrew University of Jerusalem

Abstract:

A follow-up on the project on computational indeterminacy, parts of which are also presented in Jack Copeland's keynote. This talk and discussion focus on two kinds of computational indeterminacy.

4:30-5:30 UTC

Sat 18th Sep

46. Cognitive neurorobotic self in the shared world

Jun Tani

Cognitive Neurorobotics Research Unit, Okinawa Institute of Science and Technology (OIST)

Abstract:

My research has investigated how cognitive agents acquire structural representation via iterative interaction with their environments, exercising agency and learning from the resultant perceptual experience. Over the past two decades, my group has tackled this problem by applying the framework of predictive coding and active inference to the development of cognitive constructs in robots. Under this framework, intense interaction occurs between top-down intention, which acts proactively on the outer world, and the resultant bottom-up perceptual reality accompanied by prediction error. The system tries to minimize the error either by modifying the intention or by acting on the outer world. I argue that the system should become conscious when this error-minimization process costs some effort; otherwise, everything can go smoothly and automatically, and no space for consciousness remains. We have found that compositionality, which enables some conceptualization, including senses of minimal self and narrative self, can emerge via iterative interactions, as a result of downward causation in terms of constraints, such as multiple spatio-temporal scale properties, applied to neural network dynamics. Finally, I will introduce our recent results that may account for how abnormal development leads to developmental disorders, including autism spectrum disorder (ASD) and schizophrenia, which may be caused by different types of failures in the top-down/bottom-up interaction.
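The error-minimization loop described above can be sketched in a few lines. This is a purely illustrative toy with mapping, numbers, and names of my own choosing, not the author's model: a fixed linear mapping generates a prediction from a top-down "intention" state, and gradient descent on the prediction error revises the intention.

```python
# Fixed top-down (generative) mapping from a 2-d intention to a 3-d observation.
W = [[1.0, 0.0],
     [0.0, 1.0],
     [0.5, -0.5]]
observation = [1.0, -0.5, 0.25]   # bottom-up sensory input
intention = [0.0, 0.0]            # top-down latent state

def predict(W, z):
    """Top-down prediction: the matrix-vector product W @ z."""
    return [sum(w * zi for w, zi in zip(row, z)) for row in W]

for _ in range(500):
    # Bottom-up prediction error between sensation and prediction.
    error = [o - p for o, p in zip(observation, predict(W, intention))]
    # Revise the intention along the gradient W^T error of the squared error.
    for j in range(len(intention)):
        intention[j] += 0.05 * sum(W[i][j] * error[i] for i in range(len(W)))

# Whatever error remains is the part of the observation the model cannot
# explain; reducing it further would require changing the world itself.
```

Active inference adds the second route named in the abstract: instead of (or besides) revising the intention, the agent acts so that the observation itself moves toward the prediction.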

Reference

(1) Tani, J. (2016). “Exploring Robotic Minds: Actions, Symbols, and Consciousness as Self-Organizing Dynamic Phenomena.” Oxford University Press.

Contributions from International Conference on Philosophy of Information ICPI 2021

Block 1:

5:30-7:30 UTC

Sat 18th Sep

2. ICPI (90 min.)

Wu Kun

EIGHT TALKS CONTRIBUTED BY International Conference on Philosophy of Information (ICPI) (each 15 minutes)

5:30-5:45 UTC

Sat 18th Sep

47. The Future of Anthroposociogenesis – Panhumanism, Anthroporelational Humanism and Digital Humanism

Wolfgang Hofkirchner

The Institute for a Global Sustainable Information Society, Vienna, Austria;

Abstract:

The emergence of human life on Earth is not yet finished. Social evolution has reached a point at which the continuation of human life is even at stake. The reason lies in dysfunctionalities of the organization of social systems. Those dysfunctionalities came to the fore when hominization was so successful as to cover the whole globe. What social systems could externalize so far became part of an ever more deteriorating environment on which every system, in turn, has to live. This system-theoretical insight concerns the organization of relations among humans, of relations with nature and of relations with technology. It is tantamount to a next step of humanization and requires an update of humanism: a panhumanism, an anthroporelational humanism and a digital humanism. The first makes a case for an all-embracing system of humankind, the second for a delicate integration of non-human natural systems, and the third for an intelligent design of techno-social systems. The crisis of anthroposociogenesis will last until that insight has become common sense or the social systems have broken down.

Keywords: global challenges; humanism; Evolutionary Systems Theory; Unified Theory of Information; critical thinking; social systems; information society

1. Introduction

According to Evolutionary Systems Theory [1], the emergence of existential risks signifies an evolutionary crisis of complex systems. Such crises are caused by an environment that is more complex than the systems' options. If the organizational relations of the systems undergo a qualitative change, this can help the systems catch up with the complexity of their environment (which might be external or internal). Such a change transforms the systems into elements of a metasystem or suprasystem that represents a complexity gain from which they benefit.

According to the Unified Theory of Information [1], by generating the information required to increase their complexity with regard to the challenges they are confronted with, systems are able to master those challenges. If they fail to generate the required information, they might break down.

According to a critical social systems theory based upon Evolutionary Systems Theory, the emergence of global challenges, which ushered in a new age of the history of humankind 75 years ago, is evidence of an evolutionary crisis of social systems that have grown interdependent but have not yet become integrated with each other. It means that they confront a Great Bifurcation of their possibility space of future trajectories, which needs to be passed by a Great Transformation that chooses the right trajectory to guarantee the continuation of anthroposociogenesis.

According to a critical information society theory, based upon both the Unified Theory of Information and a critical social systems theory, the Great Transformation can be realized only if the required information has been generated. This information concerns the ultimate cause of the current multi- or poly-crisis, as the French intellectual Edgar Morin called it [2], and the means of choice for overcoming it. It illuminates that it is the plethora of social relations that have to undergo a decisive change – the social relations among humans, the social relations with nature and the social relations with technology. Their underlying logics have turned anachronistic and require replacement by new logics that adapt to the new situation.

2. The Logic of Egocentrism needs to be replaced by Panhumanism

The social relations among humans are still determined by a logic of egocentrism. Egocentrism signifies a social partition's denial of belonging to a greater social whole, whatever the partition may be – a nation, an ethnic group, a private enterprise or an individual. That logic deprives the masses of the world population, as well as classes of populations, of access to the societal common good. It is a logic of domination, exploitation and oppression in the anthroposphere, yielding a gap between rich and poor, hunger, diseases, and much more.

The anti-colonial liberation struggle – with Frantz Fanon’s “Les damnés de la terre” in 1961 [3] – was the first sign of an awakening worldwide awareness of the role of social relations in the age of global challenges. It provoked the emergence of the solidarity movement all over the world.

The current pandemic situation is but another example of the anti-humanism of egocentric social relations, since a zero-Covid strategy is only implementable if all countries of the world are put in a position to fight the virus through sufficient vaccination. No country can reach a zero-Covid state unless the rest of the countries commit to the same policy, just as no single person can be protected from infection unless a sufficient number of persons are vaccinated and there are no free riders who frustrate solidarity.

Egocentrism must give way to a logic of Panhumanism, as Morin underlined in a recent article [4]. Panhumanism can be defined as a logic of conviviality [5] of a single integrated humanity, of living together for the good of all. That logic is inclusive. It does not exclude any part of common humankind through antagonisms (zero-sum games that benefit some at the cost of others) but includes all of them in synergisms (compositions of parts that achieve benefits shared by every part). Being an objective community of destiny denies humanity any rationality of competition that does not serve co-operation for the common good.

Since Panhumanism includes the concern for the next generations being provided with at least the same developmental potential as the current generation, it implies principles for the social relations with nature and technology in the sense of German philosopher Hans Jonas [6].

3. The Logic of Hubris needs to be replaced by Anthroporelational Humanism

As long as the social relations among humans are still egocentric, the social relations with nature are determined by a logic of hubris. As long as social systems do not take care of other co-existing social systems, they also do not take care of co-existing systems of natural origin. Such a logic undermines the ecological foundations of human life on Earth. It is a logic of extractivism and contamination of the biosphere and the geosphere, yielding a decrease in biodiversity, increased heating of the planet, and much more.

The book “Silent Spring” by the US-American biologist Rachel Carson, published in 1962 [7], was the trigger of the global environmental movement. Concerned with the risks of the misuse of chemical pesticides, it brought to the fore the role of the social relations with nature in the age of global challenges.

Again, the Covid pandemic is an example of the risks of repressing and penetrating wildlife.

Hubris must give way to a logic of Anthroporelational Humanism [8, 9]. Anthroporelational Humanism means that humans relate to nature without giving up their specific position of an animal sociale or zoon politikon when giving up their anthropocentric perspective. In a systemic perspective, humans as self-organizing systems need to concede self-organizing capacities to non-human natural self-organizing systems according to their place in physical and biological evolution when integrating them with their panhuman social systems. They concede intrinsic values to them in a staged way. Thus, humans are prompted to relativize their own positions. Social relations with nature, while taking human values into consideration, need to do justice to natural intrinsic values. If the objective function of a panhuman system is the global common good, then the objective function of anthroporelationalism is an alliance with nature in the sense of the German philosopher Ernst Bloch [10].

Anthroporelational Humanism implies principles for the social relations with technology.

4. The Logic of Megalomania needs to be replaced by Digital Humanism

As long as the social relations with nature are still hubristic, the social relations with technology are determined by a logic of megalomania. As long as social systems do not take care of co-existing systems of natural origin, they also allow themselves the production and use of tools that are not designed to take care of those systems. Such a logic hypostatizes the effectivity of technology beyond any rational measure. It is a logic of omnipotence ascribed to the technosphere, which has yielded the deployment of nuclear first-strike capabilities, the use of chemical weapons, the waging of information wars, the development of autonomous artificial intelligence, surveillance, trans- and posthuman developments, and much more.

This global challenge became clear in 1945 and gave rise to the international peace movement, documented by the Russell-Einstein Manifesto in 1955 [11].

The Covid pandemic, however, belies the omnipotence of technology, since many states have been experiencing the limits of health services that were not prepared for a pandemic despite advance warnings. Though zero-Covid strategies are followed by some states, in many other states politicians, economic interests and misinformed people hinder the acceptance of the recommendations of scientists.

Megalomania must give way to a logic of Digital Humanism [12, 13]. Digital Humanism is the logic of civilizational self-limitation, as the Austrian-born writer Ivan Illich coined it [14] – a limitation of technological tools to their role of serving anthroporelational and panhuman purposes only. Digitalization can provide solutions for boosting those purposes, since any information technology helps smoothen frictions in the functioning of any technology. But digitalization must be ethically designed and the tools cultivated. Observance of the precautionary principle – the “Prevalence of the Bad over the Good Prognosis” [6] (p. 31) – is a sine qua non.

5. Conclusion

The becoming of humankind is not yet finished. The ushering in of the age of global challenges is evidence of a Great Bifurcation of anthroposociogenesis that needs to be passed through by a Great Transformation. In order to accomplish the next step in social evolution, the social relations among humans, the social relations with nature and the social relations with technology have to undergo a decisive change. The logics those relations have been following have brought about the social evolution so far but are no longer functional. They need to be replaced by logics adapted to the condition of humanity as an objective community of destiny. Humanity is on the point of transforming into a meta- or suprasystem, becoming a subject of its own. Evolutionary Systems Theory, the Unified Theory of Information, a critical social systems theory and a critical information society theory provide cornerstones for an understanding of those processes.

References

1. Hofkirchner, W. Emergent Information; World Scientific: Singapore, 2013.

2. Morin, E.; Kern, A.-B. Terre-Patrie; Seuil: Paris, France, 1993.

3. Fanon, F. Les damnés de la terre; Maspero: Paris, France, 1961.

4. Morin, E. Abenteuer Mensch. Freitag. 2021, 28. Available online: https://www.freitag.de/autoren/the-guardian/abenteuer-mensch (accessed on 25 August 2021).

5. Convivialist International. The second convivialist manifesto. Civic Sociology. 2020. Available online: https://online.ucpress.edu/cs/article/1/1/12721/112920/THE-SECOND-CONVIVIALIST-MANIFESTO-Towards-a-Post (accessed on 25 August 2021).

6. Jonas, H. The imperative of responsibility: In search of an ethics of the technological age; University of Chicago: Chicago, IL, USA, 1984.

7. Carson, R. Silent spring; Houghton Mifflin: Boston, MA, USA, 1962.

8. Deutsches Referenzzentrum für Ethik in den Biowissenschaften. Anthroporelational. Available online: https://www.drze.de/im-blickpunkt/biodiversitaet/module/anthroporelational (accessed on 25 August 2021).

9. Barthlott, W., Linsenmair, K.E., Porembski, S. (eds.). Biodiversity: structure and function, vol. ii; EOLSS: Oxford, UK, 2009.


5:45-6:00 UTC

Sat 18th Sep

48. The Philosophy – Science Interaction in Innovative Studies

Yixin Zhong

Beijing University of Posts and Telecommunications, Beijing 100876, China,

Abstract:

A deep investigation of the studies in the information discipline reveals a serious problem with the paradigm that has in practice been employed since the discipline came into being: the paradigm employed has not been one suited to the information discipline but the one for the physical discipline. Because of this historical mistake, the information discipline has been divided into a number of mutually independent sub-disciplines, which has brought many difficulties to the development of the discipline. For the healthy advancement of the information discipline, a paradigm change must be carried out urgently.

Key Words: Paradigm Ex-Leading, Scientific View, Methodology, and Information Discipline

1. The Definition of Paradigm for a Scientific Discipline

The paradigm for a scientific discipline is defined as the integration of the scientific view and the methodology of that discipline: the scientific view defines what the essence of the discipline is, while the related methodology defines how to determine the scientific approach to the studies of the discipline. Thus, the paradigm for a scientific discipline delimits the norms that studies in that discipline should follow.

As a result, each category of scientific discipline should employ its own paradigm in its studies. Therefore, studies in the physical discipline should employ the paradigm for the physical discipline, whereas studies in the information discipline should employ the paradigm for the information discipline.

2. The Role The Paradigm Plays in Scientific Studies

The paradigm as defined above leads the studies of the related scientific discipline. As a matter of fact, whether the studies of a discipline succeed or fail in practice depends on whether the paradigm employed is the correct one. So, if the paradigm for the information discipline is employed, the studies of the information discipline will succeed, no matter how difficult the information discipline is. Otherwise, the studies of the information discipline will encounter a series of misunderstandings and setbacks.

3. The Real Situation Concerning The Paradigm in Information Discipline

A very surprising discovery of this in-depth investigation is that the paradigm employed for the study of the information discipline has so far been the one for the physical discipline (see Table 1), not the one for the information discipline (see Table 2 below).

Table 1 Major Features for the Paradigm of Physical Discipline

  1. Scientific View

    • Object for study: Physical system with no subjective factor

    • Focus of study: The structure of physical system

    • Property of the object: Deterministic in nature

  2. Methodology

    • General approach: Divide and conquer

    • Means for description/analysis: Purely formal methods

    • Means for decision-making: Form matching


Table 2 Major Features for the Paradigm of Information Discipline

  1. Scientific View

    • Object for study: Info process within subject-object interaction

    • Focus of study: To achieve the goal of double win (subject-object)

    • Property of the object: Non-deterministic in nature

  2. Methodology

    • General approach: Methodology of Information Ecology

    • Means for description/analysis: Form-utility-meaning trinity

    • Means for decision-making: Understanding-based


The use of the paradigm of the physical discipline in the study of the information discipline is surely the root of all the problems in that study. The major problems in the studies of the information discipline include at least the following: (1) diversity without unity in theory, separation among the studies of information in various sectors, and separation between the studies of information and the studies of intelligence, all due to the physical methodology of “divide and conquer”; (2) merely formal analysis in the studies of information, knowledge, and intelligence, without considering the high importance of subject factors, due to the physical methodology of “purely formal analysis”.

4. Conclusion

The paper presents an appeal: the paradigm employed in practice so far in the studies of the information discipline worldwide should be shifted as soon as possible.

References

[1] Kuhn, T. S. The Structure of Scientific Revolutions; University of Chicago Press: Chicago, IL, USA, 1962.

[2] Zhong, Y. From the Methodology of Mechanical Reductionism to the One of Information Ecology. Philosophy Analysis 2017, No. 5, 133–144.

[3] Burgin, M.; Zhong, Y. Methodology of Information Ecology in the Context of Modern Academic Research. Philosophy Analysis 2019, 119–136.

[4] Zhong, Y. Principles of Information Science; BUPT Press: Beijing, China, 1988.

[5] Zhong, Y. Universal Theory of AI; Science Press: Beijing, China, 2021.

6:00-6:15 UTC

Sat 18th Sep

49. Information and the Ontic-Epistemic Cut

Joseph Brenner

Switzerland

Abstract:

1. Introduction

The standard definitions of ontology and epistemology are the studies of, respectively, 1) the real: what is/exists and what we can know; and 2) how we know and how we validate that what we know is correct. If so, does information belong more in the domain of science or philosophy, the abstract or the concrete? Is it primarily ontic, part of the real world, or epistemic, part of knowledge? While the question may not be significant in all areas of information research, it may be relevant for anything that involves the value of such research for the common good of society, including the establishment of a Global Sustainable Information Society. I suggest that it is necessary to identify what parts of contemporary information science and information philosophy are relevant to that objective.

2. Information

Many definitions exist of what information is, what kinds of information there are, and what their properties are. These include concepts of the relation of information to data, knowledge and meaning. Most of the time, these concepts of information are not related to ontology and epistemology. I believe they should be, as I will try to show in this paper.

3. The Ontic-Epistemic Cut

This concept reflects the general view that ontological and epistemological perspectives are disjunct, their separability residing, among other things, in the subjective, 1st-person content of epistemology as opposed to ontology. Ontology is frequently, not to say generally, limited to a systematic classification or categorization of the concrete ‘furniture’ of the world, without expressing the features of interaction and change. One example is the epistemic cut in biology, formulated by Howard Pattee to take account of the apparent matter-symbol distinction.

4. Information and Logic in Reality

I have developed a concept of logic as a non-semantic Logic in Reality (LIR), grounded in physics, a new non-propositional logic observable in the evolution of real processes. Information is such a process evolving according to a general Principle of Dynamic Opposition (PDO). The concept of information as process is consistent with an overlap between physical-ontic and non-physical-epistemic aspects of information.

5. The Informational Convergence of Ontology and Epistemology

A recent article with Wu Kun on the convergence of science and philosophy under the influence of information, suggested a similar convergence of ontology and epistemology. It suggested a restatement of the ontic-epistemic distinction in informational terms, reflecting the underlying principle of dynamic opposition in nature as formulated in Logic in Reality. It could be related to Deacon’s ideas about information as absence and Poli’s ideas about the ontology of what is not, or not fully there. Wolfgang Hofkirchner’s concept of a Praxio-Onto-Epistemology is relevant to the proposed convergence.

6. Directions for Further Work

Information thus expresses non-separability as a fundamental scientific-philosophical as well as logical principle. I see two directions for the applicability of this principle:

1) the ontic-epistemic distinction as discussed in a new synthetic natural philosophy in its relation to science;

2) the conceptual structure and role of information in a potential sustainable information society.

Keywords: Information; The Ontic-Epistemic Cut; Logic in Reality

6:15-6:30 UTC

Sat 18th Sep

50. A Chase for God in the Human Exploration of Knowledge

Kun Wu, Kaiyan Da, Tianqi Wu

Department of Philosophy, School of Humanities and Social Sciences, Xi’an Jiaotong University, Xi’an 710049, China;

Abstract:

In the human exploration of knowledge (including philosophy and science), there is always a chase for God (a strong desire for pursuing perfection, because God is considered the only infinite, self-caused, and unique substance of the universe), which is a simple and extreme thinking paradigm that people follow in the pursuit of complete idealization, infinite eternity, and absolute ultimateness. On the one hand, it is good for people to chase for God, because it guides them to pursue love, beauty, and all perfect things in theory and practice; on the other hand, this kind of thinking paradigm has obvious limitations too, because the existence and evolution of the world are very complex, full of multi-dimensional, multi-layered, and multidirectional uncertainties and randomness interwoven and interacting with each other. In fact, the world is not merely a counting machine.

Keywords: knowledge, philosophy, science, human, God

1. A Chase for God in Human Philosophical Thinking

The pursuit of perfect wisdom and ability is one of the oldest traditions of mankind. The original paradigm of this tradition is always associated with God. The philosophers of ancient Greece realized very early that human existence and human thought are limited, and so they attributed perfect wisdom to God.

Heraclitus emphasized that “there is one wisdom, to understand the intelligent will by which all things are governed through all”, while pointing out that only God has the “wisdom by which all things are governed through all”. Similarly, Socrates claimed that mankind does not have wisdom but can acquire it by obeying the will of God. Plato proposed that the Forms are responsible for both knowledge and certainty and are grasped by pure reason, and reason teaches that God is perfect. Feuerbach showed that in every aspect God corresponds to some feature or need of human nature. As he states: “if man is to find contentment in God, he must find himself in God”. Thus, God is nothing else than human: he is, so to speak, the outward projection of a human’s inward nature.

From the views of the above philosophers, we can conclude that in their eyes, God is the representation of perfection, who is universal, eternal, and absolute in his full wisdom and infinite ability. However, everything in the material world, including men and animals, is special, temporal, and relative with limited wisdom and ability. Thus, God becomes the embodiment of absolute truth.

2. A Chase for God in Human Philosophical Science

In the process of the development of modern science, there is also a chase for God, for example, Newton’s first cause: God. We know that Newtonian mechanics is based on Newton’s belief in God. He asserted that “God in the beginning formed matter in solid, massy, hard, impenetrable, moveable particles, of such sizes and figures, and with such other properties, and in such proportion to space, as most conduced to the end for which he formed them; and that these primitive particles, being solids, are incomparably harder than any porous bodies compounded of them; even so very hard, as never to wear or break in pieces; no ordinary power being able to divide what God himself made one in the first creation” [1].

In this paragraph by Newton, the issue is not one of establishing the reality of a God whose existence might be in doubt, rather, the aim is to learn more about God and to get to know him better. Newton writes here not only of belief in God, but knowledge of God.

3. A Chase for God in Information Science Including Artificial Intelligence

In information science, including the research methods, characteristics, and future possibilities of both the practical research and the theoretical assumptions of AI, there is also a tendency toward a chase for God.

In the early stage of information science, because of the success of computationalism, many researchers believed in Pythagoras’ philosophy, arguing that all intelligent behaviors can be realized by number and computation. The famous physicist John Wheeler wrote an article in 1989 titled Information, Physics, Quantum: The Search for Links, in which he put forward a new thesis, “It from bit”. A similar expression of this thesis rests on other deep thoughts of his: “I think of my lifetime in physics as divided into three periods. In the first period, I was in the grip of the idea that Everything Is Particles. I call my second period Everything Is Fields. Now I am in the grip of a new vision, that Everything Is Information.” [2]

Besides, with the development of technologies for deciphering, replacing, and recombining genes, along with the research results of nanotechnology, some researchers believe that humans can produce anything, including organics, inorganics, even life and intelligence. On this basis, many researchers have begun to talk about the “superman” and the possibility of immortality.

References

1. Henry, J. Enlarging the Bounds of Moral Philosophy. Notes and Records: the Royal Society Journal of the History of Science 2017, 71(1), 21–39.

2. Wheeler, J. A. Geons, Black Holes, and Quantum Foam: A Life in Physics; W. W. Norton & Company: New York, NY, USA, 1998.

6:15-6:30 UTC

Sat 18th Sep

51. The Second Quantum Revolution and its Philosophical Meaning

Hongfang L.

School of Marxism, University of Chinese Academy of Sciences

Abstract:

We have a strong desire to understand everything from a single origin or very few origins. Driven by such a desire, physics theories were developed through a cycle of discovery: unification, more discoveries, bigger unification. Here we would like to review the development of physics through its four revolutions, especially the second quantum revolution and its philosophical meaning: it realizes a unification of force and matter by quantum information. In other words, quantum information unifies matter. It from qubit, not bit.

The first revolution in physics is the mechanical revolution, which tells us that all matter is formed by particles obeying Newton’s laws, with interactions instantaneous over distance. The success and completeness of Newton’s theory gave us a sense that we understood everything.

The second revolution in physics is the electromagnetic revolution. The true essence of the electromagnetic revolution is the discovery of a new form of matter, wave-like matter: electromagnetic waves, which obey the Maxwell equation and are very different from the particle-like matter governed by the Newton equation. Thus, the sense that Newton theory describes everything is incorrect: Newton theory does not apply to wave-like matter. Moreover, unlike particle-like matter, the new wave-like matter is closely related to a kind of interaction, the electromagnetic interaction. In fact, the electromagnetic interaction can be viewed as an effect of the newly discovered wave-like matter. Wave-like matter causes interaction.

The third revolution in physics is the relativity revolution, which achieves a unification of space and time, mass and energy, as well as interaction and geometry, including a unification of gravity and space-time distortion. Since gravity is viewed as a distortion of space, and since that distortion can propagate, Einstein discovered the second wave-like matter: the gravitational wave. Since that time, the geometric way of viewing our world has dominated theoretical physics.

However, such a geometric view of the world was immediately challenged by new discoveries from the microscopic world. Experiments in the microscopic world tell us that not only is Newton theory incorrect, even its relativistic modification is incorrect. This is because Newton theory and its relativistic modification are theories for particle-like matter. Through experiments on very tiny things, such as electrons, people found that particles are not really particles: they also behave like waves at the same time. Similarly, experiments reveal that light waves behave like beams of particles (photons) at the same time. So the real matter in our world is not what we thought it was. The Newton theory for particle-like matter and the Maxwell/Einstein theories for wave-like matter cannot be the correct theories of matter. We need a new theory for the new form of existence: particle-wave-like matter. That new theory is the quantum theory, which explains the microscopic world. Quantum theory unifies particle-like matter and wave-like matter.

The quantum revolution is the fourth revolution in physics. It tells us there is neither particle-like matter nor wave-like matter: all matter in our world is particle-wave-like matter, particle-like matter = wave-like matter. In other words, quantum theory reveals the true existence in our world to be quite different from the classical notion of existence in our minds. What exist in our world are not particles or waves, but things that are both particle and wave. Such a picture is beyond our wildest imagination, but it reflects the truth about our world and is the essence of quantum theory. The quantum theory represents the most dramatic revolution in physics.

After realizing that even the notion of existence is changed by quantum theory, it is no longer surprising to see that quantum theory also blurs the distinction between information and matter. It appears that we are now entering a new stage, the second quantum revolution, in which qubits emerge as the origin of everything. It is known as “it from qubit”, which realizes a unification of force and matter by quantum information. The qubit is the simplest element of quantum information. In fact, this implies that information is matter and matter is information: matter and space = information (qubits); quantum information unifies matter. This is because frequency is an attribute of information. Quantum theory tells us that frequency is energy, E = hν, and relativity tells us that energy is mass, m = E/c². Both energy and mass are attributes of matter, so matter = information. That is, the essence of quantum theory is that the energy-frequency relation implies matter = information. This represents a new way of viewing our world.
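The energy-frequency chain invoked above can be written out explicitly. The following is a minimal sketch in standard notation (h, ν, c denote Planck’s constant, frequency, and the speed of light; this is the textbook combination of Planck’s and Einstein’s relations, not a derivation specific to this abstract):

```latex
E = h\nu
  \qquad \text{(Planck: frequency carries energy)}

m = \frac{E}{c^{2}}
  \qquad \text{(Einstein: energy carries mass, from } E = mc^{2}\text{)}

\therefore\quad m = \frac{h\nu}{c^{2}}
  \qquad \text{(frequency, an attribute of information, carries mass)}
```

Read this way, the slogan “matter = information” rests on the claim that a purely informational attribute (frequency) fixes physical attributes (energy and mass).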

The above point of view, “matter = information”, is similar to Wheeler’s “it from bit”, which represents a deep desire to unify matter and information. However, in our world “it” is very complicated. Most “it” are fermions, while “bit” is bosonic. Can fermionic “it” come from bosonic “bit”? The statement “matter = information” means that those wave equations can all come from qubits. In other words, we know that elementary particles (i.e. matter) are described by gauge fields and anti-commuting fields in a quantum field theory. Here we try to say that all those very different quantum fields can arise from qubits. Is this possible? What is the microscopic structure of space? What kind of microscopic structure can, at the same time, give rise to waves that satisfy the Maxwell equation, the Dirac/Weyl equation, and the Einstein equation?

According to Wen Xiaogang and others, since our space is dynamical medium, the simplest choice is to assume the space to be an ocean of qubits. Scientists have given such and ocean a formal name “qubit ether”. Then the matter, i.e. the elementary particles, are simply the waves, “bubbles” and other defects in the qubit ocean (or qubit ether). This is how “it from qubit” or “matter =information”. We need to find a qubit ether with a microscopic structure.

However, for a long time scientists did not know how waves satisfying the Maxwell equation or the Yang-Mills equation could emerge from any qubit ether. So, even though quantum theory strongly suggests “matter = information”, obtaining all elementary particles from an ocean of simple qubits was regarded as impossible by many and never became an active research effort.

So the key to understanding “matter = information” is to identify the microscopic structure of the qubit ether (which can be viewed as space). The microscopic structure of our space must be very rich, since our space can carry not only gravitational waves and electromagnetic waves but also electron waves, quark waves, gluon waves, and the waves that correspond to all the other elementary particles. Is such a qubit ether possible?

According to Wen Xiaogang, in condensed matter physics the discovery of fractional quantum Hall states brought us into a new world of highly entangled many-body systems. When strong entanglement becomes long-range entanglement, the systems possess a new kind of order, topological order, and represent new states of matter. Wen Xiaogang finds that the waves (the excitations) in a topologically ordered state can be very strange: they can be waves that satisfy the Maxwell equation, the Yang-Mills equation, or the Dirac/Weyl equation. So the impossible becomes possible: all elementary particles can emerge from a long-range entangled qubit ether.

This picture of “it from qubit” is very different from Wheeler’s “it from bit”, because here the observed elementary particles can only emerge from a long-range entangled qubit ether. The requirement of quantum entanglement implies that “it cannot come from bit”. In fact, “it from entangled qubits”.

This leads scientists to wonder whether photons, electrons, gravitons, etc., might also be collective motions of some underlying structure that fills the entire space. They may not have smaller parts. Looking for the smaller parts of photons, electrons, and gravitons to gain a deeper understanding of those elementary particles may not be the right approach. That is, the reductionist approach is unsuitable in the quantum world.

Here, scientists use a different approach, the emergence approach, to gain a deeper understanding of elementary particles. In the emergence approach, we view space as an ocean of qubits, i.e. a qubit ether. Empty space (the vacuum) corresponds to the ground state of the qubit ether, and the elementary particles (which form matter) correspond to excitations of the qubit ether. That is, in the emergence approach there is only one form of “matter”: the space (the vacuum) itself, which is formed by qubits. What we regard as matter are distortions and defects in this “space-matter”.

According to Wen Xiaogang, if particles/qubits form large oriented strings, and if those strings form a quantum liquid state, then the collective motions of such organized particles/qubits correspond to waves described by the Maxwell equation and the Dirac equation. The strings in the string liquid are free to join and cross each other, so the strings look more like a network. For this reason, the string liquid is actually a liquid of string-nets, called a string-net condensed state.

We see that qubits organized into a string-net liquid naturally explain both light and electrons (gauge interactions and Fermi statistics). In other words, string-net theory provides a way to unify light and electrons. So the fact that our vacuum contains both light and electrons may not be a mere accident. It may actually suggest that the vacuum is indeed a long-range entangled qubit state, whose order is described by a string-net liquid.

We would like to stress that the string-nets are formed by qubits. So in the string-net picture, both the Maxwell equation and the Dirac equation emerge from a local qubit model, as long as the qubits form a long-range entangled state (i.e. a string-net liquid). In other words, light and electrons are unified by the long-range entanglement of qubits. Information unifies matter!

Gauge fields are fluctuations of long-range entanglement, and the string-net is only a description of the patterns of long-range entanglement. According to Wen Xiaogang, using long-range entanglement and its string-net realization, we can obtain the simultaneous emergence of both gauge bosons (as string density waves) and fermions (as string ends) in any dimension and for any gauge group. This result gives us hope that maybe all elementary particles are emergent and can be unified using local qubit models. Thus, long-range entanglement offers us a new option for viewing our world: maybe our vacuum is a long-range entangled state. It is the pattern of the long-range entanglement in the vacuum that determines the content and the structure of the observed elementary particles.

Moreover, the string-net unification of gauge bosons and fermions is very different from that of superstring theory. In string-net theory, gauge bosons and fermions come from the qubits that form space, and “string-net” is simply the name that describes how the qubits are organized in the ground state. So a string-net is not a thing but a pattern of qubits. In string-net theory, the gauge bosons are waves of collective fluctuations of the string-nets, and a fermion corresponds to one end of a string. This is an emergence approach. In contrast, in superstring theory both gauge bosons and fermions come from strings: each corresponds to a small piece of string, and different vibrations of the small pieces of string give rise to different kinds of particles. Superstring theory is still a reductionist approach.

To summarize, topological order and long-range entanglement give rise to new states of quantum matter. Topological order, or more generally quantum order, has many new emergent phenomena, such as emergent gauge theory, fractional charge, etc.

Keywords: quantum revolution; quantum field theory; qubit

6:30-6:45 UTC

Sat 18th Sep

52. Information and Disinformation with their Boundaries and Interfaces

Gordana Dodig-Crnkovic[1,2]

1 Department of Computer Science and Engineering, Chalmers University of Technology and the University of Gothenburg, 40482 Gothenburg, Sweden;

2 School of Innovation, Design and Engineering, Mälardalen University, 721 23 Västerås, Sweden

Abstract:

This paper presents highlights from the workshop Boundaries of Disinformation, held at Chalmers University of Technology. It addresses the phenomenon of disinformation in its historical and current forms. Digitalization and hyperconnectivity have been identified as leading contemporary sources of disinformation. In the effort to counteract disinformation, it is important not to forget the need for balance between individual freedom of expression and the institutionalized societal thinking used to prevent the spreading of disinformation. The debate on this topic must involve the major stakeholders.

Keywords: information; disinformation; demarcation

1. Introduction

Last year a workshop was held at Chalmers University of Technology on the topic of Boundaries of Disinformation [1]. It gathered Swedish and international thinkers representing different approaches to a diversity of topics: Artificial Intelligence (Max Tegmark, physicist and AI researcher, MIT), Democracy (Daniel Lindvall, sociologist and independent researcher), Epistemology (Åsa Wikforss, philosopher, Stockholm University), Ethics (Gordana Dodig-Crnkovic, Chalmers University of Technology), Human-Computer Interaction (Wolfgang Hofkirchner, Vienna University of Technology) and Law (Chris Marsden, University of Sussex). The workshop was organized and moderated by Joshua Bronson (philosopher) and Susanne Stenberg (legal expert within R&D) from RISE, Research Institutes of Sweden.

The workshop topic was introduced by Bronson and Stenberg. Disinformation was presented as a significant problem of contemporary societies, bringing “the challenge of dealing with disinformation magnified by digitalization and increasing use and dependence on AI in more and more aspects of our society”.

Disinformation is typically defined as purposefully spreading false information to deceive, cause harm to, or disrupt an opponent. Disinformation can be generated and spread by individuals, groups, organizations, companies, or governments and equally disinformation can target any of these. According to Bronson and Stenberg, today we have effectively lowered the barrier for content creation and dissemination to such an extent that traditional gatekeepers, such as governments, universities, publishers, and media, are unable to steer information and content creation.

The aim of the workshop Boundaries of Disinformation was to map the edges of this problem.

In what follows I present my take on the problem of disinformation, its relation to information, and its role in society, having participated in the workshop and learned a great deal from my colleagues’ discussions of the various manifestations of disinformation and the possibilities of controlling it.

2. Phenomenon of Disinformation, Old and Omnipresent

Historical examples of disinformation are many, as illustrated by the following three ancient examples of “fake news”: the Donation of Constantine from the 8th century, a sanctioned surrender of the Hospitallers of the Knights Templar in the 1140s, and the story from 1782 when Benjamin Franklin created a fake issue of a Boston newspaper, as reported in [2]. War propaganda, political propaganda and counterpropaganda are classical cases of disinformation and misinformation.

We meet information and disinformation daily on micro- (individual), meso- (group), and macro- (global) scales.

3. What is New?

As never before, content production today has become simple and affordable to all and, consequently, it has slipped out of societal control.

“The idea that different people can get a piece of paper that states the same thing is powerful. It's equalizing. It's easy to trust the information in this case because accepting that a huge group of people are being misled is, well, unbelievable. There isn't a way to prevent fake news entirely, but it starts with critical reading and conversations.” [2] Not only has the general public – “ordinary people” – gained a voice that can reach around the globe, but politicians can also tweet directly to their followers, circumventing democratic gatekeepers.

Proposed automated means and Artificial Intelligence for fighting disinformation bring their own challenges, as presented in the overview of self-regulation, co-regulation, and classic regulatory responses currently adopted by social platforms and EU countries [3], which connects the technological, legal, and social dimensions.

4. Digitalization and Hyperconnectivity as Sources of Disinformation

New modes of online content production and communication have given rise to the phenomenon of ”informational bubbles” – isolated groups that share information and values independently of the rest of the world. Such groups can easily adopt extreme positions, as in the cases of anti-vaxxers or groups claiming that the Earth is flat.

Social networks, electronic web-based media, digital platforms, and web bots provide channels through which disinformation can develop uncontrollably, in dangerous forms and proportions.

New technologies make content creation and dissemination easy, bypassing the traditional gatekeeping mechanisms of publishers, (predefined) media formats, (existing) institutions, universities, and governments.

Joshua Bronson and Susanne Stenberg asked the following questions:

Can we establish new gatekeepers who would:

  • tell the difference between managing disinformation and censorship

  • establish the relationship between facts and disinformation

  • find out if and when information can be traced

  • establish the possibilities and limits of AI solutions to disinformation

  • increase media literacy in our radically changing digital landscape

  • help frame laws that protect freedom of expression while guarding against disinformation

5. Boundaries of Disinformation

There are a number of questions that must be answered to understand disinformation, its role, and the means of its production, such as:

  • Who decides what is “the case”/ “the fact”/ ”the truth”?

  • What is ”authoritative information” / “trustworthy information”?

  • Who are authorities and for what?

It is important, in the effort to identify and counteract disinformation, not to forget the need for a boundary/balance between individual freedom and societal institutionalized thinking. We need to better understand and formulate the relationship between authority, freedom, and responsibility in this context.

Moving towards a more truth-based society is about elucidating and explicating

  • Not only: “How?” (AI, computers and media literacy, etc.)

  • But also: “Why?” (philosophy, ethics, law, critical thinking, etc.) which is a question for democracies to decide.

6. Interfaces between Information and Disinformation

As Wu argues [4], the philosophy and science of information interact and converge within the sciences. Consequently, in parallel with the distinction between information and disinformation, there is the related question of demarcation between science and pseudoscience posed by Popper [5], which has caused much discussion among philosophers of science.

There are cases in the history of science in which false information/knowledge (false for us, here and now) has led to the production of true information/knowledge (true for us, here and now). The whole development of science can be seen as a refinement and replacement of inadequate knowledge by more adequate knowledge. A classic example of a mechanism leading to new discoveries and new insights is serendipity: making unexpected discoveries by accident.

The precondition for the discovery of new scientific ‘truths’ (where the term ‘true’ is used in its limited sense to mean ‘true to our best knowledge’) is not that we start with a critical mass of absolutely true information, but that in continuous interaction (a feedback loop) with the world we refine our set of (partial) truths. With good reason, truth is not an operative term for working scientists. Instead, it is the notion of correctness, which refers to a given frame of reference. Each change of the frame of reference (as in scientific revolutions, such as the shift from the geocentric to the heliocentric view) will lead to a different understanding of what is “true” and “correct”.

Interestingly, Christopher Columbus had, for the most part, incorrect information about his proposed journey to India. He never saw India, but he made a great discovery. The "discovery" of America was not incidental; it was the result of many favorable historical preconditions combined with both true and false information about the state of affairs. Similar discoveries occur constantly in science.

“Yet libraries are full of ‘false knowledge’”, as Floridi points out in his Afterword [6]. And yet we find them useful.

How worried should we be? Current debates about Covid-19 vaccines show how harmful disinformation (in this case, about the danger of vaccines) can be for society. What can be done to assure and maintain the correctness and trustworthiness of information? Whose responsibility is it to keep media free from misinformation, disinformation, and malinformation? It is very important that we discuss these questions here and now, broadly involving diverse stakeholders, while the rapid development of AI makes content production increasingly simple, accessible, and vastly abundant.

References

  1. http://www.gordana.se/work/PRESENTATIONS-files/20201202-ETHICS-of-DISINFORMATION.pdf

  2. Three Historical Examples of "Fake News". https://blogs.scientificamerican.com/anthropology-in-practice/three-historical-examples-of-fake-news/

  3. Automated tackling of disinformation. https://www.europarl.europa.eu/RegData/etudes/STUD/2019/624278/EPRS_STU(2019)624278_EN.pdf

  4. Wu, K. The Interaction and Convergence of the Philosophy and Science of Information. Philosophies 2016, 1(3), 228-244. https://doi.org/10.3390/philosophies1030228

  5. Popper, K. The Logic of Scientific Discovery (2nd ed.). London: Routledge. 2005

  6. Floridi, L. LIS as Applied Philosophy of Information: A Reappraisal. Library Trends 2004, 52(3), 658-665.

6:45-7:00 UTC

Sat 18th Sep

53. A Quantum Manifestation of Information

Tian’en Wang

Shanghai University

Abstract:

Quantum information studies still have a way to go. The “information quantum mechanics” of today refers to studies of quantum physics and has little to do with understanding information itself; yet starting from the viewpoint of information is indeed a key to deepening our understanding of quantum physics.

Since humans cannot directly perceive microscopic objects, the human being, as an advanced receiver, cannot avoid self-involvement at the same macro scale. In this vein, the maxim that “where there is intention there is information” no longer applies. Instead, through approaches akin to stepping back, we can perceive scenarios that were previously hard to perceive; and something unprecedented emerges in the quantum domain, namely: while the picture of matter/energy blurs, that of information gradually comes into focus – quantum information itself is a typical receptive relation.

From a macroscopic perspective, regarding the green of leaves as an objective fact is like covering your eyes with leaves so that you can see nothing else. It is not just the green: our concept of the shape of a leaf is also “perceived” by the human eye. If the leaf were perceived by a receptor that perceives differently from the human eye, then, considering that the space a leaf occupies is vastly larger than the sum of the spaces occupied by each of its atoms, the leaf might be perceived as something similar to how we perceive the solar system; the leaf could thus be regarded as something “hollow and spacious”, totally different from the character of the leaf in our concept – not to mention that the space an entity occupies is relative, and the positions referred to in observations are mutually dependent. In addition, space itself is something stipulated by the human receiver according to perception, which can easily be neglected because of its objectivity. In